January 21, 2020

Ubuntu developers

Ubuntu Blog: Ubuntu Server development summary – 21 January 2020

Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: cloud-init 19.4

In the very last days of 2019 we released version 19.4 of cloud-init. This new upstream release is currently available in the supported LTS releases of Ubuntu (Xenial and Bionic) and in the development version of the next LTS release, Focal Fossa. For a list of features released, see the full ChangeLog on GitHub. The 19.4 release was the last cloud-init release to support Python 2.7; new commits to cloud-init no longer need to maintain Python 2 support.

Spotlight: Ubuntu Pro for AWS

Ubuntu Pro is a premium Ubuntu image designed to provide the most comprehensive feature set for production environments running in the public cloud. Ubuntu Pro images based on Ubuntu 18.04 LTS (Bionic Beaver) are now available for AWS as an AMI through AWS Marketplace.

Spotlight: Speed up project bug triage with grease monkey

Bryce Harrington, on the Ubuntu Server team, has written up an excellent post on how to speed up bug triage responses with grease monkey. It simplifies the inclusion of frequent responses the team uses for various projects when maintaining bugs in Launchpad for multiple Ubuntu packages. Thanks Bryce!


cloud-init

  • Add Rootbox & HyperOne to list of cloud in README (#176) [Adam Dobrawy]
  • docs: add proposed SRU testing procedure (#167)
  • util: rename get_architecture to get_dpkg_architecture (#173)
  • Ensure util.get_architecture() runs only once (#172)
  • Only use gpart if it is the BSD gpart (#131) [Conrad Hoffmann]
  • freebsd: remove superflu exception mapping (#166) [Gonéri Le Bouder]
  • ssh_auth_key_fingerprints_disable test: fix capitalization (#165) [Paride Legovini]
  • util: move uptime’s else branch into its own boottime function (#53) [Igor Galić] (LP: #1853160)
  • workflows: add contributor license agreement checker (#155)
  • net: fix rendering of ‘static6’ in network config (#77) (LP: #1850988)
  • Make tests work with Python 3.8 (#139) [Conrad Hoffmann]
  • fixed minor bug with mkswap in cc_disk_setup.py (#143) [andreaf74]
  • freebsd: fix create_group() cmd (#146) [Gonéri Le Bouder]
  • doc: make apt_update example consistent (#154)
  • doc: add modules page toc with links (#153) (LP: #1852456)
  • Add support for the amazon variant in cloud.cfg.tmpl (#119) [Frederick Lefebvre]
  • ci: remove Python 2.7 from CI runs (#137)
  • modules: drop cc_snap_config config module (#134)
  • migrate-lp-user-to-github: ensure Launchpad repo exists (#136)
  • docs: add initial troubleshooting to FAQ (#104) [Joshua Powers]
  • doc: update cc_set_hostname frequency and descrip (#109) [Joshua Powers] (LP: #1827021)
  • freebsd: introduce the freebsd renderer (#61) [Gonéri Le Bouder]
  • cc_snappy: remove deprecated module (#127)
  • HACKING.rst: clarify that everyone needs to do the LP->GH dance (#130)
  • freebsd: cloudinit service requires devd (#132) [Gonéri Le Bouder]
  • cloud-init: fix capitalisation of SSH (#126)
  • doc: update cc_ssh clarify host and auth keys [Joshua Powers] (LP: #1827021)
  • ci: emit names of tests run in Travis (#120)
  • Release 19.4 (LP: #1856761)
  • rbxcloud: fix dsname in RbxCloud [Adam Dobrawy] (LP: #1855196)
  • tests: Add tests for value of dsname in datasources [Adam Dobrawy]
  • apport: Add RbxCloud ds [Adam Dobrawy]
  • docs: Updating index of datasources [Adam Dobrawy]
  • docs: Fix anchor of datasource_rbx [Adam Dobrawy]
  • settings: Add RbxCloud [Adam Dobrawy]
  • doc: specify _ over – in cloud config modules [Joshua Powers] (LP: #1293254)
  • tools: Detect python to use via env in migrate-lp-user-to-github [Adam Dobrawy]
  • Partially revert “fix unlocking method on FreeBSD” (#116)
  • tests: mock uid when running as root (#113) [Joshua Powers] (LP: #1856096)
  • cloudinit/netinfo: remove unused getgateway (#111)
  • docs: clear up apt config sections (#107) [Joshua Powers] (LP: #1832823)
  • doc: add kernel command line option to user data (#105) [Joshua Powers] (LP: #1846524)
  • config/cloud.cfg.d: update README [Joshua Powers] (LP: #1855006)
  • azure: avoid re-running cloud-init when instance-id is byte-swapped (#84) [AOhassan]
  • fix unlocking method on FreeBSD [Igor Galić] (LP: #1854594)
  • debian: add reference to the manpages [Joshua Powers]
  • ds_identify: if /sys is not available use dmidecode (#42) [Igor Galić] (LP: #1852442)
  • docs: add cloud-id manpage [Joshua Powers]
  • docs: add cloud-init-per manpage [Joshua Powers]
  • docs: add cloud-init manpage [Joshua Powers]
  • docs: add additional details to per-instance/once [Joshua Powers]
  • Merge pull request #96 from fred-lefebvre/master [Joshua Powers]
  • Update doc-requirements.txt [Joshua Powers]
  • doc-requirements: add missing dep [Joshua Powers]
  • Merge pull request #95 from powersj/docs/bugs [Joshua Powers]
  • dhcp: Support RedHat dhcp rfc3442 lease format for option 121 (#76) [Eric Lafontaine] (LP: #1850642)
  • one more [Joshua Powers]
  • Address OddBloke review [Joshua Powers]
  • network_state: handle empty v1 config (#45) (LP: #1852496)
  • docs: Add document on how to report bugs [Joshua Powers]
  • Add an Amazon distro in the redhat OS family [Frederick Lefebvre]
  • Merge pull request #94 from gaughen/patch-1 [Joshua Powers]
  • removed a couple of “the”s [gaughen]
  • docs: fix line length and remove highlighting [Joshua Powers]
  • docs: Add security.md to readthedocs [Joshua Powers]
  • Multiple file fix for AuthorizedKeysFile config (#60) [Eduardo Otubo]
  • Merge pull request #88 from OddBloke/travis [Joshua Powers]
  • Revert “travis: only run CI on pull requests”
  • doc: update links on README.md [Joshua Powers]
  • doc: Updates to wording of README.md [Joshua Powers]
  • Add security.md [Joshua Powers]
  • setup.py: Amazon Linux sets libexec to /usr/libexec (#52) [Frederick Lefebvre]
  • Fix linting failure in test_url_helper (#83) [Eric Lafontaine]
  • url_helper: read_file_or_url should pass headers param into readurl (#66) (LP: #1854084)
  • dmidecode: log result after stripping \n [Igor Galić]
  • cloud_tests: add azure platform support to integration tests [ahosmanmsft]
  • set_passwords: support for FreeBSD (#46) [Igor Galić]


curtin

  • vmtests: skip Focal deploying Centos70 ScsiBasic
  • vmtests: fix network mtu tests, separating ifupdown vs networkd
  • doc: Fix kexec documentation bug. [Mike Pontillo]
  • vmtests: Add Focal Fossa
  • centos: Add centos/rhel 8 support, enable UEFI Secure Boot [Lee Trager] (LP: #1788088)
  • Bump XFS /boot skip-by date out a while
  • vmtest: Fix a missing unset of OUTPUT_FSTAB
  • curthooks: handle s390x/aarch64 kernel install hooks (LP: #1856038)
  • clear-holders: handle arbitrary order of devices to clear
  • curthooks: only run update-initramfs in target once (LP: #1842264)
  • test_network_mtu: bump fixby date for MTU tests


git-ubuntu

The git-ubuntu snap package has been updated to 0.8.0 for the ‘beta’ channel.

The lion’s share of effort since 0.7.4 has gone towards bug fixing and general stabilization. Documentation and tests received a fair share of attention, as did the snap and setup.py packaging.

The importer now uses a sqlite3 database to store persistent information such as the pending package import status.
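A minimal sketch of this kind of persistent status tracking with sqlite3; the table name and columns here are illustrative, not git-ubuntu's actual database layout:

```python
import sqlite3

# Illustrative schema for tracking pending package imports (hypothetical,
# not git-ubuntu's real one).
conn = sqlite3.connect(":memory:")  # the importer would use an on-disk file
conn.execute(
    """CREATE TABLE IF NOT EXISTS import_status (
           package TEXT PRIMARY KEY,
           status  TEXT NOT NULL,          -- e.g. 'pending', 'done', 'failed'
           updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
       )"""
)
conn.execute(
    "INSERT OR REPLACE INTO import_status (package, status) VALUES (?, ?)",
    ("openssh", "pending"),
)
conn.commit()

# The backend can then resume exactly where it left off after a restart.
pending = [row[0] for row in conn.execute(
    "SELECT package FROM import_status WHERE status = 'pending'")]
print(pending)  # → ['openssh']
```

The advantage over in-memory state is that a crash or restart of the importer does not lose the queue of pending work.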

A new --only-request-new-imports-once option has been added for the backend source package importer. This makes the importer exit immediately after entering new imports into the database.

The --deconstruct option has been changed to --split, to prevent confusion that led people to assume --deconstruct meant the opposite of “reconstruct”.

Launchpad object fetches are cached using Python’s cachetools module, as a performance improvement that reduces the excessive number of API calls to the Launchpad service.
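The idea can be illustrated with a small time-to-live cache; this stand-in uses only the standard library rather than cachetools itself, and the fetch function is a hypothetical stub rather than the real Launchpad API:

```python
import time

# Tiny TTL cache memoising "Launchpad" fetches, illustrating the caching the
# post describes (the real importer uses the cachetools module).
_cache = {}
TTL = 600  # seconds to keep an object before re-fetching it

calls = 0  # counts how often the "remote API call" actually runs

def fetch_lp_object(name):
    global calls
    now = time.monotonic()
    hit = _cache.get(name)
    if hit is not None and now - hit[0] < TTL:
        return hit[1]                  # cache hit: no API round-trip
    calls += 1                         # cache miss: one "API call"
    obj = {"name": name}               # a real version would query Launchpad
    _cache[name] = (now, obj)
    return obj

fetch_lp_object("ubuntu/+source/openssh")
fetch_lp_object("ubuntu/+source/openssh")  # second lookup hits the cache
print(calls)  # → 1
```

Repeated lookups of the same Launchpad object within the TTL window cost nothing, which is where the reduction in API calls comes from.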

Finally, the backend service is now managed using a systemd watchdog daemon. Prior to this the service would need to be manually restarted whenever it hung or crashed, such as due to Launchpad service outages or network instabilities.
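The watchdog arrangement described above follows the standard systemd pattern; a hypothetical unit sketch (the service name and paths are illustrative, not git-ubuntu's actual packaging):

```ini
# Example unit showing the systemd watchdog pattern (names are hypothetical).
[Unit]
Description=git-ubuntu backend importer (example)

[Service]
ExecStart=/usr/bin/example-importer-daemon
# systemd restarts the service if it stops pinging within 60 seconds,
# e.g. after hanging on a Launchpad outage or network instability.
WatchdogSec=60
Restart=on-failure
# The daemon must periodically call sd_notify(0, "WATCHDOG=1").
Type=notify
```

With `WatchdogSec` set, a hung process is detected and restarted automatically instead of requiring a manual restart.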

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 3

Uploads Released to the Supported Releases

Total: 80

Uploads to the Development Release

Total: 129

21 January, 2020 07:00PM

Ubuntu Studio: New Website!

Ubuntu Studio has had the same website design for nearly 9 years. Today, that changed. We were approached by Shinta from Playmain, asking if they could contribute to the project by designing a new website theme for us. Today, after months of correspondence and collaboration, we are proud to unveil... Continue reading

21 January, 2020 06:48PM

Cumulus Linux

It’s a fact: choosing your own hardware means lower TCO

An essential part of open networking is the ability to choose your own hardware. This allows for customization of your network to suit business needs, and it can also dramatically reduce your Total Cost of Ownership (TCO). On average, open networking with Cumulus helps customers reduce their capital expenditures (CapEx) by about 45% and operational expenditures (OpEx) in the range of approximately 50% to 75%.

Choosing the right hardware is a big part of these savings. If you compare bare-metal networking equipment with a similar product from a proprietary networking vendor, you’ll quickly find that bare-metal hardware is much less expensive. One reason for this is competition between hardware vendors in the open networking space.

Open networking is a multi-vendor ecosystem. More than 100 switches are certified to work with Cumulus Linux; they’re manufactured by vendors such as Dell, HPE, Mellanox, Supermicro, and others. Unlike with proprietary switches, there’s no vendor lock-in creating a monopoly situation. In the open networking space, vendors compete for sales, and this keeps costs down.

Another factor in lowering costs is the degree of customization available when you have many products to choose from. Choosing your own hardware means buying what you need—and only what you need. With a large range of available products, you can find hardware at a size and scale that suits your current IT requirements. You don’t have to use a one-size-fits-all network design, and you don’t need to keep largely unused hardware sitting around “just in case.” When growth happens, you can add infrastructure as needed.

Efficient Staffing

While it’s important to avoid exorbitant upfront costs, CapEx isn’t the only factor in determining TCO. It’s important to consider how operating costs will affect the calculation. OpEx includes day-to-day costs of running the data center such as staffing, rent, and power. Of these, payroll is a significant expense.

Open networking with Cumulus Linux can help your IT staff work more efficiently, getting more done for each dollar spent on wages. Many IT practitioners are already familiar with Linux, and you may already be using Linux in your data center. There’s no need to recruit staff with specific vendor certifications (although, if you’re looking for IT training opportunities, Cumulus offers both free educational resources and competitively priced, hands-on courses).

If Linux is already in use in your data center, you can realize additional savings by leveraging existing automation and monitoring tools. This allows IT staff to manage switches just like they do their servers. A familiar and standardized operating environment increases efficiency.

Automate Like Netflix

Automation is another way to improve efficiency and reduce OpEx. Cumulus Linux uses open standards and open APIs, making it easily addressable by automation tools. A highly automated network allows a single staff member to manage many more switches than they could in a traditional environment. Some of Cumulus’s customers have gone from needing one operator for every 50 switches to managing 200 switches per operator. Network automation is how web-scale giants like Google and Netflix run their infrastructure in a cost-effective manner. Cumulus Linux makes it easy for anyone to realize web-scale savings, regardless of size.
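The staffing figures quoted above imply a fourfold reduction in operator headcount; a quick back-of-envelope check (the fleet size is an assumed example):

```python
import math

# Back-of-envelope check of the staffing claim: going from 1 operator per
# 50 switches to 1 per 200 quarters the required headcount.
switches = 1000                      # example fleet size (assumed)
before = math.ceil(switches / 50)    # traditional environment
after = math.ceil(switches / 200)    # highly automated environment
print(before, after)   # → 20 5
print(before / after)  # → 4.0
```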

Most importantly, there’s no need to sacrifice quality or features to realize savings. Open switches are built around “off-the-shelf” silicon, which means that they’re quick to incorporate new technological developments. New generations of chips become available every 18 to 24 months, and open networking vendors release products accordingly. In contrast, proprietary ASICs used by legacy vendors have longer production cycles, and features lag behind.

Are We Really Compatible?

As noted before, Cumulus Linux has been tested for compatibility and certified to work on more than 100 switches from different vendors. If there’s an issue, Cumulus Networks Global Support Services (GSS) is available 24/7 to help. GSS can identify the root cause of an issue and coordinate with hardware vendors as needed to solve the problem. Open networking comes with world-class support and services.

Organizations have diverse needs, and these needs change with time. The ability to select the right hardware for the task can be a competitive advantage. Choosing your own hardware can lower both CapEx and OpEx, and give you greater flexibility in designing a network to meet business requirements.

This ability to avoid vendor lock-in and choose your hardware is a large part of the reason Cumulus customers enjoy 75% greater efficiency, and reduce their TCO by up to 60%. Why not try it out for free, and see what Cumulus can do for you?

21 January, 2020 05:00PM by Katherine Gorham

Ubuntu developers

Ubuntu Blog: problem-oriented

Once upon a time, Heathkit was a big business.

Yeah, I know I’m dating myself. Meh.

Heathkit kits were great, but honestly, I had an issue with them: They were either too focused on (re-)teaching basic electronics, or they assumed the tinkerer was an EE, so they didn’t give a lot of consideration to explaining what you could do with them. I mean, my first kit was an alarm clock, and it had a snooze button and big, red numbers that kept me waking up all night for a couple weeks to look for the fire trucks. But in general, most of their really cool items — frequency analyzers, oscilloscopes, and so on — didn’t come with much in the way of “how can I use this device?”

That’s why I’m going to start taking the MAAS blog and doc in a little different direction going forward. I want to start using real-world examples and neat networking configurations and other problem-oriented efforts as my baseline for writing. Heck, I’d even like to try using MAAS to control my little Raspberry Pi farm, although that’s probably not the recommended configuration, and I’m not sure how PXE-booting would work yet. (But if I get it going, I promise to blog it.)

Don’t get me wrong; the MAAS doc is pretty solid. I just want to do more with it. As in not just update it for new versions, but make it come alive and show off what MAAS can do. I also want to pick up some of the mid-range applications and situations. MAAS is well-envisioned in large datacentres, and there are obviously hobbyists and small shops tinkering, but that’s not the bulk of people who could genuinely benefit from it. I want to dig into some of the middle-industry, small-to-medium-size possibilities.

Since I already know something about small hospital datacentres, having worked with them for about ten years, that might be a good place to start. Hospitals with 50 to 200 beds tend to have the same requirements as a full-size facility, but with the challenges of a somewhat smaller budget and lower IT headcount. It really feels like a good sample problem for MAAS.

Yeah, I’m gonna sleep on it for a week and tinker a little, so set your Heathkit alarm clock for next Tuesday and check back to see where it’s going. And turn over the other way, so you’re not staring at the bright-red, segmented LEDs all week.

21 January, 2020 04:22PM

Tails

Tails report for December, 2019


The following changes were introduced in Tails 4.1:


Documentation and website

User experience

Hot topics on our help desk

  1. Users reported various issues related to Nvidia and Radeon GPUs.

  2. Seahorse still cannot import OpenPGP keys (#17183), and has other open issues, such as encrypting files using only the most recent key and failing to connect to keyservers.

  3. Mac - kernel panic (fixed; it caused an emergency release), graphical distortions (there’s a workaround), and failure to boot.


  • We discussed our needs and desires regarding our upcoming GitLab instance and eventually decided to host it at immerda.ch.

    Thanks to immerda.ch for supporting us for so many years!

  • We taught our CI to reproducibly build automatic upgrades.



Past events

On-going discussions

  • Our Release Managers have been discussing how to adapt to Firefox’s 4-week release cycle.

Press and testimonials


All the website

  • fr: 88% (5418) strings translated, 1% strings fuzzy, 86% words translated
  • es: 51% (3137) strings translated, 4% strings fuzzy, 42% words translated
  • de: 46% (2832) strings translated, 7% strings fuzzy, 41% words translated
  • fa: 32% (1982) strings translated, 10% strings fuzzy, 33% words translated
  • it: 32% (1969) strings translated, 5% strings fuzzy, 28% words translated
  • pt: 25% (1553) strings translated, 7% strings fuzzy, 21% words translated

Total original words: 65286

Core pages of the website

  • fr: 96% (1705) strings translated, 1% strings fuzzy, 96% words translated
  • es: 80% (1417) strings translated, 9% strings fuzzy, 82% words translated
  • de: 71% (1252) strings translated, 12% strings fuzzy, 74% words translated
  • it: 63% (1112) strings translated, 17% strings fuzzy, 65% words translated
  • pt: 45% (798) strings translated, 13% strings fuzzy, 48% words translated
  • fa: 34% (615) strings translated, 13% strings fuzzy, 33% words translated

Total original words: 16439


  • Tails has been started more than 819 136 times this month. This makes 26 424 boots a day on average.

How do we know this?

21 January, 2020 03:18PM

Ubuntu developers

Ubuntu Blog: Anbox Cloud disrupts mobile user experience

With the launch of the iPhone in 2007, mobile users were introduced to the smartphone as we still know it today: touchscreen, cameras and app stores. The launch of Android spurred low-cost alternatives to the iPhone, bringing the smartphone to the masses. Popularisation and growth in app consumption drove demand for mobile broadband.

Smartphones, app stores and mobile broadband are the foundations of mobile UX today. However, we’ve been using mobile devices in the same way for over a decade now. With Anbox Cloud delivered by telcos, this is about to change.

What’s Anbox Cloud?

Anbox Cloud is a mobile cloud computing platform that containerises mobile workloads, using Android as a guest operating system. With Anbox Cloud, mobile applications can draw on boundless compute and storage capacity in the cloud. Graphics are streamed to clients running in any web browser, or wrapped into mobile or desktop applications.

Using Anbox Cloud, telecommunication providers can create disruptive mobile user experiences for their 4G, LTE and 5G mobile network customers. Let’s see how.


With Anbox Cloud, applications cease to be delivered as locally installed software binaries. Mobile apps become remotely streamed content. Streaming from the cloud frees apps from hardware compatibility constraints.

In a world where apps are streamed, mobile users have access to a much richer selection. As a consequence, apps will be discovered and consumed as seamlessly as media content currently is. Think of an experience akin to Netflix, Spotify or YouTube: recommender systems, subscriptions, advertising and all.

Anbox Cloud can be hosted within the cloud infrastructure of telco operators. This allows mobile operators to run their own branded distribution channels for apps, thereby breaking away from the Google-Apple duopoly of centralised app stores. Telco-owned app catalogs, delivered via Anbox Cloud, would open new avenues for innovative value-added services.

Cloud-augmented smartphones

Anbox Cloud gives the flexibility to offload compute, storage and energy intensive applications from mobile devices to hyperscale clouds. What’s more, any number of virtual devices can be instantiated on demand in the elastic cloud.

Offloading and elasticity are orchestrated to augment capability-constrained mobile devices. Any smartphone can be spun up into a hyper-phone, with several clones running in parallel in the cloud.

Through cloud augmentation of smartphones, telco operators will deliver traditionally device-dependent features from their cloud infrastructure. This will strengthen their position in mobile telecommunications ecosystems by reducing reliance on mobile OEMs for shaping user experience.

Consistent user experience will be accessible to any user regardless of the device they own. Besides consistency, user experience will become utterly enriched on any phone. Imagine users capable of turning any given smartphone into a gaming console, a workplace device, or even an action camera, at the push of a button, thanks to the cloud.

Democratising wearables and headsets

When it comes to AR/VR headsets and wearables (like smart glasses), first class performance is highly dependent on ultra-powerful hardware. Due to this constraint, highly performant wearables and headsets are neither mobility friendly nor power efficient yet. Most crucially, they are not affordable to the masses. However, the confluence of 5G and Anbox Cloud will change these circumstances.

Offloading graphically intensive processes to telco edge clouds through Anbox Cloud frees OEMs from the need to embed such capabilities in devices. This will drive down the hardware bill of materials (BOM) cost, while also easing portability.

5G compatible AR/VR headsets and wearables will, therefore, be more portable and power efficient. Affordability and usability will in turn open up new lines of revenues for telco operators, beyond mobile telephony.

Try out Anbox Cloud

Telco operators will be granted priority access to the Anbox Cloud demo service. If you are a mobile telco innovator, sign up today. Evaluation licences are available for companies that want to go one step further and develop a proof of concept. To accelerate your time to market, Canonical will be by your side with engineering support. Get in touch with us to learn more about our terms of commercialisation.

21 January, 2020 02:49PM

ArcheOS

360-degree video for speleoarchaeological explorations

Hello everybody,
this quick post is to publish a preliminary test we performed with 360-degree video during the second speleoarchaeological exploration of the natural cave called "Bus dela Spia" (here is the report of our first mission).
Personally, I think that 360-degree photos and videos have good potential to enhance tourist appreciation of "Hidden Cultural and Natural Heritage", granting accessibility to a wider public through VR and WebVR applications.
We will continue performing new tests with this technology (also at higher resolutions) in "extreme" archaeological contexts. I hope we will post some feedback soon; in the meantime, here is the video:

Have a nice day!

21 January, 2020 08:12AM by Luca Bezzi (noreply@blogger.com)

Ubuntu developers

Ubuntu Blog: Canonical introduces Anbox Cloud – scalable Android™ in the cloud

Canonical today announced Anbox Cloud, a platform that containerises workloads using Android¹ as a guest operating system, enabling enterprises to distribute applications from the cloud. Anbox Cloud allows enterprises and service providers to deliver mobile applications at scale, more securely and independently of a device’s capabilities. Use cases for Anbox Cloud include cloud gaming, enterprise workplace applications, software testing, and mobile device virtualisation.

The ability to offload compute, storage and energy-intensive applications from devices (x86 and Arm) to the cloud enables end-users to consume advanced workloads by streaming them directly to their device. Developers can deliver an on-demand application experience through a platform that provides more control over performance and infrastructure costs, with the flexibility to scale based on user demand.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, Director of Product at Canonical. “Enterprises are now empowered to deliver high performance, high density computing to any device remotely, with reduced power consumption and in an economical manner.”

With cloud gaming adoption on the rise, Anbox Cloud enables graphics- and memory-intensive mobile games to be scaled to vast numbers of users while retaining the responsiveness and ultra-low latency demanded by gamers. By removing the need to download a game locally on a device, Anbox Cloud creates an on-demand experience for gamers while providing a protected content distribution channel for game developers.

Anbox Cloud enables enterprises to accelerate their digital transformation initiatives by delivering workplace applications directly to employees’ devices, while maintaining the assurance of data privacy and compliance. Enterprises can reduce their internal application development costs by providing a single application that can be used across different form factors and operating systems.

Developers can also utilise Anbox Cloud as part of their application development process to emulate thousands of Android devices across different test scenarios and for integration in CI/CD pipelines.

Anbox Cloud can be hosted in the public cloud for infinite capacity, high reliability and elasticity or on a private cloud edge infrastructure, where low latency and data privacy are a priority. Public and private cloud service providers can integrate Anbox Cloud into their offering to enable the delivery of mobile applications in a PaaS or SaaS-model. Telecommunication providers can also create innovative value-added services based on virtualised mobile devices for their 4G, LTE and 5G mobile network customers.

Notes to editors:

Anbox Cloud is built on a range of Canonical technologies and runs Android on the Ubuntu 18.04 LTS kernel. Containerisation is provided by secure and isolated LXD system containers. LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines – depending on streaming quality and/or workload complexity. A higher container density drives scalability up and unit economics down. MAAS is utilised for remote infrastructure provisioning and Juju provides automation tooling for easy deployment, management and reduced operational costs. The Ubuntu Advantage support programme is included with Anbox Cloud, providing continuous support and security updates for up to ten years.
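The density claim above translates directly into unit economics; a back-of-envelope sketch, where the host cost and per-host VM density are assumed figures, not Canonical's:

```python
# Rough unit-economics illustration of the density claim (numbers assumed):
# doubling container density on the same host halves the cost per container.
host_cost_per_month = 400.0        # example server cost (assumed)
vm_density = 50                    # Android-in-VM instances per host (assumed)
lxd_density = vm_density * 2       # "at least twice the container density"
print(host_cost_per_month / vm_density)   # → 8.0 per container per month
print(host_cost_per_month / lxd_density)  # → 4.0 per container per month
```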

Canonical partners with Packet, the leading cloud computing infrastructure provider, as an option to deploy Anbox Cloud on-premise or at target edge locations in the world. To provide the best experience with Anbox Cloud, Canonical collaborates with Ampere (ARM) and Intel (x86) as silicon partners. These hardware options are optimised to provide the best density, GPU models and cost efficiency to shorten the time to market for customers building their services on top of Anbox Cloud.

Partner quotes:

“As the vast library of Android and Arm-native applications continues to grow, developers need proven systems that provide scalable capacity, reliable performance and deployment flexibility. The combination of Ampere’s Arm-based servers with a provisioned virtualisation solution like Canonical’s Anbox Cloud delivers the flexible, high-performance and secure infrastructure that developers need in order to deliver a better user experience for consumers.”

Jeff Wittich, SVP of Products at Ampere

“Canonical’s inclusion of the Intel Visual Cloud Accelerator Card – Render as part of their newly launched Anbox Cloud solution will enable the delivery of enhanced cloud and mobile gaming experiences on Android devices, supporting an emerging industry opportunity today, and for the upcoming 5G era.”

Lynn Comp, Vice President, Data Platforms Group and General manager of the Visual Cloud Division, Intel

“With Anbox Cloud, Canonical is bringing to market a disruptive product that is both powerful and easy to consume. As small, low-powered devices inundate our world, offloading applications to nearby cloud servers opens up a huge number of opportunities for efficiency, as well as new experiences. We’re excited to support the Anbox Cloud team as they grow alongside the worldwide rollout of 5G.”

Jacob Smith, Co-founder and CMO at Packet

For more information on Anbox Cloud, visit anbox-cloud.io or click here to download the joint whitepaper with Intel –  Cloud gaming for Android: Building a high performing and scalable platform.

About Canonical  

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

1. Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

21 January, 2020 08:00AM

Ubuntu Blog: Implementing an Android™ based cloud game streaming service with Anbox Cloud

From the outset, Anbox Cloud was developed with a variety of use cases in mind for running Android inside containers. Cloud gaming, more specifically for casual games as found on most users’ mobile devices, is the most prominent one and is growing in popularity. Enterprises are challenged to find a solution that can keep up with increasing user demand, provide a rich experience and keep costs affordable while shortening the time to market.

Anbox Cloud brings Android from mobile devices to the cloud. This enables service providers to deliver a large existing ecosystem of games to more users, regardless of their device or operating system. Existing games can be moved to Anbox Cloud with zero to minimal effort.

Canonical has built Anbox Cloud upon existing technologies that allow for a higher container density compared to traditional approaches, which helps to reduce the overall cost of building and operating a game streaming service. The cost structure of a casual game, based in the cloud, also shows that density is key for profitability margins. To achieve density optimisation, three factors must be considered: container density (CPU load, memory capacity and GPU capacity), profitability and user experience optimisation. Additional considerations include choosing the right hardware to match the target workload, intended rendering performance and the pricing sensitivity of gamers. Finding the optimal combination for these factors and adding a layer of automation is crucial to improve profitability margins and to meet SLAs.

To further address specific challenges in cloud gaming, Canonical collaborates with key silicon and cloud partners to build optimised hardware and cloud instance types. Cloud gaming places high demands on various hardware components, specifically GPUs, which provide the underlying foundation for every video streaming solution. Utilising the available hardware at the highest density for cost savings requires optimisation at every layer. Anbox Cloud specifically helps to get the maximum out of the available hardware capacity. It keeps track of the resources consumed by all launched containers and optimises placement of new containers based on available capacity and the resource requirements of specific containers.

Besides finding the right software and hardware platform, cloud gaming requires positioning the actual workload as close to the user as possible to reduce latency and ensure a consistent experience. To scale across different geographical regions, Anbox Cloud provides operational tooling and software components that simplify deployment without manual overhead and ensure users are automatically routed to their nearest location. Plugging individual regions dynamically into a control plane allows new regions to be added on the go without any downtime or manual intervention.

Anbox Cloud builds a high-density, easy-to-manage containerisation platform on top of the LXD container hypervisor, which helps to minimise time to market and reduce overall costs. It reflects Canonical’s deep expertise in cloud-native applications and minimises operational overhead in multiple ways. With the use of existing technologies from Canonical like Juju or MAAS, it provides a solid and proven platform which is easy to deploy and maintain. Combined with Canonical’s Ubuntu Advantage support program, an enterprise can ensure it gets long-term help whenever needed.

As differentiation is key to building a successful cloud gaming platform, Anbox Cloud provides a solid foundation which is extensible and fits many different use cases. For example, a custom streaming protocol can be integrated by writing a plug-in and hooking it, via the provided customisation hooks, into the containers which power Anbox Cloud. To make this process easy, Canonical provides an SDK, rich documentation with example plugins and engineering services to help with any development around Anbox Cloud.

In summary, Anbox Cloud provides a feature-rich, generic and solid foundation for building a state-of-the-art cloud gaming service, with optimal utilisation of the underlying hardware to deliver the best user experience while keeping operational costs low.

If you’re interested in learning more, please come and talk to us.

Android is a trademark of Google LLC. Anbox Cloud uses assets available through the Android Open Source Project.

21 January, 2020 08:00AM

January 20, 2020

The Fridge: Ubuntu Weekly Newsletter Issue 614

Welcome to the Ubuntu Weekly Newsletter, Issue 614 for the week of January 12 – 18, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

20 January, 2020 10:40PM

hackergotchi for Whonix


Intel GPU a Security Danger

@HulaHoop wrote:

CVE-2019-14615 Graphics Vulnerability, a.k.a. iGPU Leak

  • Enables remote websites to fingerprint the system more easily via WebGL (not enabled in Tor Browser, and not accessible in VMs, at least KVM ones)

  • Allows malicious code running locally to steal AES keys.

Intel’s fix absolutely destroys their GPU performance. You should really move away from Intel hardware.

Also: Intel’s VT-x is well known to be vulnerable (L1TF), and is currently impossible to mitigate when HT is enabled.

Posts: 5

Participants: 3

Read full topic

20 January, 2020 02:40PM by @HulaHoop

January 19, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Stuart Langridge: Number word sequences

I was idly musing about number sequences, and the Lychrel algorithm. If you don’t know about this, there’s a good Numberphile video on it: basically, take any number, reverse it, add the two, and if you get a palindrome stop, and if you don’t, keep doing it. So start with, say, 57, reverse to get 75, add them to get 57+75=132, which isn’t a palindrome, so do it again; reverse 132 to get 231, add to get 132+231=363, and that’s a palindrome, so stop. There are a bunch of interesting questions that can be asked about this process (which James Grime goes into in the video), among which are: does this always terminate? What’s the longest chain before termination? And so on. 196 famously hasn’t terminated so far and it’s been tried for several billion iterations.
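The reverse-and-add process is easy to sketch in Python; the `max_iters` cut-off below is an arbitrary choice added here so that suspected Lychrel numbers like 196 don't loop forever:

```python
def reverse_and_add(n):
    # e.g. 57 -> 57 + 75 = 132
    return n + int(str(n)[::-1])

def chain_to_palindrome(n, max_iters=100):
    """Iterate reverse-and-add until a palindrome appears, or give up."""
    steps = []
    for _ in range(max_iters):
        n = reverse_and_add(n)
        steps.append(n)
        if str(n) == str(n)[::-1]:
            return steps
    return steps  # never hit a palindrome: a Lychrel candidate

print(chain_to_palindrome(57))  # [132, 363]
```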

Anyway, I was thinking about another such iterative process. Take a number, express it in words, then add up the values of all the letters in the words, and do it again. So 1 becomes ONE, and ONE is 15, 14, 5 (O is the fifteenth letter of the alphabet, N the fourteenth, and so on), so we add 15+14+5 to get 34, which becomes THIRTY FOUR, and so on. (We skip spaces and dashes; just the letters.)

Take a complete example: let’s start with 4.

  • 4 -> FOUR -> 6+15+21+18 = 60
  • 60 -> SIXTY -> 19+9+24+20+25 = 97
  • 97 -> NINETY-SEVEN -> 14+9+14+5+20+25+19+5+22+5+14 = 152
  • 152 -> ONE HUNDRED AND FIFTY-TWO -> 15+14+5+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+20+23+15 = 251
  • 251 -> TWO HUNDRED AND FIFTY-ONE -> 20+23+15+8+21+14+4+18+5+4+1+14+4+6+9+6+20+25+15+14+5 = 251

and 251 is a fixed point: it becomes itself. So we stop there, because we’re now in an infinite loop.
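The whole process fits in a few lines of Python. The `to_words` converter below is a minimal hand-rolled British-English one covering 1 to 999, which is enough for this worked example; the real script uses the num2words library:

```python
ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen",
        "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
        "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]

def to_words(n):
    """British English words for 1..999, e.g. 152 -> 'one hundred and fifty-two'."""
    parts = []
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
        n %= 100
        if n:
            parts.append("and")
    if n >= 20:
        parts.append(TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else ""))
    elif n:
        parts.append(ONES[n])
    return " ".join(parts)

def letter_sum(n):
    # A=1 ... Z=26; spaces and dashes are skipped
    return sum(ord(c) - ord("a") + 1 for c in to_words(n) if c.isalpha())

def chain(n):
    """Iterate letter_sum until we revisit a number (a fixed point or loop)."""
    seen = []
    while n not in seen:
        seen.append(n)
        n = letter_sum(n)
    return seen + [n]

print(chain(4))  # [4, 60, 97, 152, 251, 251] -- 251 maps to itself
```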

A graph of this iterative process, starting at 4

Do all numbers eventually go into a loop? Do all numbers go into the same loop — that is, do they all end up at 251?

It’s hard to tell. (Well, it’s hard to tell for me. Some of you may see some easy way to prove this, in which case do let me know.) Me being me, I wrote a little Python programme to test this out (helped immeasurably by the Python 3 num2words library). As I discovered before, if you’re trying to pick out patterns in a big graph of numbers which all link to one another, it’s a lot easier to have graphviz draw you pretty pictures, so that’s what I did.

I’ve run numbers up to 5000 or so (after that I got a bit bored waiting for answers; it’s not recreational mathematics if I have to wait around, it’s a job for which I’m not getting paid). And it looks like numbers settle out into a tiny island which ends up at 251, a little island which ends up at 285, and a massive island which ends up at 259, all of which become themselves1. (You can see an image of the first 500 numbers and how they end up; extending that up to 5000 just makes the islands larger, it doesn’t create new islands… and the diagrams either get rather unwieldy or they get really big and they’re hard to display.2)

A graph of the first 500 numbers and their connections

I have a theory that (a) yes all numbers end up in a fixed point and (b) there probably aren’t any more fixed points. Warning: dubious mathematical assertions lie ahead.

There can’t be that many numbers that encode to themselves. This is both because I’ve run it up to 5000 and there aren’t, and because it just seems kinda unlikely and coincidental. So, we assume that the fixed points we have are most or all of the fixed points available. Now, every number has to end up somewhere; the process can’t just keep going forever. So, if you keep generating numbers, you’re pretty likely at some point to hit a number you’ve already hit, which ends up at one of the fixed points. And finally, the numbers-to-words process doesn’t grow as fast as actual numbers do. Once you’ve got over a certain limit, you’ll pretty much always end up generating a number smaller than oneself in the next iteration. The reason I think this is that adding more to numbers doesn’t make their word lengths all that much longer. Take, for example, the longest number (in words) up to 100,000, which is (among others) 73,373, or seventy-three thousand, three hundred and seventy-three. This is 47 characters long. Even if they were all Z, which they aren’t, it’d generate 47×26=1222, which is way less than 73,373. And adding lots more doesn’t help much: if we add a million to that number, we put one million on the front of it, which is only another 10 characters, or a maximum added value of 260. There’s no actual ceiling — numbers in words still grow without limit as the number itself grows — but it doesn’t grow anywhere near as fast as the number itself does. So the numbers generally get smaller as they iterate, until they get down below four hundred or so… and all of those numbers terminate in one of the three fixed points already outlined. So I think that all numbers will terminate thus.
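That back-of-the-envelope bound is easy to check directly; the string below is the number from the paragraph, with spaces, commas and dashes ignored just as in the letter-sum process:

```python
# Letters of the wordiest five-digit number, per the paragraph above
words = "seventy-three thousand, three hundred and seventy-three"
letters = [c for c in words if c.isalpha()]
print(len(letters))       # 47 letters
print(len(letters) * 26)  # 1222, far below 73,373 even if every letter were Z
```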

The obvious flaw with this argument is that it ought to apply to the reverse-and-add process above too and it doesn’t for 196 (and some others). So it’s possible that my approach will also make a Lychrel-ish number that may not terminate, but I don’t think it will; the argument above seems compelling.

You might be thinking: bloody English imperialist! What about les nombres, eh? Or die Zahlen? Did you check those? Mais oui, I checked (nice one num2words for supporting a zillion languages!) Same thing. There are different fixed points (French has one big island ending at 177, a very small island ending at 232, a 258 and 436 pair, and 222, which encodes to itself and which nothing else encodes to, for example. Not quite: see the update at the end; nothing changes about the maths, though. Images of French and German are available, and you can of course use the Python 3 script to make your own; run it as python3 numwords.py no for Norwegian, etc.) You may also be thinking “what about American English, eh? 101 is ONE HUNDRED ONE, not ONE HUNDRED AND ONE.” I have not tested this, partially because I think the above argument should still hold for it, partially because num2words doesn’t support it, and partially because that’s what you get for throwing a bunch of perfectly good tea into the ocean, but I don’t think it’d be hard to verify if someone wants to try it.

No earth-shattering revelations here, not that it matters anyway because I’m 43 and you can only win a Fields Medal if you’re under forty, but this was a fun little diversion.

Update: Minirop pointed out on Twitter that my code wasn’t correctly highlighting the “end” of a chain, which indeed it was not. I’ve poked the code, and the diagrams, to do this better; it’s apparent that both French and German have most numbers end up in a fairly large loop, rather than at one specific number. I don’t think this alters my argument for why this is likely to happen for all numbers (because a loop of numbers which all encode to one another is about as rare as a single number which encodes to itself, I’d guess), but maybe I haven’t thought about it enough!

  1. Well, 285 is part of a 285, 267, 313, 248, 284, 285 loop.
  2. This is also why the graphs use neato, which is much less pleasing a layout for this than the “tree”-style layout of dot, because the dot images end up being 32,767 pixels across and all is a disaster.

19 January, 2020 10:02PM

hackergotchi for ARMBIAN


hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Watch the next episode of Podcast Ubuntu Portugal live

With our constant goal of innovating, today, the day we record episode 74 of our favourite podcast, we are letting everyone who reads this post in time, and is available, watch the recording of the PUP live.

In the future this will be a patrons-only privilege (it’s $1, come on!), but for now everyone can take part.

With this initiative we want to achieve 3 goals:

  • Give more love to our patrons;
  • Increase our number of followers on YouTube;
  • Increase our number of patrons.

If, at this point, you still feel like watching, just open this link a few minutes before 22:00:

19 January, 2020 04:54PM

Stuart Langridge: The tiniest of Python templating engines

In someone else’s project (which they’ll doubtless tell you about themselves when it’s done) I needed a tiny Python templating engine. That is: I wanted to be able to say, here is a template string, please substitute a bunch of variables into it. Now, Python already does this, in about thirty different ways, and str.format or string.Template do most of it as built-in.

str.format works like this:

"My name is {name} and I am {age} years old".format(name="Stuart", age=43)

and string.Template like this:

import string

string.Template(
    "My name is $name and I am $age years old"
    ).safe_substitute(name="Stuart", age=43)

Both of which are pretty OK.

However, what they’re missing is loops; having more than one of a thing in your template, and looping over a list, substituting it each time. Every even fractionally-more-featureful templating system has this, whether Mustache or Jinja or whatever, of course, but I didn’t want another dependency. All I needed was str.format but with loops. So, I thought, I’ll write one, in about four lines of code, so I can just drop the function in to my Python file and then I’m good.

import re

def LoopTemplate(s, ctx):
    def loophandler(m):
        md = m.groupdict()
        return "".join([LoopTemplate(md["content"], val)
                        for val in ctx[md["var"]]])
    return re.sub(r"\{loop (?P<var>[^}]+)\}(?P<content>.*?)\{endloop\}",
                  loophandler, s, flags=re.DOTALL).format(**ctx)

And lo, twas so. So I can now do

LoopTemplate(
    "I am {name} and my imps' names are: {loop imps}{name}{endloop}",
    {
        "name": "Stuart",
        "imps": [
            {"name": "Pyweazle"}, {"name": "Grimthacket"}, {"name": "Hardebon"}
        ]
    }
)
and it all works. Not revolutionary, of course, but I was mildly pleased with myself.

Much internal debate about whether loophandler() should have been a lambda, but I eventually decided it was more confusing that way, on the grounds that it was confusing me and I knew what it was meant to be doing.

A brief explanation: re.sub lets you pass a function as the thing to replace with, rather than just a string. So we find all examples of {loop something}...{endloop} in the passed string, look up something in the “context”, or the dict of substitution variables you passed to LoopTemplate, and then we call LoopTemplate again, once per item in something (which is expected to be a list), and pass it the ... as its string and the next item in something as its context. So it all works. Of course, there’s no error handling or anything — if something isn’t present in the context, or if it’s not a list, or if you stray in any other way from the path of righteousness, it’ll incomprehensibly blow up. So don’t do that.
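The callable-replacement behaviour of re.sub described above can be seen in isolation. This toy pattern and its expand function are illustrative only, not part of the template engine:

```python
import re

def expand(m):
    # Named groups from the pattern arrive via m.groupdict(),
    # just as LoopTemplate reads "var" and "content".
    d = m.groupdict()
    return d["word"] * int(d["count"])

# Each match of "<count>x<word>" is replaced by expand()'s return value.
print(re.sub(r"(?P<count>\d+)x(?P<word>\w+)", expand, "3xab then 2xz"))
# ababab then zz
```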

19 January, 2020 10:30AM

hackergotchi for ArcheOS


Archaeological Project Manager, code online

Hello everyone,
this quick post is to let you know that we have finally uploaded some of the code of the Archaeological Project Manager we are developing, thanks to our friend Andres Reyes (an expert in NLP and virtual assistants). The current prototype is built on top of Amazon Alexa and you can find more info in the post I wrote some weeks ago. If you want to help us with the project or see the code, here is the direct link to GitLab.

The source code of APM (on GitLab)
Sorry for the delay, but these weeks we have all been very busy. Hopefully we will upload some updates soon. Stay tuned and have a nice day!

19 January, 2020 05:56AM by Luca Bezzi (noreply@blogger.com)

January 18, 2020

hackergotchi for rescatux


Rescatux 0.72-beta7 released


Rescatux 0.72-beta7 ISO (688 MB)
MD5SUM: d2b8b061b8956a1c909d8d7da822f0ef


This is another beta version of Rescatux. The last Rescatux beta was released in December 2019, about one month ago.

This new version has two major improvements. First, the startup wizard has an option to just use default values and skip all the questions. Second, the options which had not yet been reworked have now been reworked so that they properly handle devices, such as hard disks, not being found.

Rescatux 0.72 beta 7 with the improved startup wizard

What’s new on Rescatux

  • Startup wizard has an option to just use default values
  • New 2019 December background
  • Enable customizations thanks to optional make_common.custom file.

What’s new on Rescapp

  • Removed no longer needed sip module dependency
  • Completed handling of not-found partitions or files
  • Minor improvements

Known bugs

  • Documentation is lacking on many of the options.

18 January, 2020 11:33PM by adrian15

hackergotchi for SparkyLinux



There is a new tool available for Sparkers: Stremio

What is Stremio?

Stremio is a one-stop hub for video content aggregation. Discover, organize and watch video from all kinds of sources on any device that you own.
Movies, TV shows, series, live television or web channels like YouTube and Twitch.tv: you can find all of this on Stremio.

– Easily discover new movies, TV shows, series and channels to watch. Browse by category, genre, rating, recency, etc.
– All of your video content on one screen
– Organize your video library
– Keep track of everything you watch
– Enjoy your video content on a bigger screen
– Watch video from many different sources

Installation (amd64 only):
sudo apt update
sudo apt install stremio

or via APTus-> VideoPlayer.


The project page at GitHub: github.com/Stremio/stremio-shell
It is a free and open source application, released under the GNU General Public License Version 3.


18 January, 2020 03:45PM by pavroo

January 17, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Design and Web team summary – 17 January 2020

The second iteration of this year is the last one before our mid-cycle sprint next week.

Here’s a short summary of the work the squads in the Web & Design team completed in the last 2-week iteration.

Web, Ubuntu and Brand squad

Web is the squad that develops and maintains most of the brochure websites across Canonical; it underpins the toolsets and architecture of our projects, and maintains the CI and deployment of all the websites we run. The Brand squad is tasked with updating and managing the overall style of Canonical, Ubuntu and the many fantastic products we create, both online and offline.

New canonical.com website

Yesterday we released the new canonical.com website, which has been a few months in the making. The site is more succinct, consolidating the content into a single page, with clear, standout statements:

The largest piece of work was the new careers section, which provides a more interactive experience for discovering careers at Canonical:

Redesign of /download/server/thank-you

We’ve updated the thank-you page for downloading Ubuntu Server with a new form for signing up to our newsletter and also getting access to the CLI pro-tips 2020 cheatsheet.

451 Research: Kubernetes report

We’ve highlighted a new report from 451 Research on our homepage and with a dedicated page of its own.


The MAAS squad develops the UI for the maas project.

The maas-ui team was focused on two main areas this iteration – fixing up UI bugs for the upcoming 2.7 release and completing the first part of the work on importing the main machine listing data into the React machine listing component. In addition to that we spent a significant amount of time preparing for the upcoming sprint in South Africa, ensuring we have all the specifications documents we need to discuss with engineers and have prepared a presentation to inform everyone of the work we’ve done so far this cycle.


The JAAS squad develops the UI for the Charm Store and Juju GUI projects.

Controller view

The team worked on a first iteration of the Controller view for the new JAAS dashboard. This view is tailored for admins in particular, listing all the controllers under a given group or user.

‘Group by’ functionality

The team implemented the functionality of grouping the model list table of the JAAS dashboard by status (default), owner, and clouds and regions.

User testing

During our product sprint in South Africa we will be doing some user testing of the JAAS dashboard with internal users, before expanding the target group to customers and community users. The results will help us understand the prioritisation of the implementation and possible feature requests.

CharmHub POC

The Snapcraft team implemented the design of the detail page of the new CharmHub store, exploring different front-end solutions in order to optimise maintenance of both the current Snap store on Snapcraft.io and the new CharmHub.io.

UX and design explorations

The team explored different solutions for the graphs in the controller view of the JAAS dashboard, the side navigation and the table React component, working with the MAAS team on the definition of the patterns.


The Vanilla squad designs and maintains the design system and the Vanilla framework library, ensuring a consistent style throughout our web assets.

Multistage builds for docs.vanillaframework.io

We’ve been working on optimising our production builds recently. One of these optimisations is to use Docker’s BuildKit and multistage builds to both reduce image size and speed up subsequent builds.

This iteration we applied these enhancements to the build for docs.vanillaframework.io to improve the site’s release process.

Styling of the range input

Our existing Slider component was simply styling applied to the native HTML range input, so to keep consistency with the rest of the native form inputs we removed the need to use the p-slider class name. Any range input will now get Vanilla styling automatically.

This change will be live with the next version of Vanilla framework.

Encapsulating components

To make sure all of our components can be included and built independently from each other we started the work on encapsulating component styles, building them individually and making sure we have example pages for each individual component stylesheet.

This will allow us to make sure we don’t introduce any unnecessary dependencies between patterns in the future.


The Snapcraft team work closely with the snap store team to develop and maintain the snap store website.

Integrating automated builds into snapcraft.io

We want to gradually import functionality from build.snapcraft.io into snapcraft.io. We have added authentication with GitHub and given publishers the ability to link a GitHub repository with a snap; this is done through a call to the Launchpad API.

17 January, 2020 03:57PM

Ubuntu Blog: 5 key steps to take your IoT device to market

Bring your IoT device to market

IoT businesses are notoriously difficult to get off the ground. No matter how good your product or your team is, some of the biggest problems you will face are simply getting to market and maintaining your devices once they’re in the field. This webinar looks at how Canonical’s Brand Store product allows you to get to market while catering for long-term problems and the need to keep your product up to date in the future.

More specifically, this webinar looks at the common problems we see organisations facing on their way to getting an IoT device to market, and covers five key steps to solve them. Along the way we will dig into several case studies Canonical has done with various customers and partners to show you what has already been achieved with these solutions.

Watch the webinar

17 January, 2020 03:50PM

hackergotchi for Volumio


Volumio Primo Turns One!

Time flies so fast! The Volumio team can’t believe that a little more than a year ago we introduced you to Volumio Primo, the complete audiophile music player and streamer designed and manufactured by our team and partners. The Primo is, as we intend it, the complete package for the audio lover, providing elegance, simplicity and high quality, both outside and in.

Volumio Primo device

All photos courtesy of Lorenzo Chiccoli, follow him on Instagram


When we launched Primo, even though every feature was carefully thought out, it was more of an experiment to us than anything else. Our goal was to understand whether there was interest in Volumio-branded products. You can then imagine our surprise when we saw Primo’s first batch of 20 units sell out in less than 20 hours. Not even in our wildest dreams could we have imagined that.

And our surprise kept growing in the following months, when we could not keep pace with incoming orders (you might have noticed that Primo was regularly unavailable in the first months… luckily we now know how to plan production a little better…). Our first year of Primo has literally been a rollercoaster of emotions: the joy of seeing it so appreciated by customers, and the struggle of learning how to effectively distribute a product worldwide (while keeping customer support and satisfaction as high as we could).

There was so much to learn: from logistics management to customs and duties calculations, production planning, certifications, packaging design, supply chain and distribution agreements. Luckily, in the process we met and worked with several great partners, from production to logistics, retail distribution and legal advice, who helped us achieve what we wanted. So it’s fair to say that Primo would not have been possible without the help of our partners.

And most importantly, Primo’s biggest success is the satisfaction of our customers. So we want to say thanks to everyone who believed in us by getting, reviewing, trying and listening to Primo.

But enough talking from us. We think the best way to celebrate Primo’s first anniversary is to let those who tried it speak for it: we feel so proud of the reviews and comments. From the minimalist compact box design to the functionality of both hardware and software, the Primo has been reviewed as an all-in-one solution for the audiophile.

We received high praise from prominent members of the HiFi community, from John Darko saying that Volumio Primo is a “very good sound streamer both digitally and analog-wise and a nicely made product”, to Hans Beekhuysen saying the “Primo offers very good sound quality and build quality for the money.”

Audiophile streamer device - Volumio Primo

AVGuide.ch expressed that “The Volumio Primo is an excellent streaming/player software in a modest device with outstanding sound characteristics and a neat GUI with many possibilities”. Primo has been reviewed as an easy-to-use device: any music lover can get and use their Primo. According to Digiphonix, Volumio Primo is “an exciting and pleasant product, especially for a first time user of a networked streamer device, someone seeking a good music source that can grow with their audio gear acquisitions over time and also, for the avid listener that already makes use of lossless music files and streaming.”

The Volumio Primo was created to deliver three fundamental properties every audiophile should enjoy: sound quality, practicality and a wide range of features, all built on the best audio hardware combination, the ASUS Tinker Board S and the ESS ES9028Q2M DAC.

In this regard, AV Forums has described the Volumio Primo as “a seriously impressive product that comes enthusiastically recommended.” Likewise, HiFi Choice says that “When you consider the flexibility and outstanding sonic performance on offer, the Volumio Primo looks like a very attractive option at the price.”


Volumio Primo connectivity

It’s called Primo not only because of the Italian meaning “first”, being the first of its kind created under Volumio qualifications, but also because Primo means “of top quality or importance”. Volumio’s purpose is to give our community exactly that: the utmost experience.

We are deeply grateful to you, because with your help Volumio Primo came to life and now, according to all the reviews, it is one of the top HiFi music player streamers on the market. Needless to say, this gives us strong motivation to keep bringing new products to life (the next of which will be Motivo… ). We have many new products in the works and we are extremely excited for the future ahead of us.

So, once again: thanks to everyone who made it possible. Thanks to anyone who’s listening to music with Primo. Thanks to anyone who talked about it. Thanks to anyone who believed in it. Thanks to anyone who waited impatiently for it.

And… thanks to anyone who hasn’t got one yet but will do so now.

Happy birthday Primo!

The post Volumio Primo Turns One! appeared first on Volumio.

17 January, 2020 03:50PM by Monica Ferreira

hackergotchi for Freedombone


Mitigating Google Tracking

Epicyon now replaces YouTube links with invidio.us automatically. This doesn't eliminate Google's tracking, but reduces the amount of it such that by watching a video you're sending less data about yourself to Google. invidio.us is just an alternative free software interface to YouTube.

The history of these sorts of workarounds is that eventually the company finds some way to block the alternative interface, but for now at least this is a kind of practical harm reduction. The less data the surveillance companies get, the better.

17 January, 2020 10:10AM

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.18 LTS Beta (5.17.90) Available for Testing

Are you using Kubuntu 19.10 Eoan Ermine, our current Stable release? Or are you already running our development builds of the upcoming 20.04 LTS Focal Fossa?

We currently have Plasma 5.17.90 (Plasma 5.18 Beta) available in our Beta PPA for Kubuntu 19.10.

The 5.18 beta is also available in the main Ubuntu archive for the 20.04 development release, and can be found on our daily ISO images.

This is a Beta Plasma release, so testers should be aware that bugs and issues may exist.

If you are prepared to test, then…..

For 19.10 add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages (for example, sudo ppa-purge ppa:kubuntu-ppa/beta).

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], Telegram [2] or the mailing lists [3].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.16 or 5.17?
– General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

17 January, 2020 09:48AM

January 16, 2020

Podcast Ubuntu Portugal: Ep 73 – WSL por Nuno do Carmo (parte 1)

Episode 73 – WSL by Nuno do Carmo (part 1). 2 Ubuntus and 1 Windows walk into a bar and… This could be the start of yet another joke, but what actually happened was that 2 Ubuntus and 1 Windows walked into a podcast and started talking non-stop about WSL, and more besides, to the point that the conversation was cut short and will have to be continued in the next episode. You know the drill: listen, comment and share!

  • https://meta.wikimedia.org/wiki/WikiCon_Portugal
  • https://www.humblebundle.com/books/python-machine-learning-packt-books?partner=PUP
  • https://www.humblebundle.com/books/holiday-by-makecation-family-projects-books?partner=PUP
  • https://stackoverflow.com/questions/56979849/dbeaver-ssh-tunnel-invalid-private-key
  • https://fosdem.org
  • https://github.com/PixelsCamp/talks
  • https://pixels.camp/


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

16 January, 2020 10:45PM

hackergotchi for Cumulus Linux

Cumulus Linux

Kernel of Truth season 2 episode 15: 2019 retrospect and 2020 predictions

Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Castbox and Stitcher!

Click here for our previous episode.

In this episode, hosts Brian O’Sullivan and Roopa Prabhu are joined by Kernel of Truth podcast guest pros Pete Lumbis and Rama Darbha. The group looks back at 2019, discussing what they learned from the year and then move on to their 2020 predictions. Want a teaser? Automation was a hot topic in 2019 not just on our podcast but with our customers. It’s become less of a “nice to have” and more of a “need to have.” If you’re hungry for more 2019 retrospect and 2020 predictions, be sure to listen to this jam-packed podcast.

Guest Bios

Brian O’Sullivan: Brian currently heads Product Management for Cumulus Linux. For 15 or so years he’s held software Product Management positions at Juniper Networks as well as other smaller companies. Once he saw the change that was happening in the networking space, he decided to join Cumulus Networks to be a part of the open networking innovation. When not working, Brian is a voracious reader and has held a variety of jobs, including bartending in three countries and working as an extra in a German soap opera. You can find him on Twitter at @bosullivan00.

Roopa Prabhu: Roopa is Director of Engineering, Linux software at Cumulus Networks. At Cumulus, she and her team work on all things kernel networking and Linux system infrastructure. She loves working at Cumulus and with the Linux kernel networking and Debian communities. Her past experience includes Linux clusters, Ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at @__roopa.

Pete Lumbis: CCIE R&S #28677 and CCDE 2012::3, Pete is a Technical Marketing Engineer at Cumulus Networks. He helps customers build and design next generation, fully automated data centers. He can be found on Twitter at @PeteCCDE

Rama Darbha: Rama is a Senior Consulting Engineer at Cumulus Networks, helping customers and partners optimize their open networking strategy. Rama has an active CCIE #22804 and a Masters in Engineering and Management from Duke University. You can find him on LinkedIn here.

16 January, 2020 02:44PM by Katie Weaver

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian 2020 KDE is available

A new version of the Serbian GNU/Linux 2020 operating system, with the KDE graphical environment, is available for download. The visual design of the new release is dedicated to Serbian architecture. Serbian ships with the Cyrillic option by default, while Latin script, as well as the Ijekavian variant for both scripts, can be selected through the system settings. The distribution is based on the stable version of Debian (Buster), with the KDE graphical environment.

Serbian 2020, like the previous six releases, is intended for all users who want an operating system in the Serbian language. It is also intended as a possible choice for current users of proprietary operating systems, as well as for users who cannot configure everything themselves and who have so far used Linux distributions regarded as more user-friendly. Additional screenshots can be viewed here.

In addition to the usual programs that come with the KDE graphical environment, the new release includes a collection of programs that will allow users to carry out their tasks well. All preinstalled programs are translated into Serbian. Kernel 4.19.0-6 is used, and compared to the previous version, support for external devices has been improved and a few new applications have been added, so the current selection looks like this:

If you are new to Linux, the installation process is simple and takes less than 10 minutes, and here you can read how to prepare the installation media. The graphical interface of the installer is set to the Cyrillic option by default, while keyboard control will be in Latin script. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. After installation, Serbian will take up slightly over 5 GB, so when partitioning it would be advisable to allocate 12 to 15 GB for comfortable operation.


When the freshly installed system boots, read the included text document, which contains several tips. Most importantly, the keyboard layout is switched with the Ctrl+Shift shortcut, and the configured options are: la, ћи, en. Desktop effects are enabled by default and are triggered by pressing F10, F11 and F12. The following packages can be found and installed from our software repository: teamviewer, viber, veracrypt, deadbeef, dropbox, yandex-disk, master-pdf, megasync, etc.

Finally, thanks to all readers of these lines, to the users who have or will have Serbian on their computer, and to all media and individuals who have contributed to the popularisation of an operating system in the Serbian language. If anyone is interested in helping with promotion, banners for that purpose are also available.

16 January, 2020 09:20AM by DebianSrbija (noreply@blogger.com)

hackergotchi for SolydXK


Waterfox Downgrade

There were reports of Waterfox Classic and Waterfox Current version 2020.01 not starting. In a terminal, the user would be presented with this error: "version `GLIBC_2.30' not found".

We decided to downgrade both Waterfox Classic and Waterfox Current in our repository and wait until upstream fixes the issue.

If you encounter problems starting your Waterfox installation, you can downgrade with one of these commands:

apt install waterfox-classic=2019.12-4
apt install waterfox-current=2019.10-4
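Before downgrading, you can sanity-check which version string is older. A small sketch using `sort -V` (the installed version string here is hypothetical; for exact Debian semantics `dpkg --compare-versions` is the authoritative tool):

```shell
# Compare a (hypothetical) installed version against the pinned downgrade
# target; sort -V orders these Debian-style version strings correctly here.
installed="2020.01-1"
target="2019.12-4"
oldest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | head -n1)
if [ "$oldest" = "$target" ] && [ "$installed" != "$target" ]; then
    echo "downgrade needed"
fi
```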

You can follow the discussion here: https://forums.solydxk.com/viewtopic.php?f=7&t=7749

16 January, 2020 08:38AM

January 15, 2020

hackergotchi for Freedombone


Tempgraph yearly update

At the beginning of each new year I update the tempgraph data set to get the full data for the previous year. The overall picture of climate change is grim. For a while it was looking like the rate of temperature increase was slowing, but now it's evident that's not the case.

Global temperature anomalies 1960-2019

The slowing of the rate of increase correlates with the beginning of The Great Recession in 2008, then after about 2015 temperatures rise again. In the last 40 years average temperatures have risen by over one degree. If nothing changes then we can expect to be past two degrees by 2060 and at that point it's probably game over for human civilization as we know it. If there are positive feedback effects it could happen sooner. Two degrees might not sound like much, but these are averages and in a system as big as the planet it takes gigantic forces to move the average up or down.

15 January, 2020 06:59PM

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: KUserFeedback 0.9.90 Beta Release

KUserFeedback is a framework for collecting user feedback for applications via telemetry and surveys.

The library comes with an accompanying control and result UI tool.


Signed by Jonathan Riddell <jr@jriddell.org> 2D1D5B0588357787DE9EE225EC94D18F7F05997E

KUserFeedback as it will be used in Plasma 5.18 LTS

15 January, 2020 04:15PM

Dmitry Shachnev: Qt packages built with OpenGL ES support are now available

Some time ago, there was a thread on debian-devel where we discussed how to make Qt packages work on hardware that supports OpenGL ES, but not the desktop OpenGL.

My first proposal was to switch to OpenGL ES by default on ARM64, as that is the main affected architecture. After a lengthy discussion, it was decided to ship two versions of Qt packages instead, to support more (OpenGL variant, architecture) configurations.

So now I am announcing that we finally have the versions of Qt GUI and Qt Quick libraries that are built against OpenGL ES, and the release team helped us to rebuild the archive for compatibility with them. These packages are not co-installable together with the regular (desktop OpenGL) Qt packages, as they provide the same set of shared libraries. So most packages now have an alternative dependency like libqt5gui5 (>= 5.x) | libqt5gui5-gles (>= 5.x). Packages get such a dependency automatically if they are using ${shlibs:Depends}.
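As an illustration, the generated alternative dependency looks something like this in a binary package's control stanza (the package name and version number are hypothetical placeholders):

```
Package: some-qt-app
Depends: libqt5gui5 (>= 5.11.3) | libqt5gui5-gles (>= 5.11.3)
```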

These Qt packages will be mostly needed by ARM64 users, however they may be also useful on other architectures too. Note that armel and armhf are not affected, because there Qt was built against OpenGL ES from the very beginning. So far there are no plans to make two versions of Qt on these architectures, however we are open to bug reports.

To try that on your system (running Bullseye or Sid), just run this command:

# apt install libqt5gui5-gles libqt5quick5-gles

The other Qt submodule packages do not need a second variant, because they do not use any OpenGL API directly. Most of the Qt applications are installable with these packages. At the moment, Plasma is not installable because plasma-desktop FTBFS, but that will be fixed sooner or later.

One major missing thing is PyQt5. It is linking against some Qt helper functions that only exist for desktop OpenGL build, so we will probably need to build a special version of PyQt5 for OpenGL ES.

If you want to use any OpenGL ES specific API in your package, build it against qtbase5-gles-dev package instead of qtbase5-dev. There is no qtdeclarative5-gles-dev so far, however if you need it, please let us know.

In case you have any questions, please feel free to file a bug against one of the new packages, or contact us at the pkg-kde-talk mailing list.

15 January, 2020 02:55PM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Alien Labs 2019 Analysis of Threat Groups Molerats and APT-C-37

In 2019, several industry analyst reports confused the threat groups Molerats and APT-C-37 due to their similarity, and this has led to some confusion and inaccuracy of attribution. For example, both groups target the Middle East and North Africa region (with a special emphasis on Palestine territories). And, they both approach victims through the use of phishing emails that contain decoy documents (mostly in Arabic) and contain themes concerning the political situation in the area. To improve understanding of the differences and similarities of the two groups (as well as the links between them), we at Alien Labs™ are providing an analysis of their 2019 activity.

A recent spear-phishing document from Molerats

APT-C-37 Overview

APT-C-37, also known as Pat-Bear or the Syrian Electronic Army (SEA), was first seen in October 2015 targeting members of a terrorist organization. Since 2015, however, APT-C-37 has broadened their objectives to include government agencies, armed forces leadership, media...

Fernando Martinez Posted by:
Fernando Martinez

Read full post


15 January, 2020 02:00PM

hackergotchi for Qubes


XSA-312 does not affect the security of Qubes OS

The Xen Project has published Xen Security Advisory 312 (XSA-312). This XSA does not affect the security of Qubes OS, and no user action is necessary.

This XSA has been added to the XSA Tracker:


15 January, 2020 12:00AM

Qubes OS 4.0.3-rc1 has been released!

Shortly after the announcement of 4.0.2, a bug was discovered in the dom0 kernel included in that release. This was not a security-related bug, but rather a compatibility bug that would have presented installation difficulties for the majority of users. (Users on unaffected hardware were safe to continue using the release.) This bug has now been fixed, along with a few installer fixes, resulting in 4.0.3-rc1.

In keeping with standard semantic versioning, we’ve incremented the patch version number to reflect this latest fix, so 4.0.2 has become 4.0.3. This is the first release candidate (rc1) for 4.0.3, because we’d like to give the community an opportunity to test it before declaring it to be the stable release. However, the changes from 4.0.2 are minimal, and 4.0.2 itself was preceded by three release candidates, so we plan to keep the 4.0.3-rc1 testing period brief.

As with 4.0.2, 4.0.3-rc1 includes many updates over the initial 4.0 release, in particular:

  • All 4.0 dom0 updates to date
  • Fedora 30 TemplateVM
  • Debian 10 TemplateVM
  • Whonix 15 Gateway and Workstation TemplateVMs
  • Linux kernel 4.19 by default

Qubes 4.0.3-rc1 is available on the Downloads page.

What is a point release?

A point release does not designate a separate, new version of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.0) inclusive of all updates up to a certain point. Installing Qubes 4.0 and fully updating it results in the same system as installing Qubes 4.0.3.

What should I do?

If you installed Qubes 4.0, 4.0.1, or 4.0.2 and have fully updated, then your system is already equivalent to a Qubes 4.0.3 installation. No further action is required.

Regardless of your current OS, if you wish to install (or reinstall) Qubes 4.0 for any reason, then the 4.0.3 ISO makes this more convenient and secure, since it bundles all Qubes 4.0 updates to date.

Note: The Qubes 4.0.3 ISO will not fit on a single-layer DVD (for the technical details underlying this, please see issue #5367). Instead, we recommend copying the ISO onto a sufficiently large USB drive. However, if you would prefer to use optical media, we suggest selecting a dual-layer DVD or Blu-ray disc.

Thank you to all the release candidate users for testing this release and reporting issues!

Release candidate planning

If no major issues are discovered in 4.0.3-rc1, we expect the stable release of 4.0.3 to follow sometime next week. As usual, you can help by reporting any bugs you encounter.

15 January, 2020 12:00AM

Qubes Canary #22

We have published Qubes Canary #22. The text of this canary is reproduced below. This canary and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View Qubes Canary #22 in the qubes-secpack:


Learn about the qubes-secpack, including how to obtain, verify, and read it:
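
In practice, verification uses detached PGP signatures shipped alongside the canary in the qubes-secpack repository. A hedged sketch (the file paths and the signer suffix are assumptions about the repository layout; adjust them to what you actually find after cloning):

```shell
# Hedged sketch: verifying a canary's detached signature with GnuPG.
# The "canaries/..." paths and ".sig.marmarek" suffix are assumptions about
# the qubes-secpack layout; the signer's key must already be imported.
canary="canaries/canary-022-2020.txt"
sig="canaries/canary-022-2020.txt.sig.marmarek"
if [ -f "$canary" ] && [ -f "$sig" ]; then
    gpg --verify "$sig" "$canary"
else
    echo "canary files not found; clone qubes-secpack first"
fi
```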


View all past canaries:


                    ---===[ Qubes Canary #22 ]===---


The Qubes core developers who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is January 13, 2020.

2. There have been 56 Qubes Security Bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

    427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors to the software).
5. We plan to publish the next of these canary statements in the first
two weeks of April 2020. Special note should be taken if no new canary
is published by that time or if the list of statements changes without
plausible explanation.

Special announcements


Disclaimers and notes

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently
compromised.  This means that we assume NO trust in any of the servers
or services which host or provide any Qubes-related data, in
particular, software updates, source code repositories, and Qubes ISO
downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other
means, like blackmail or compromising the signers' laptops, to coerce
us to produce false declarations.

The news feeds quoted below (Proof of freshness) serves to demonstrate
that this canary could not have been created prior to the date stated.
It shows that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to
anybody. None of the signers should be ever held legally responsible
for any of the statements made here.

Proof of freshness

Mon, 13 Jan 2020 11:12:28 +0000

Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
The U.S. Versus Iran: A Dangerous New Era in the Middle East
Germany Plans to Repatriate Ebola Patients
Can Nuclear Power Offer a Way Out of the Climate Crisis?
Killing of Iran General Soleimani Akin to War Declaration
Dissendent Describes 'Cultural Genocide' Against Uighurs

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Seven Days in January: How Trump Pushed U.S. and Iran to the Brink of War
Desperate Residents Ignore Dangers of Philippine Volcano and Return Home
A New Home for French Socialists, on Paris’s Periphery
A Growing U.S. Base Made This Afghan Town. Now It’s Dying.
Iran Cracks Down as Protests Over Downing of Airliner Grow

Source: BBC News - World (https://feeds.bbci.co.uk/news/world/rss.xml)
Taal volcano: Lava spews as 'hazardous eruption' feared
Iran plane downing: Canadian PM promises 'justice' at memorial
Hevrin Khalaf: Death of a peacemaker
Retired Pope Benedict warns Francis against relaxing priestly celibacy rules
Egypt-Ethiopia row: The trouble over a giant Nile dam

Source: Reuters: World News (http://feeds.reuters.com/reuters/worldnews)
'Oust Uncle': Thailand's jog for dissent signals new breed of activists
Britain's royal showdown: queen hosts Meghan-Harry crisis talks
Iran protesters take to the streets in third day of demos over plane
Australian prime minister's approval rating goes up in flames
Thai elephants march in silence for Australian bushfires

Source: Blockchain.info


[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this
canary in the qubes-secpack.git repo, and (2) via digital signatures
on the corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures!

15 January, 2020 12:00AM

January 14, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: New Ubuntu Theme in Development for 20.04

Yaru is the user interface theme that has been used in Ubuntu since 18.10. The theme is what determines the colours, borders, shadows, size, and shape of individual elements on the screen.

Last week, the Yaru team visited London to plan the future of Yaru with members of Canonical’s Design and Ubuntu Desktop teams. I’d like to thank Carlo, Frederik, Mads and Stuart for travelling across Europe to collaborate with us at the Canonical offices.

Yaru in Ubuntu 19.10

Yaru in Ubuntu 19.10

The face of Ubuntu Desktop

Since Ubuntu 18.10, Yaru has been the face of Ubuntu Desktop; the most popular Linux based desktop operating system. Changing the Ubuntu Desktop default theme requires careful consideration and the work created by the Yaru team isn’t just limited to Ubuntu.

Yaru is available for Fedora users and for Arch users, too. Last October, Pop! OS rebased their theme on Yaru. We’ve also had requests for Yaru variants that use the colours of Linux Mint, Manjaro and the Ubuntu flavours.

Yaru Design Sprint 2020

The importance of branding

For most operating system vendors, having a distinctive look for the OS is important in establishing their brand. For example, one of the most obvious visual changes planned for Ubuntu 20.04 LTS is that check-boxes, radio buttons, and switches will change from green to Ubuntu aubergine. This will reduce the abundance of colours used overall, while still making it unmistakably Ubuntu.

Yaru - New and Old

Yaru before and after the design sprint.

Through our attendance at conferences such as GUADEC and Linux Application Summit, we’ve learned that some GNOME/GTK contributors develop using distributions other than Ubuntu. However, they want to ensure their applications render correctly for Ubuntu users without having to dual boot or maintain Ubuntu virtual machines. To help facilitate that, a community-contributed Flatpak of the Yaru theme will be made available to complement the packages of Yaru already available in the Fedora archive and the Arch Linux AUR.

To further minimise the likelihood of inconsistent application presentation when using Yaru, the Yaru team more closely aligned with the upstream Adwaita theme last year. With the introduction of GitHub actions, the Yaru team are now automatically receiving each upstream Adwaita change as a pull request on the Yaru repo. This helps reduce the delta between the themes and enables the Yaru developers to stay current.

Light, dark, and in between

Ten years ago, Ubuntu 10.04 LTS — with its Radiance and Ambiance themes — popularised the choice of having lighter and darker variations of the same default theme.

Radiance Theme

Ambiance Theme

The original theme for GNOME 3.0, Adwaita, was designed with the intent that there would be one theme with no variations. Since then, however, macOS and Windows have adopted a similar approach to Ubuntu, with light and dark choices.

With Ubuntu 20.04 LTS, we plan to go one step further. As well as the dark variation, and the standard version with light controls and dark headers, we will be introducing a third variation that is light throughout. We also plan to reintroduce the ability to switch between these variations within ‘Settings’, for which we have design work in progress.

Window Colour Switcher Mockup

Yaru variant switcher mockup

In future, we would also like these settings to switch the theme for shell elements, such as the top bar and notification bubbles. Achieving this without requiring a logout each time will require additional work in GNOME Shell – something we are investigating.

Folder icon exploration

We are experimenting with some alternative folder icons that aim to preserve the Ubuntu identity, while maintaining good contrast in both the light and dark variants of Yaru.

Light and Dark

Easier contribution

We have also planned some activities to make it easier for potential new contributors to get involved with the Yaru project. This includes better documentation describing the theme architecture, and making the build system support alternatively coloured variants, which we will pre-populate with a theme for each of the official Ubuntu flavours.

The Yaru team has also been regularly attending GNOME Design team meetings. Members of the Canonical Design and Desktop team will also join those meetings on a regular basis so we can better collaborate with upstream.

Not finished yet

In between mugs of tea on the roof of the Canonical offices in London, we achieved a great deal during the design sprint. But, we’re not finished yet and have identified a number of areas for improvement we’ll be working on in the weeks ahead. I hope you’ve enjoyed this little look at what we’re working towards for Ubuntu 20.04.

A cup of tea on the roof

14 January, 2020 09:00PM

hackergotchi for VyOS


VyOS Access subscriptions for documentation

Documentation, sadly, is a sour topic for many open source networking projects. Web frameworks and libraries, graphics software, sometimes even games have more detailed documentation than some of the widely used network software projects.

14 January, 2020 05:26PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Kubernetes: a secure, flexible and automated edge for IoT developers

Cloud native software such as containers and Kubernetes and IoT/edge are playing a prominent role in the digital transformation of enterprise organisations. They are particularly critical to DevOps teams that are focused on faster software releases and more efficient IT operations through collaboration and automation. Most cloud native software is open source which broadens the developer pool contributing and customising the software. This has led to streamlined versions of Kubernetes with low footprints which are suited for IoT/edge workloads.

Read this report from analyst firm, 451 Research, to learn more including:

  • How the maturity of IoT and the value it brings is underpinned by the right technology stack
  • The factors driving execution venues for IoT workloads
  • The requirements needed to maximise usage of Kubernetes at the edge

To view the report, fill out the form below.

14 January, 2020 04:48PM

Jonathan Riddell: Zanshin 0.5.71


We are happy and proud to announce the immediate availability of Zanshin 0.5.71.

This updates the code to work with current libraries and apps from Kontact.

The GPG signing key for the tar is
Jonathan Riddell with 0xEC94D18F7F05997E


14 January, 2020 03:37PM

Ubuntu Blog: Why you should upgrade Windows 7 to Ubuntu

Windows 7 has reached the end of its life. It will no longer receive security updates and Microsoft’s technical support will stop. Running an out-of-date OS can have serious potential risks. Fortunately, there are two simple ways to solve this problem: 1. Buy a new computer running another operating system, or 2. Install Linux on any computer you like. In this blog, we’re talking about the Linux option, specifically Ubuntu.

What is Ubuntu?

Ubuntu is an open-source operating system supported by Canonical, with millions of users and five years of support, for free. It runs on PCs, in the cloud and on “Internet of Things” devices. It can host thousands of applications, it is a platform with a global community of users backing it, and it is designed to be secure by default.


Upgrading from Windows to Ubuntu gives access to thousands of apps ready to install. The majority of these applications are free and open-source. You can do anything from photo editing to media streaming, from reading the news to messaging your friends. But crucially, you can continue using your old favourites too. Like:

Google Chrome

Moving from Windows, it’s more than likely that a lot of users will be used to Google Chrome. Ubuntu runs Firefox by default, but changing to Chrome is very simple. The roughly 56.1% of web users who were browsing with Google Chrome at the end of last year can happily move to Ubuntu. You can even move all of your bookmarks from Chrome on Windows 7 to Ubuntu.

To do so, click on the three dots at the top right of the browser. Move your mouse to “Bookmarks” and open “Bookmark manager” (this can also be reached with Ctrl+Shift+O). Next, select the bookmarks you want to keep, click the three dots at the top right of the page, and click “Export bookmarks”. Google Chrome will then save your bookmarks as an HTML file for you to keep. You can later use that file to import your bookmarks back into Chrome once Ubuntu is installed.
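Chrome’s exported file uses the old Netscape bookmark-file format. A small sketch of what it contains, and how you might count the saved links from a terminal (the file contents below are a made-up two-entry example):

```shell
# Chrome exports bookmarks in the Netscape bookmark-file format; this writes
# a minimal made-up example of that format, then counts the saved links.
cat > /tmp/bookmarks-example.html <<'EOF'
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<DL><p>
    <DT><A HREF="https://ubuntu.com/">Ubuntu</A>
    <DT><A HREF="https://kubuntu.org/">Kubuntu</A>
</DL><p>
EOF
# A quick sanity check before importing: how many links were exported?
grep -c '<DT><A HREF' /tmp/bookmarks-example.html
```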


If you’re moving from Windows 7, you’re going to want to keep your music. And given that Spotify is the most popular global audio streaming subscription service, with 248m users, it’s a good job Spotify is on Ubuntu too.


WordPress has a dedicated desktop client for Ubuntu. The app lets you manage WordPress sites, write and edit the design of your site without having to switch to browser tabs.


A free, open-source 3D creation application, for creating 3D-printed models, animated films, video games and more. It comes with an integrated game engine that can be used to develop and test video games.

and Skype

Skype is the most widely used cross-platform video calling app. It brings features like voice calls, video calls and desktop screen sharing to your computer. On Ubuntu you can continue skyping to your heart’s content.

With these apps, most PC users will be able to function as normal. You can continue to search the web, listen to music, watch films, talk with your friends and download new apps. Plus, you can discover thousands more designed and built by the community.

Getting new apps

For general users, there is still a preconception that Linux is complicated. But the technology you are already using has had roots in Linux for years: Chromebooks run Linux outright, and the Android operating system is based directly on Linux too. Plus, installing software on Ubuntu is actually easier than on Windows. On Ubuntu, users install apps using the Software Centre and the Snap Store. Both are similar to the Android and iOS app stores you are used to, but have been around much longer. Installing is just a case of clicking install, without having to click through Windows prompts asking to make changes to your computer.

Though to be clear, there are things that do not hold up. The two biggest stand-out differences at the moment are gaming and Microsoft Office. Gaming on Ubuntu is a work in progress that can be difficult unless you already know what you are doing. For example, installing Steam on Linux is really easy and there are lots of popular games available (Dota 2, Counter-Strike: Global Offensive, Hitman) to play. However, a lot of other popular games are not yet available to run natively. To play, they require work that could present a big challenge for the average user.

Microsoft Office is also not available on Ubuntu. It has become the go-to office suite for writing documents or creating presentations, but it is not the only option. There are alternatives that are worth trying, or that you might even prefer. The Linux answer to Microsoft Office is LibreOffice, a set of applications that aims to achieve the same things as Microsoft Office and comes pre-installed with every version of Ubuntu.

Web apps

There are some great online alternatives too: applications with the same functionality as Microsoft Office, available through Ubuntu, that do just as good a job. For example, Google’s office suite provides the same functions as Microsoft Office, online, for free. The basic tools are the same and your documents are more accessible; sharing and storing work is easier simply because it is hosted online. All online web apps are available on Ubuntu.


Open source means Ubuntu is built by people, for users. It is backed by Canonical, who provide extra services and support for large businesses using Ubuntu in their organisations. It is free for anyone in the world to use, anyone can contribute to it, and anyone can suggest or request new things. Even Microsoft is contributing in order to have their say. Open-source communities are famous for being passionate about their work and for being collaborative; they are open to everyone. Using Ubuntu is a step towards joining a global community and contributing your work to a bigger picture.

But people are what make a community great. The Ubuntu community is not just software people and computer people, but artists and photographers, entrepreneurs and inventors. People who contribute their own views and feedback to make apps and features the best they can be.


Going back to why the switch from Windows 7 is necessary: Ubuntu brings security. Every line of code is thoroughly reviewed and vetted by Canonical or a member of the community. Code isn’t accepted until it works as it’s supposed to and has been checked for vulnerabilities. There are full-time employees at Canonical actively looking for bugs and vulnerabilities.

Ubuntu is the most popular choice for an operating system in the cloud and within enterprises. The system you can download for your own computer is based on the same robust security principles that companies like Amazon and Google rely on for built-in security. It’s also popular on a smaller scale in devices like robots and home automation for the same important security reasons.

Documentation for Ubuntu’s security features is available online. For example, a feature known as AppArmor confines applications to limit the attack surface and restricts access to specific users more tightly. And a feature known as Livepatch allows security updates to be installed on your computer without restarting. If a bug is found in your system, the fix rolls out automatically without you needing to do anything. It’s done in the background to keep your computer secure by default.

How to get Ubuntu

There are several ways to get going with Ubuntu. Following this blog, there will soon be a series that will walk you through the stages of upgrading from Windows 7 to Ubuntu. There are three main options.

  1. You can install Ubuntu on a computer you already have. This can be difficult if you haven’t done it before but there are tutorials available and an upcoming blog series to walk you through how to do it.
  2. You can buy a new computer pre-installed with Ubuntu from one of Canonical’s partners. This blog post on Dell’s computers will point you in the right direction. Buying a pre-installed computer is the quickest and easiest way to get Ubuntu, but it can be expensive.
  3. You can also install Ubuntu in a virtual environment. This option might sound the most confusing if you’re new to Linux, but it is the most straightforward. It means installing Ubuntu on Linux, Windows or macOS inside an application that lets you access Ubuntu from your desktop. A simple walkthrough is available on the Ubuntu community wiki page.


To conclude: if you know anyone still running Windows 7 (a relative, a small business owner, or any other less-than-techy person in your life), let them know Windows 7 is soon going to leave their system exposed. There are a few options to take, one of which is Ubuntu: a Linux operating system that offers thousands of new apps to explore and most of the features you can get from Windows, for free. Ubuntu is well looked after by the community, with users across the globe, and by Canonical, who help to make it secure and function with an industry-leading level of reliability.

14 January, 2020 02:34PM

Freedombone


Epicyon and Spam Mitigation

I notice that the Pleroma project (another ActivityPub server) has been having trouble with spam, and there have also been earlier spam problems with Mastodon instances. They've mitigated it by having a captcha by default. Personally, I don't like captchas. I don't like them mainly because I can't solve them (the ones with heavily distorted text). As far as captcha systems are concerned I am a robot. Beep boop.

So how does Epicyon deal with spam?

In its design ActivityPub is quite similar to email, and that means it can potentially suffer from similar problems. There are a few ways that fediverse instances in the last couple of years have dealt with this.

The main one is HTTP signatures. Without getting into the details of HTTP signatures as a cryptographic mechanism, they basically give a reasonable assurance about which account a post is coming from when it gets delivered. But that on its own isn’t enough: an adversary can potentially generate arbitrary numbers of separate accounts at electronic speeds.
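The idea behind HTTP signatures can be illustrated with a much-simplified sketch. Real ActivityPub servers sign selected request headers with the sending account’s RSA private key; the shared-secret HMAC version below is only a stand-in to keep the example self-contained, and all names in it are illustrative:

```python
import hashlib
import hmac

# Simplified illustration of HTTP signatures: the sender signs a
# canonical string built from the request headers, and the receiver
# recomputes and compares it. (Real implementations use RSA keys
# published on the actor's profile, not a shared secret.)

def sign_headers(secret: bytes, headers: dict) -> str:
    # Canonicalise the headers into a deterministic signing string.
    signing_string = "\n".join(
        f"{name.lower()}: {value}" for name, value in sorted(headers.items())
    )
    return hmac.new(secret, signing_string.encode(), hashlib.sha256).hexdigest()

def verify_headers(secret: bytes, headers: dict, signature: str) -> bool:
    expected = sign_headers(secret, headers)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

headers = {"Host": "example.social", "Date": "Tue, 14 Jan 2020 11:48:00 GMT"}
sig = sign_headers(b"shared-secret", headers)
assert verify_headers(b"shared-secret", headers, sig)
assert not verify_headers(b"wrong-secret", headers, sig)
```

A tampered header or a forged sender fails verification, which is what lets an instance reject posts that don’t really come from the account they claim to.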

An additional mitigation commonly used has been registration limits. On a public instance you might open new registrations for a limited time or for a limited number of new accounts and then close it again and allow time for the newcomers to settle. The settling time tends to avoid admins becoming overwhelmed by newbie questions, trolls or spam floods. This seems to have worked quite well, and Epicyon also has this available. You can set registrations to be open and then also specify the maximum number of new registrations. By default new registrations are allowed and the maximum is set to 10. In a Freedombone installation with the Epicyon app installed new registrations are closed and only created via a command in the background when new members are added from the admin screen.

Epicyon also has quotas, with a maximum limit on the number of posts which can be received from an account or a domain per day. So if there’s a rogue instance sending out a lot of spam, or if one of your friends’ accounts gets hijacked, the maximum rate at which posts can arrive is contained.
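A per-account and per-domain quota of this kind can be sketched in a few lines of Python. This is a hypothetical illustration of the idea, not Epicyon’s actual implementation; the class name and limits are made up:

```python
from collections import defaultdict

# Illustrative sketch of a daily inbox quota: posts are counted per
# sending account and per sending domain, and rejected once either
# counter exceeds its limit. In a real server these counters would be
# reset every day.

class InboxQuota:
    def __init__(self, max_per_account=100, max_per_domain=500):
        self.max_per_account = max_per_account
        self.max_per_domain = max_per_domain
        self.account_counts = defaultdict(int)
        self.domain_counts = defaultdict(int)

    def allow(self, handle: str) -> bool:
        # handle looks like "user@domain"; a hijacked account hits the
        # account limit, a rogue instance hits the domain limit.
        domain = handle.split("@")[-1]
        if self.account_counts[handle] >= self.max_per_account:
            return False
        if self.domain_counts[domain] >= self.max_per_domain:
            return False
        self.account_counts[handle] += 1
        self.domain_counts[domain] += 1
        return True

quota = InboxQuota(max_per_account=2, max_per_domain=3)
assert quota.allow("alice@spam.example")
assert quota.allow("alice@spam.example")
assert not quota.allow("alice@spam.example")  # account limit hit
assert quota.allow("bob@spam.example")
assert not quota.allow("carol@spam.example")  # whole domain now capped
```

The domain-level counter is what contains a rogue instance: once its total for the day is spent, no account on it can deliver more posts.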

Then there is the infamous DDoS scenario. Suppose that there are a million bad instances out there on different domains and they all send one spam per day. In this case it's down to the firewall, and Freedombone only allows a limited number of simultaneous connections on the https port.

Epicyon also does things in a way which makes life difficult for spammers. As a general rule you only see posts from people that you're following. There is no public or federated timeline. And there is no relaying of posts going on either. To a large extent what you see is what you get, with no additional stuff from random accounts you're not interested in. So unless you are following a spam account they may have difficulty getting into your timeline. An extra feature which is off by default but which can be turned on if you need it is to only receive DMs from people that you are following.

It should also be said that Epicyon isn’t designed to run large public instances with thousands of accounts. It’s intended to support about ten accounts at the upper limit, for self-hosting or small groups. At large scale Epicyon would probably perform poorly, and this is another reason why it would be unattractive for use by spammers. A Small Tech approach has advantages which would otherwise become headaches for projects fixated upon scalability.

14 January, 2020 11:48AM

Tails


Tails 4.2.2 is out

This release is an emergency release to fix a critical security vulnerability in Tor Browser.


  • Update Tor Browser to 9.0.4.

    This fixes a critical vulnerability in the JavaScript JIT compiler of Firefox and Tor Browser.

    Mozilla is aware of targeted attacks in the wild abusing this vulnerability.

    This vulnerability only affects the standard security level of Tor Browser. The safer and safest security levels are not affected.

Fixed problems

  • Avoid a 2-minute delay when restarting after doing an automatic upgrade. (#17026)

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 4.2.2

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from 4.2 to 4.2.2.

    Users of Tails 4.0, 4.1, and 4.1.1 have to upgrade to 4.2 first and then to 4.2.2.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

All the data on this USB stick will be lost.

To download only

If you don't need installation or upgrade instructions, you can directly download Tails 4.2.2:

What's coming up?

Tails 4.3 is scheduled for February 11.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

14 January, 2020 11:00AM

Freedombone


Epicyon Scheduled Posts

In Epicyon you can now schedule posts to be delivered at some time in the future. This can be useful for creating reminders to yourself to do things (eg. don't forget the milk) by posting a DM to yourself in the future. It could be used to promote an event by scheduling information posts leading up to it. Or it could also be used to handle time zone issues where you'd like a post to be seen but the expected recipients may not be awake if you post it right now.

With this type of feature there is the potential for spam, so the number of posts which can be scheduled at any point in time is quite small. Also spammers would have much easier methods for generating and sending a lot of posts, and the signature checking tends to mitigate against the kinds of spamming which happens with email.
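Capping the schedule can be as simple as refusing new entries once an account’s queue is full. Here is a minimal sketch of that idea, with a made-up limit and function name rather than Epicyon’s actual code:

```python
# Illustrative cap on scheduled posts per account: once the queue is
# full, further scheduling attempts are rejected, which bounds how much
# future spam any single account could queue up.

MAX_SCHEDULED_POSTS = 8

def schedule_post(queue: list, post: dict) -> bool:
    """Add a post to the account's schedule unless the cap is reached."""
    if len(queue) >= MAX_SCHEDULED_POSTS:
        return False
    queue.append(post)
    return True

queue = []
for i in range(10):
    schedule_post(queue, {"content": f"reminder {i}",
                          "deliver_at": "2020-01-15T09:00Z"})
assert len(queue) == MAX_SCHEDULED_POSTS
```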

14 January, 2020 10:45AM

Ubuntu developers

Ubuntu Blog: How to launch IoT devices – Part 1: Why it takes so long

(This blog post is part of a 5 part series, titled “How to launch IoT devices”. It will cover the key choices and concerns when turning bright IoT ideas into a product in the market. Check out this white-paper on IoT app stores for background reading.)

You have a budget and a bright idea. How do you turn that into a revenue generating and market capturing product? How do you escape “Pilot Purgatory” and the volume of decisions that need to be made about IoT before day-0? 

This blog series distills the learnings from over 30 Canonical project summaries and case studies from IoT launches. You’ll learn about the key design decisions and choices to consider. The goal is to increase the velocity of your technical decision making. It will show how Ubuntu Core and its suite of products are designed to turn IoT launches into a 5-step journey that can take as little as two weeks.

Need for speed: How e-commerce launches fast

This decade, IoT will become a trillion-dollar industry. Currently it sits between $200 and $250 billion, so there is a lot of planned growth. Best of all, the projected value is still up for grabs by new players in the market, i.e. you.

Contrast this sentiment with McKinsey’s finding that 85% of IoT products are still in pilot a year after starting, and 25% take more than two years to release. To find out why, let’s compare IoT with another “modern” business model: e-commerce and retail.

A generic launch sequence
An e-commerce launch, positioned against generic launch sequence

Generally speaking*, the steps to launch a retail product are outlined above. The internet’s impact is that each stage is at best automated (Amazon, Sourcify, Stripe) and at worst commoditised (Alibaba, East-Asian manufacturing and wholesale, 3PL). The result is a de-risked, faster and cheaper retail product launch.

So why are so many IoT business ideas stuck in Pilot Purgatory?

Automated and Commoditised launch of IoT devices

To answer this question, I looked at Canonical’s internal database of projects to see how successful IoT devices are launched, as well as the problem statements customers came to us with. This covers 30+ business cases, project summaries and case studies.

A few things stood out as different from (but comparable to) e-commerce and retail, as seen below.

An IoT launch, positioned against generic launch sequence


A decision on what IoT hardware to use becomes a decision on what your entire software stack needs to be. Parts of that stack might not work together or with the operating system, apps or your current CI/CD while remaining reliable, robust and secure.

Best case scenario: Select hardware that is known to work with a full-featured IoT stack, now and in the future. 

Customer Interaction

Making a portal is not about finding a GUI that a human user/customer can easily access (as is the case with retail). IoT devices are typically unmanned for much of a device’s life. While fleet management vendors exist, it is difficult to get this to work with a) the hardware selected and b) the operating system and software stack selected. 

Best case scenario: Select a software distribution method that allows a device manager full control over what is running on a device, with minimal downtime and in-field engineering costs when apps change.

Value exchange

Write good software that solves a use-case, and integrate this so it works well with hardware and the operating system. This requires deep knowledge of embedded development and the hardware, operating system and kernel. 

Best case scenario: Leverage your current code base and DevOps while writing code that will definitely work on your device.


Finally, to ship a device, it needs to be configured specifically for each customer. Letting the customer configure a device is a large source of friction. Configuring a device before shipping reduces first-touch friction for the customer at the cost of engineering time.

Best case scenario: Provision a device in your factory, but without needing extensive engineering support and costs.


Many modern businesses have commoditised or automated steps which make taking a product to market faster and easier. IoT does not have this yet. To get the answer to these questions now, contact Canonical and read this white-paper on IoT app stores for a bit more background. 

The following blogs in this series will show you how to reach the “best case scenario” in all of the above steps. Next, we will look at selecting hardware. You should sign up to our IoT newsletter on the right hand side of this page to make sure you read it when it comes out.

*Specifically, you can start with Chinese wholesalers like Alibaba to get the raw materials to sell. Amazon provides access to 250+ million customers with a storefront, or Shopify can be used by the DIY enthusiast. Then, East-Asian manufacturing can be leveraged at any scale, for example by using Sourcify. This adds value to raw materials and turns them into a unique product, while Stripe, PayPal, or banks take payments. Finally, third-party logistics gets the product from warehouse to consumer.

14 January, 2020 09:47AM

January 13, 2020

The Fridge: Ubuntu Weekly Newsletter Issue 613

Welcome to the Ubuntu Weekly Newsletter, Issue 613 for the week of January 5 – 11, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

13 January, 2020 09:35PM

January 12, 2020

Kubuntu General News: Kubuntu 19.04 reaches end of life

Kubuntu 19.04 Disco Dingo was released on April 18, 2019 with 9 months of support. As of January 23, 2020, 19.04 reaches ‘end of life’. No more package updates will be accepted for 19.04, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 19.10 Eoan Ermine continues to be supported, receiving security and high-impact bugfix updates until July 2020.

Users of 19.04 can follow the Kubuntu 19.04 to 19.10 Upgrade [2] instructions.

Should for some reason your upgrade be delayed, and you find that the 19.04 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL Upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 19.04 Disco Dingo.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2020-January/000252.html
[2] – https://help.ubuntu.com/community/EoanUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

12 January, 2020 11:23PM

Lubuntu Blog: Lubuntu 19.04 End of Life and Current Support Statuses

Lubuntu 19.04 (Disco Dingo) will reach End of Life on Thursday, January 23, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 19.10 as soon as possible if you are still running 19.04. After January 23rd, the only supported releases […]

12 January, 2020 06:50PM

SparkyLinux


Sparky 5.10

A quarterly update of the live/install media of Sparky 5.10 “Nibiru”, the stable line, is out. This release is based on Debian 10 “Buster”.

– the base system has been upgraded from Debian stable repos as of January 10, 2020
– Linux kernel 4.19.67-2+deb10u1 LTS (PC)
– Linux kernel 4.19.75-v7+ (ARMHF)
– Chromium web browser changed to Firefox-ESR (ARMHF)
– small bug fixes and small improvements

System reinstallation is not required; if you have Sparky 5.9 installed, perform a full system upgrade:
sudo apt update
sudo apt full-upgrade

or via the System Upgrade tool.

Sparky 5 is available in the following flavors:
– amd64 & i686: LXQt, Xfce, MinimalGUI (Openbox) & MinimalCLI (text mode)
– armhf: Openbox & CLI (text mode)

New iso/img images of the stable line can be downloaded from the download/stable page.

12 January, 2020 12:45PM by pavroo

January 11, 2020

Freedombone


Freedombone on Rock64

There is now a Freedombone image for the Rock64 single board computer. They're fairly cheap and sufficiently powerful that I've been using one of these as a desktop machine for the last year without any major problems. The Rock64 has an A53 processor which doesn't do speculative execution and so is not vulnerable to an entire category of possible security problems.

There are two images available here. freedombone-main-rock64-arm64.img.xz is the clearnet version and freedombone-onion-rock64-arm64.img.xz is the onion version. It's recommended that you install to an SSD and then connect it to the USB3 port with a USB3 to SATA adapter cable. You will also need to install this boot utility which changes the boot order so that the Rock64 can boot from USB.

If you want to run a Matrix homeserver or NextCloud on one of these it's recommended to use the 2GB or 4GB RAM version.

11 January, 2020 08:34PM

Ubuntu developers

Costales: Podcast Ubuntu Y Otras Hierbas S04E03: Privacidad en la Red y entrevista Paco Molinero por traducciones Ubuntu

Fernando Lanero, Paco Molinero and Marcos Costales analyse privacy on the network of networks: the Internet. We also interview Paco Molinero as the leader of the group translating Ubuntu into Spanish on Launchpad (translation URL).

Listen to us on:

11 January, 2020 03:56PM

January 10, 2020

Ubuntu Blog: The State of Robotics – Robotics Over the Holidays

Canonical closes for the holidays, but robots just get more festive. Roboticists seem to feel the festive spirit, and it turns their projects into festive robots. The Ubuntu robotics team isn’t quite ready to let go of the festive cheer, so we’d like to share with you some of our favourite projects that we saw over the holidays. As ever, if you want us to talk about what you’re doing, send an email to robotics.community@canonical.com and let’s talk. Next month we will be back to our usual programming; for now, take a look at these!

Pepper the Robot’s Christmas Wish

The Autonomous Systems Lab at ETH Zurich put together what should really be described as a short film, showing off their robots and granting the wish of their poor robot, Pepper. The lab is dedicated to the creation of robots and intelligent systems able to autonomously operate in complex and diverse environments. Their primary interests lie in mechatronic design, control of systems, novel robot concepts and festive robotic short films. (One of those might not be accurate.)

Santa’s Modular Helper

Robodev’s main aim in this video is to appeal to Santa’s elves. The poor elves have been working away in the North Pole since Coca-Cola came up with the idea. With Robodev’s modular designs, the elves can finally take some time off. The overall system consists of intelligent mechatronic modules and a software assistant. The combination enables automated solutions that are easy to handle and quickly adaptable.

FZI Christmas Robots

The FZI Research Center for Information Technology is a non-profit institution for applied research in information technology and technology transfer. Based in Germany, they are tasked with providing businesses and public institutions with the latest findings in IT. This video shows off their robots picking, procuring and decorating a tree to a festive track.

Norlab Robots Playing in the Snow

Norlab, or the Northern Robotics Laboratory, gave their robots some time off for the festive season. Of course, this only meant they made their way outside and rolled around in the snow. Still, maybe next year they’ll build a snowman. Norlab specialises in mobile and autonomous systems working in northern or difficult conditions, investigating new challenges related to navigation algorithms for mobile robots in real-life conditions. Their current focus is on localization algorithms designed for laser sensors and 3D reconstruction of the environment.

ABB’s Bloomingdale Festive Robotics

ABB took it all to another level. The videos so far have shown robots taking a break or doing some favours; ABB made their robots really work for their holidays. Joining up with Bloomingdale’s, they had their robots working with humans in a collaborative festive robot initiative, with twelve ABB robots interacting with visitors to Bloomingdale’s holiday celebration at their 59th Street flagship store. ABB’s robots were the main attraction in three of Bloomingdale’s twelve holiday window displays. They’re doing a great job of making robotics more retail friendly.


The ROS-Industrial Conference in Stuttgart took place at the beginning of December: a place where roboticists and organisations working with ROS came together to present their work in the industrial robotics space. There were plenty of robots being shown off, lots of food and some really engaging talks that will be released soon. Canonical sent Rhys, the robotics product manager, to give a presentation entitled “How to maintain a robot that outlives its support.” You can check out a summary of events and a discussion of the whole thing in the blog post.


We love the holidays. So many passion projects come to the forefront of people’s lives, the internet lights up with festivity and robot parts appear under trees everywhere. In January the robotics team is back to work as usual, so expect updates on that in February. In the meantime, if you have passion projects or robotics projects you’d like us to highlight in our next blog, send a summary to robotics.community@canonical.com and let’s talk. Happy 2020, all.

10 January, 2020 06:02PM

Volumio


The Volumio Driver for ELAN Home Control System is Finally Here!

Ladies and gentlemen, let your 2020 start with more great news. As you know, we believe that Volumio should allow you to listen to your favorite music in the way you prefer, from the control point you prefer. And on this matter, we were missing something big: Home Automation Systems. Until now.

That’s why we partnered with Innovo, a world-class company that develops home automation drivers and apps (make sure to follow them on Instagram @inn.ovo) to create the Volumio ELAN Driver, a driver you can integrate with ELAN Smart Home Control & Automation System, and control your Volumio from there. All your house managed and controlled in one system, and now your favorite Hi-Fi music player as well.

This is the first of a series of Drivers coming for the most important Home Automation Systems. Crestron, Control4 and RTI will follow later this year.

But first, if you haven’t heard about it, what is a home control and automation system? It basically means having a smart home, where you can control and manage pretty much anything in your house, from small things such as windows and lighting to all your entertainment devices as well as home security.


ELAN Home Control System

ELAN is one of the top companies providing intelligent and personalized home automation systems that keep you connected to your home through control remotes, touch panels and keypads. If you are away from home, you can control everything from your own tablet or mobile phone and, on top of that, it’s easy to use.

Your music, movies, sports and TV are at your command via a sleek remote, tablet or convenient touch pad in any room. Your security system, climate, lights, garage door, and more are all under your control from your smart devices in any location. The ELAN Entertainment and Control Smart Home System brings this power and convenience to you—personalized to your lifestyle. – ELAN Home Control System

Volumio can now be added as one of the top features you can have through your ELAN controller with the new Volumio ELAN driver.

Volumio ELAN Driver

While there are many advantages to adding the Volumio driver to your ELAN controller, the most important to mention is that our Hi-Fi system is compatible with ELAN’s control system, meaning you can listen to all your music with all the attributes MyVolumio offers, including the hi-res streaming services TIDAL & Qobuz.

You can also easily manage your Volumio Primo with the Volumio driver for your ELAN controller system. With a user interface that is nice and simple to use, you can listen to great music at home and have full control of your Volumio Primo and all its features, from browsing and navigation to playback functions.

Now your Volumio player can integrate smartly with an ELAN control system. The Volumio ELAN driver makes your life easier by using the standard ELAN media control to give you a familiar interface, as seen in the video above, providing an excellent music playback environment in any part of your home.

Now it’s time to add the new Volumio ELAN driver to your smart home system to have a full entertainment experience at home.

If you are a dealer, you can get your first copy for free, and purchase later ones at a substantial discount. Contact us with your dealership details to receive your promo code before purchase.

 Get your Volumio ELAN driver here


The post The Volumio Driver for ELAN Home Control System is Finally Here! appeared first on Volumio.

10 January, 2020 02:47PM by Monica Ferreira

Ubuntu developers

Ubuntu Blog: Infrastructure-as-Code mistakes and how to avoid them

Two industry trends point to a gap in the DevOps tooling chosen by many. Operations teams need more than an Infrastructure-as-Code approach; they need a complete model-driven operations mentality. Learn how Canonical has addressed these concerns by developing Juju, an open-source DevOps tool, and used it to create multiple world-leading products.

Two DevOps trends in 2020

Let’s consider the two trends and their impacts on DevOps:

Microservices push complexity from applications into operations

Switching to microservices involves breaking up large, complex applications into many smaller service units. In a ‘divide and conquer’ style, each unit becomes easier to deploy, scale-out, update and remove in isolation from the rest. Yet, microservices architectures risk pushing complexity onto operations, rather than removing it.

Adopting Infrastructure-as-Code tools removes some of this burden, but not all of it. Suddenly you need to consider network latency, message formats and connectivity. Each new service needs to connect to every other service. Without adequate tooling, upgrades and migrations can become time-consuming and brittle.

Container-based deployment can incur a performance penalty

A container-based deployment platform, such as Kubernetes, is a joy for many applications. But containers do not suit all use cases. Databases and other middleware are often bound by I/O constraints. These applications receive a performance penalty when they run as containers.

Most Infrastructure-as-Code tools tend to work well for hosting applications on a single platform.  But reality is more complex. They rarely allow you to deploy applications across platforms. And life is even more complicated when containers and virtual machines are combined.

Systems engineers and operations teams want to provide maximum performance, but they also want to provide agility to product engineering teams writing applications. Eventually, complexity will catch up to the business. A management decision to consolidate on tooling is likely to mean that databases are moved into Kubernetes clusters.  

Self-driving software is here

Canonical develops Juju to address these concerns. Its approach differs from other DevOps tools in the following ways:

  • Reduced complexity by allowing devops teams to work at a higher level of abstraction
  • Increased stability by enabling applications to dynamically respond to their deployment
  • Increased flexibility by decoupling configuration management from an application’s specific hosting environment

Juju is simple, secure devops tooling built to manage today’s complex applications wherever you run your software. Compute, storage, networking, service discovery and health monitoring come for free and work with Kubernetes, the cloud and the laptop. 

Juju allows your software infrastructure to maintain an always-optimal configuration. As your deployment changes, every application’s configuration operations are dynamically adjusted by charms. Charms are software packages that run alongside your applications. They encode business rules for adapting to environmental changes.
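The charm idea can be sketched as a small event-driven program: configuration logic that reacts to deployment events rather than being run once at install time. The sketch below is a self-contained illustration of that pattern, not the actual Juju charm API; every class, event and field name in it is made up.

```python
# Minimal event-driven sketch of the charm pattern: handlers encode
# business rules that re-derive application configuration whenever the
# deployment changes around it.

class Charm:
    def __init__(self):
        self.config = {}
        self.handlers = {}

    def observe(self, event: str, handler):
        """Register a handler for a named deployment event."""
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, **payload):
        """Fire an event, letting every registered handler react."""
        for handler in self.handlers.get(event, []):
            handler(**payload)

charm = Charm()

def on_database_joined(host, port):
    # Business rule: rewrite the app's connection settings whenever a
    # database unit joins the deployment.
    charm.config["db_uri"] = f"postgresql://{host}:{port}/app"

charm.observe("database-joined", on_database_joined)
charm.emit("database-joined", host="10.0.0.5", port=5432)
assert charm.config["db_uri"] == "postgresql://10.0.0.5:5432/app"
```

In a real charm the handlers would also restart services, write config files and report status, but the shape is the same: events in, configuration adjustments out.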

Using a model-driven mentality means raising the level of abstraction. Users of Juju quickly get used to a flexible, declarative syntax that is substrate-agnostic. Juju interacts with the infrastructure provider, but operations code remains the same across substrates. Focusing on creating a software model of your product’s infrastructure increases productivity and reduces complexity.

By automating infrastructure at a low level of abstraction, DevOps has bought the industry some breathing space. But that breathing space is running out.

Examples of Infrastructure-as-Code excellence

Using a single tool that speaks to multiple backends at the same time has several benefits: productivity increases and complexity remains constant. Here are two examples of Canonical’s products and services that rely on Juju to provide their competitive advantage:

  • Open Source MANO (OSM) is an open-source implementation of the ETSI NFV MANO stack supported by global telecommunications service providers like Telefonica, BT and Telenor. By providing a native framework for implementing service orchestration capabilities, Juju charms have been selected by the OSM community as the engine behind the Open Source MANO project.
  • Canonical’s managed services for OpenStack and Kubernetes provide the fastest and most cost-effective path to a private cloud and container orchestration platform. The cost reductions are enabled through enhanced tooling, namely Juju.

Learn more

The best place to learn about Juju is by following its getting started guide.

10 January, 2020 06:39AM

January 09, 2020

Podcast Ubuntu Portugal: Ep 72 – Tangerina Canivete!

Episode 72 – Tangerina Canivete! Back to our more traditional format, in this episode we picked some of the topics that caught our attention at the start of 2020. New snaps, phones and laptops, but not only that. All this and much more in this new episode of the fantastic Podcast Ubuntu Portugal. You know the drill: listen, comment and share!

  • https://bartongeorge.io/2020/01/01/introducing-the-2020-xps-13-developer-edition-this-one-goes-to-32/
  • https://fosdem.org
  • https://github.com/PixelsCamp/talks
  • https://pixels.camp/
  • https://protonmail.com/blog/protoncalendar-beta-announcement/
  • https://twitter.com/Mariogrip/status/1213319224057323521
  • https://twitter.com/thepine64/status/1210942451453698048
  • https://ubuntu.com/blog/the-ubuntu-20-04-lts-pre-release-survey
  • https://www.humblebundle.com/books/holiday-by-makecation-family-projects-books?partner=PUP
  • https://www.humblebundle.com/books/python-machine-learning-packt-books?partner=PUP
  • https://www.omgubuntu.co.uk/2020/01/standard-notes-snap-app
  • https://www.zdnet.com/article/chinese-hacker-group-caught-bypassing-2fa/


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us too.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

09 January, 2020 10:45PM

Ubuntu Blog: “MAAS. What number would you like?”

Enhanced MAAS Network Testing and Link Checking

I remember getting my own phone line when I was about thirteen years old, thanks to my first job at a grocery store. My friend Evan could tell you all about the sounds that happen before the called party’s phone starts to ring. He could tell you the routing, the set-up delay, and even warn you when the Jane Barbe intercept message was coming. He could also tell you, most of the time, what kind of equipment had routed the call (e.g., Crossbar). I traded a lot of pizza for just a little of his learning, a very handy skill to pick up.

With the upcoming release of MAAS 2.7, Metal-as-a-Service has basically gained that skill, to your benefit. One of the big features of MAAS 2.7 is network testing that identifies broken and slow network links when you try to commission machines. In this release, we offer specific link tests, as well as the ability to test networking in a configurable way, even using your own scripts.

First, MAAS tests whether links are connected or disconnected. Previously, when commissioning, you couldn’t detect unplugged cables. Now you can, sort of like knowing the change in telephone line noise when you’re about to hear that “your call did not go through.” You do have to take a couple of steps: first upgrade to 2.7, then run commissioning again to see whether a link is disconnected. But you no longer have to puzzle over what’s broken when this happens.

Second, MAAS makes sure you’re getting the most out of your link speed. As servers and hardware get faster — 10G, 40G, even 100G NICs — the chances increase that you might plug your 10G NIC into a 1G switch, for example. It’s just like when I would call my grandmother long-distance and could tell roughly how long it would be until the “ring” happened, just from the call set-up noises.

Previously, with MAAS, you’d be stuck with the speed of the slowest link, but there wasn’t a way to verify your link speed without recommissioning.  Depending on your physical hardware, that might still be an issue, but the MAAS UI can now warn you if your interface is connected to a link slower than what the interface supports.  And all information shown in the UI is available via the API, as well. You can still replace a slow switch without recommissioning.

Third, MAAS allows you to configure network connectivity testing in a number of ways. If you can’t connect to the rack controller, deployment can’t complete, the same way that Evan sometimes knew right away that a call wouldn’t go through (I never mastered that one). Now MAAS can check connectivity to the rack controller and warn you if there’s no link, long before you have to puzzle over it.

If you can’t connect to your gateway controller, traffic can’t leave your network. It’s a little like trying to call long-distance without dropping a dime: you can dial, but the call won’t go through. MAAS can now check this link and recognize that there’s no connectivity, which alleviates a lot of annoying (and sometimes hard-to-detect) network issues.

Fourth, Internet connectivity testing has been greatly expanded. Previously, MAAS gave a yes/no link check during network testing, like the ANI numbers that would read you back your phone number: nice to know, but it’s not a great revelation. Now you can give a list of URLs or IP addresses to check.

In the ephemeral environment, standard DHCP is still applied, but when network testing runs, we can apply your specific configuration for the duration of the test. While all URLs/IPs are tested on all interfaces, we do test each of your interfaces individually, including breaking apart bonded NICs and testing each side of your redundant interfaces.

You can also run different tests on each pass, e.g., a different set of URLs, although each run would be a different testing cycle (one number, one phone call). For testing individual interfaces, you can use the API.
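As a rough sketch of how this looks from the CLI (the profile name, system ID, script name and parameter syntax here are assumptions that may differ across MAAS releases):

```shell
# Hypothetical sketch: trigger MAAS network testing against custom endpoints.
# "admin" is a CLI profile name and abc123 a machine's system ID; the script
# name and the url parameter are assumptions to verify against your version.
maas admin machine test abc123 \
    testing_scripts=internet-connectivity \
    url="https://archive.ubuntu.com,8.8.8.8"
```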

Of course, the main feature for 2.7 is improved — and customisable — network testing. You can now create your own commissioning scripts and tests related to networking. You can create your own network tests (e.g., a network throughput test) and run them during the network testing portion of the MAAS workflow. There are no particular restrictions on these scripts, so you can test a wide variety of possible conditions and situations. On the phone system, this was only something I could do when the phone tech came around and let me hook up his telephone test set.  As I said, enough decent pizza and anything is possible.

Okay, gotta go. Phone’s ringing.

09 January, 2020 09:40PM

hackergotchi for Freedombone

Freedombone
Copyleft Adoption

I've been publishing software on the internet for a long time. Most of it wasn't very exciting, nor especially useful. Why did I choose the GPL license from about 1999 onwards? The reasons were not very complicated.

Prior to 1999 I just uploaded code to a website without any licenses. It was mainly small demos for technologies which are now thoroughly obsolete. Then at some point in 1999, or maybe 2000, someone emailed saying something like: I see you are publishing source code. Unless you add a license with a warranty disclaimer someone might try to sue you. Also without a license this isn't public, it's all rights reserved. I wasn't very interested in legal stuff and so I did a bit of reading and found that GPL best matched what I was doing. My thinking was that if I'm putting software out there with the intention of it being public then I'd rather that it remained public and not have any proprietary forks. That way if there are improvements I can incorporate them back into the original. The sharealike nature of GPL fitted that goal.

There are other reasons to use copyleft licenses, but this is still mostly my reasoning about it now. Other kinds of licenses seem to have more downsides if the goal is a global public software commons.

09 January, 2020 09:06PM

Freedombone at 36C3

At the recent 36C3 congress there was a talk about the Freedombone project for the first time. It's in German and there aren't any English translations, but since I gave a similar talk in Manchester earlier in 2019 I know roughly what's being described. The slides for the English version of the talk can be downloaded here.

Freedombone has been going for quite a while now, but having someone other than myself doing a talk about it at a CCC event where there are likely to be people who are interested is some kind of significant milestone for the project.

Every year I review what projects I'm working on and try to assess whether they're still relevant and worth continuing with. Technology moves quickly and what may be highly relevant one year may be technically and/or socially obsolete the next. But in the case of self-hosting projects - of which Freedombone is one - this still seems more relevant to the current time and the likely near future than at any point in the past. If anything, the problems which Freedombone tries to overcome are only becoming more acute and more conspicuous to the average internet user.

09 January, 2020 07:54PM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

AT&T Alien Labs analysis of an active cryptomining worm

This blog post provides an overview of the AT&T Alien Labs™ technical analysis of the common malicious implants used by threat actors targeting vulnerable Exim, Confluence, and WebLogic servers. Upon exploitation, malicious implants are deployed on the compromised machine. While most of the attacks described below are historical, we at Alien Labs are continuing to see new attacks, which can be further researched on the Alien Labs Open Threat Exchange™ (OTX). The main goal of these malicious implants thus far has been mining Monero crypto-currency. Below, we have included a diagram of a typical attack vector for this cryptomining worm.

Installation

For our research, we analyzed the following sample (you can also see related pulses on this in OTX): f00258815853f767d70897db7263f740b161c39ee50c46c26ab247afb824459a. First, the adversaries attempt exploitation. When they are successful and code execution is achieved,...

Fernando Dominguez Posted by:
Fernando Dominguez

Read full post


09 January, 2020 02:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Ubuntu Studio 19.04 (Disco Dingo) reaches End of Life on January 23, 2020

Ubuntu Studio 19.04 was released on April 18, 2019. As a non-LTS release, 19.04 has a 9-month support cycle, and, as such, the support period is now nearing its end and Ubuntu Studio 19.04 will reach end of life on Thursday, January 23, 2020. The supported upgrade path from Ubuntu... Continue reading

09 January, 2020 12:00AM

January 08, 2020

Ubuntu Blog: Data Ops at petabyte scale

Deploying Apache Spark in production is complex. Should you deploy Kubernetes? Should that Kubernetes cluster be backed by Ceph? Perhaps stick with a traditional Hadoop/HBase stack? Learn how Juju and model-driven operations have enabled one data engineering team to evaluate several options and come to an ideal solution.

This article is an interview between Tim McNamara, Developer Advocate at Canonical and James Beedy of OmniVector Solutions. James has spent years refining his approach for packaging Apache Spark and managing large-scale deployments. With data volumes into the petabyte range and current operations to maintain, he has used Juju to create purpose-built solutions for his team and their customers.

The interview is divided into multiple sections.

Introducing the Data Ops problem

Tim: Hi James, how did you become interested in Spark? And especially Spark with Juju?

I’ve had the privilege of creating the ops and workflows for PeopleDataLabs over the past few years. PeopleDataLabs, or PDL for short, runs a myriad of batch style data processing workloads. They’re predominantly based on the Apache Spark framework and its Pyspark extension for Python.

When I started working with PDL they were using CDH—the Cloudera Distribution of Hadoop. As I started digging in, I quickly realized that this implementation presented some major roadblocks.

[Juju’s] modelling approach relieves the person implementing and maintaining the software of monotonous work. This allows engineers to spend cycles where it counts. Juju can handle the rest.

James Beedy from Omnivector Solutions explains why you should use Juju

First and foremost, as the company added headcount, developers began experiencing contention when accessing the available compute and storage resources. While CDH had been a great way for PDL to get up and running, it was clear to me that a better solution was needed for the company to continue its rapid growth.

Second to that, it was critical that we maintained consistency across all production and development environments. This would allow developers to execute workloads using a standardized workflow regardless of what substrate was being used. It shouldn’t matter whether the processing was being carried out on-prem, in the cloud or some hybrid model.

I had been a member of the Juju community for a number of years at this point. I knew that this technology would allow me to create a robust and scalable solution to the problems presented above. Furthermore, Juju would allow us to easily evolve the technology stack as the company continued to grow.

Packaging Spark for repeated and ad-hoc workloads

Development jobs are heavily interactive. From an ops perspective, they’re much lumpier than production. It’s impossible to know if a developer will create a job that is ridiculously expensive. That’s why we need Juju, actually.

James Beedy, OmniVector Solutions

How did you begin packaging Spark applications?

I was presented with multiple data processing challenges that require highly parallel and distributed solutions. As I started researching these problems, I quickly determined that Apache Spark was a perfect fit for many of them.

Knowing this, I needed to learn more about how Spark applications were deployed and managed. I spent a fair amount of time digging into the Spark documentation and experimenting with different configurations and deployment methodologies.

As someone fairly new to Spark at the time, the top level differentiator that stuck out to me was that there were two primary types of workloads that Spark could facilitate: batch processing and stream processing. 

Batch processing is essentially static processing. You have input data in one location, load it, process it, write the output to the output location. Work is batched up and one job is more-or-less independent from the next. Aggregation and summaries are very important here.

Stream processing has less to do with aggregating over large static sources, and more to do with in-flight processing of data that might be generated on-the-fly. Think sensor readings or processing events generated by some backend server.

Another way to distinguish the two models is to think about whether the data comes from disk. As far as Spark is aware, data for streaming workloads never touches disk. Data for batch processing, on the other hand, is 100% stored on disk.

Which of those models best describes your use case?

PeopleDataLabs largely does batch, extract transform load (ETL) processing, but with two different modes: production and development.

What is a production job for your team?

Production workloads are ETL-type jobs. Data is stored in some obscure format, loaded into Spark, molded and interrogated, and finally transformed and then dumped back to disk. These production jobs are semi-automated, headless Spark workloads running on Yarn and backed by HDFS.


HDFS is the Hadoop Distributed File System. It spreads files over many machines and makes them available to applications as if they were stored together. It is the workhorse that created the Big Data movement.

Great. How do things look in development?

Development workloads consist of Jupyter Notebooks running on similar clusters with a similar setup. Apache Spark backed by Yarn and HDFS clusters.

Development jobs are heavily interactive. From an ops perspective, they’re much lumpier than production. It’s impossible to know if a developer will create a job that is ridiculously expensive. That’s why we need Juju, actually.

We want to be able to create and maintain multiple isolated environments that share common infrastructure. Anyway, this interactive development workflow generates our ETL workloads that are later deployed into production.

That’s pretty important background. How did you get started rolling it out?

Right, so after doing some research and diving into our codebase, I developed a checklist of features that I needed to support. 

Moving away from HDFS

I was targeting a batch processing architecture. Data movement is critical in that environment. That means a fat pipe connecting big disks.

Here is the list of requirements, some firmer than others.

We knew that we were using HDFS for our distributed backend. HDFS is where the input and output data goes. But what was surprising, after looking deeper, was that the only component of upstream Hadoop we were using was HDFS. Almost everything else was purely Spark/Pyspark.

Our codebase was dependent on the Spark 2.2.0 API. Everything we wanted to deploy needed to support that version.

Every node in the cluster needed to have identical dependencies in identical locations on the filesystem in order for a job to run. In particular, we needed a few Hadoop libs to exist in the filesystem where the Spark executors run.

Our method of distributing code changes to nodes in the cluster was to zip up the Jupyter Notebook’s work directory and scp it to every node in the cluster.

So those are your requirements. Were there any “nice to haves”?

For sure! Replacing HDFS with a more robust storage solution would completely remove our dependency on Hadoop. This would significantly reduce the amount of effort it would take to maintain the stack. The more we could simplify, the better.

At this point, you probably experimented with a practical evaluation?

Right. At this point I had a clear understanding of the requirements. I started out by looking into swapping out CDH with Charmed Bigtop. The prototype highlighted two huge technical issues that needed to be overcome:

  • Charmed Bigtop supported the Spark 2.1 API but PeopleDataLabs jobs required version 2.2 of the Spark API.
  • PeopleDataLabs needed to sustain 1PB of SSD storage across all Hadoop task nodes. Charmed Bigtop only provisions HDFS onto the root directory.

This reaffirmed that a more flexible solution would be needed to meet the job processing requirements.

They sound like fairly critical shortcomings…

Well they meant that I knew right from the start that the Charmed Bigtop stack would not work for us out of the box. I needed to find another solution to provision our Spark workloads.

Following further analysis of moving forward with HDFS, it made a lot of sense to look into decoupling the storage infrastructure from the compute nodes.

What stack did you end up choosing?

We ended up deploying S3A with Ceph in place of Yarn, Hadoop and HDFS. 

Interesting. Why?

There were many upsides to this solution. The main differentiators were access and consumability, data lifecycle management, operational simplicity, API consistency and ease of implementation.

Could you please explain what you mean by “access” and “consumability”?

If we wanted to have multiple Hadoop clusters access the data, we needed to have the data on the respective HDFS filesystem for each cluster. This is a large part of the contention our developers were experiencing. Specifically, access to the data, the ability to process it, and the resources needed to house it.

Moving the data to Ceph and accessing it remotely via S3 solved this problem. By allowing many Spark jobs to read and write data simultaneously, consumability and access were no longer an issue.

It was quickly becoming clear that Ceph would provide a much more robust approach to distributed storage. On top of that, the Juju Ceph charms make deployment straightforward and painless.

What’s “data lifecycle management”? What problems were you facing that are solved by your new storage backend?

Migrating the data back and forth from HDFS to long term storage was a huge pain point. We could process data on HDFS just fine, but getting the data in and out of the distributed file system to a proper long term storage system on a frequent basis was creating contention. It limited how often jobs could be run. A user would need to wait for data to finish transferring before the cluster was available to run the next job.

That sounds like a great win for PeopleDataLab’s developers. You mentioned “API consistency and scale” as well. Would you mind explaining how those two are related?

Simply put, we can make Ceph talk to AWS S3. That makes it really easy to point our Spark jobs at wherever the data lives in Ceph.

The Hadoop AWS module makes this very easy. Plugging that into Ceph and Radosgw (HTTP REST gateway for the RADOS object store) meant that remote access via Spark is suddenly compatible with S3. 

Decoupling storage from compute by moving to Ceph S3 opened up a whole new world for us. We were already using object storage a fair amount for other purposes, just not for our processing backend.

This change allowed us to run jobs in the public cloud using Spark in the same way jobs are executed on premise. 
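As a minimal sketch of what pointing Spark at a Ceph Radosgw endpoint through S3A can look like (the endpoint address, credentials, bucket names and job script here are placeholders, not PDL's actual configuration):

```shell
# Hedged sketch: submit a Spark job that reads and writes S3A paths backed
# by a Ceph Radosgw endpoint. All names and addresses are placeholders.
spark-submit \
  --conf spark.hadoop.fs.s3a.endpoint=http://ceph-rgw.internal:7480 \
  --conf spark.hadoop.fs.s3a.access.key="$S3_ACCESS_KEY" \
  --conf spark.hadoop.fs.s3a.secret.key="$S3_SECRET_KEY" \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  etl_job.py s3a://input-bucket/raw/ s3a://output-bucket/processed/
```

Because the same `fs.s3a.*` settings work against AWS S3, the identical job can be pointed at the public cloud or at the on-prem Ceph cluster.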

That’s really cool. I suppose that feeds into your “operational simplicity” point?

That’s right. After decoupling the storage with Ceph and dropping the need for HDFS, we only had to account for a single process: Spark.

Previously we had to account for a whole ocean of applications that needed to run in harmony. That was the reality for us running Pyspark on top of Cloudera Hadoop. 

Having the data on separate infrastructure allowed us to manage the compute and storage independent of each other. This enhanced user access, made the data lifecycle simpler, and opened up doorways for us to more easily run our workload in the cloud and on-prem alike.

Day-N benefits of Juju

Juju handles the details. All I needed to do was get Spark running with storage support. Juju completely manages the gritty details of talking to the underlying hosting infrastructure.

James Beedy, Omnivector Solutions

Supporting two hosting environments—cloud and on-prem—doesn’t sound like it matches your “ease of implementation” point.

That’s where Juju comes in. Juju handles the details. All I needed to do was get Spark running with storage support. Juju completely manages the gritty details of talking to the underlying hosting infrastructure.

Knowing that I wasn’t going to need to account for HDFS anymore, I took a closer look at Spark’s internal storage requirements: Spark has a need for an executor workspace and a cache. 

From prior experience I knew that Juju’s storage model fit this use case.

Sorry to interrupt, but could you please explain the term “Juju storage model” for anyone that’s not familiar?

Juju allows you to provision persistent storage volumes and attach them to applications. Actually that’s not the whole story. The Juju community would use the phrase, “Juju allows you to model persistent storage.”

Everything managed by Juju is “modelled”. The community calls this “model-driven operations”. You declare a specification of what you want, e.g. “2x200GB volumes backed by SSD”, and Juju implements that specification.

The term modelling is used because storage—and other things managed by Juju such as virtual machines and networking—is, in some sense, abstract. When we’re talking about an entire deployment, we actually don’t care about a block device’s serial number. And from my point of view, I just care that Spark will have access to sufficient storage to get its job done.
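As an illustrative sketch of declaring that kind of specification (the application name, storage label and pool name here are hypothetical, not the real charm's):

```shell
# Hypothetical sketch: ask Juju for two 200GB volumes from an SSD-backed
# storage pool. "spark", "workspace" and "ssd-pool" are illustrative names;
# the pool is assumed to have been defined on the controller beforehand.
juju deploy spark --storage workspace=ssd-pool,200G,2
```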

Okay cool, so it’s easy to give Spark access to two storage volumes, one for its executor workspace and the other for its working cache.

That’s right. Juju completely solves the storage challenge. But now we need to package the various Hadoop libs that Spark needs for our specific use case. Spark versions are tied to Hadoop versions, so this is more complicated than it should be.

Provisioning the Spark and Hadoop libs seemed to be a perfect fit for Juju resources. I downloaded the Spark and Hadoop upstream tarballs and attached them as charm resources via layer-spark-base and layer-hadoop-base. And it worked perfectly!

“Juju resources”?

Ah right, sorry – more jargon. A resource is some data, such as a tarball of source code, that’s useful for the charm. So, our tarballs are considered resources from Juju’s perspective.

A resource can be any binary blob you want to couple with your charm. Resources can be versioned and stored with the charm in the charmstore, or maintained separately and supplied to the charm by the user on a per deployment basis. 
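For example (the application name, resource name and file path here are illustrative), supplying a tarball to a deployed charm can look roughly like:

```shell
# Hypothetical sketch: attach a locally maintained tarball as a charm
# resource. "spark" and "spark-tarball" are illustrative names.
juju attach-resource spark spark-tarball=./spark-2.2.0-bin-hadoop2.7.tgz
```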

And what do you mean by “layer-spark-base” and “layer-hadoop-base”?

Layers are reusable code snippets that can be used within multiple charms. Charm authors use layers to create charms quickly.

A charm is a software package that’s executed by Juju to get things done, like install, scale up, and some other functionality that we’ve touched on such as storage and network modelling.

Our private code dependencies and workflow were accommodated via another layer: layer-conda. This allowed our developers to interface with Juju to get our code dependencies onto the compute nodes.

I wrapped our data processing code up as a Python package. This allowed our developers to use the layer-conda config to pull our code dependencies into the conda environment at will. It also provides a more formal interface to managing dependencies.

Combining layer-conda, layer-spark, layer-hadoop, and layer-jupyter-notebook I was able to create a much more manageable code execution environment that featured support for the things our workload required.

If I’m hearing this correctly, you have the bulk of your implementation within five or so different code libraries, called layers, that allowed you not only to deploy Spark on Ceph/S3A, but also to enable developers to iterate on and deploy new workflows directly to the production Spark cluster.

More or less. It’s pretty cool. But the solution itself wasn’t entirely optimal.

What’s wrong with what you deployed?

I’ve covered a lot of ground. Perhaps before answering, I’ll review where we got to, implementation-wise.

I swapped out the standard HDFS backend for a Juju-deployed Ceph with an object storage gateway. In this new architecture we were running standalone Spark clusters that read and write data to the S3 backend, using S3A to communicate with an S3 endpoint from Spark.

Decoupling the storage and ridding ourselves of HDFS was huge in many ways. The Spark storage bits are handled by Juju storage, which accommodates the storage needs of Spark really well. The code dependencies bit, via layer-conda, was a huge improvement in how we were managing dependencies.

My Spark project had come a long way from where it started, but was nowhere near finished.

The build and runtime dependency alignment across the entire Bigtop stack is of critical importance if you intend to run any Apache software component in conjunction with any other component of the Apache stack. This illuminated the genius and importance of the build system implemented by the original Charmed Bigtop stack, and it shed light on why my slimmed-down solution wasn’t really full circle.

I realized that if I wanted to make a Spark charm that allowed for Spark and Hadoop builds to be specified as resources to the charm, that I would need a way of building Spark against Hadoop versions reliably.

Recommendations for other people deploying Apache Spark

So, is your solution something that you recommend for others?

Ha, well like a lot of things in technology – it depends.

The Spark standalone charm solution I created works great if you want to run Spark Standalone. But it has its snares when it comes to maintainability and compatibility with components of the greater Apache ecosystem.

Without context, it’s impossible to know which Spark backend and deployment type is right for you. And even once you have an architecture established, you also need to decide where to run it.

I’ve evaluated three alternatives that suit different use cases: Elastic Map Reduce (EMR), Docker and Kubernetes with Ceph.

EMR is great if you want a no-fuss, turnkey solution. Docker provides more flexibility, but still suffers from misaligned dependencies.  In some ways, wanting to make use of Docker while keeping everything consistent naturally leads to Kubernetes. 

Elastic Map Reduce (EMR)?

AWS provides Elastic Map Reduce: Hadoop as a service. You give it one or more steps, it hands them to the spark-submit command to run, and voilà. Behind the scenes, custom AMIs launch and install emr-hadoop. Once the instances are up, EMR runs your job.

It’s easy to ask the EMR cluster to terminate after it completes the jobs you’ve defined. This gives you the capability to spin up the resources you need, when you need them, and have them go away when the provided steps have been completed. 

In this way EMR is very ephemeral and only lives for as long as it takes to configure itself and run the steps.

Using EMR in this way gave us a no-fuss, sturdy interface for running Spark jobs in the cloud. When combined with S3 as a backend, EMR provides a hard-to-beat scale factor for the number of jobs you can run and the amount of storage you can use.
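A hedged sketch of an ephemeral cluster of this kind (the cluster name, instance type and count, and the S3 script path are placeholders):

```shell
# Hedged sketch: launch an EMR cluster that runs one Spark step and then
# terminates itself. Names, sizes and the job script path are placeholders.
aws emr create-cluster \
  --name "etl-batch" \
  --release-label emr-5.27.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --auto-terminate \
  --steps Type=Spark,Name=ETL,ActionOnFailure=TERMINATE_CLUSTER,Args=[s3://my-bucket/jobs/etl_job.py]
```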

What’s it like to run Spark in Docker containers? 

I faced challenges such as mismatched build and runtime versions and mismatched dependencies. It was a pain.

This issue became more prevalent as I tried to package Spark, Hadoop and Jupyter Notebook in hopes of getting remote EMR execution to work with yarn-client.

There were subtle mismatched dependencies all over the place. An emr-5.27.0 cluster is going to run Spark 2.4.4 and Hadoop 2.8.5. This means if you want to execute Jupyter Notebook code against the EMR cluster, the Spark and Hadoop dependencies that exist where the Jupyter Notebook is running need to match those provisioned in the EMR cluster. Keeping everything in sync is not easy.

Dockerizing Spark below version 3.0.0 is tedious, as those versions were built from an unsupported Alpine image. Newer versions of Spark are more viable: Spark now uses a supported OpenJDK base built on debian-slim. This makes building on top of Spark images far more streamlined.

On the plus side, once Juju is installed, you can deploy Jupyter + Apache Spark + Hadoop using Docker in a single line of code:

$ juju deploy cs:~omnivector/jupyter-docker

cs:~omnivector/jupyter-docker is a “Juju Charm”. Once you’ve deployed it, you can change the Docker image that’s running by changing its configuration:

$ juju config jupyter-docker \
    jupyter-image="omnivector/jupyterlab-spark-hadoop-base:0.0.1"

Alternatively, you can supply your image at deployment time:

$ juju deploy cs:~omnivector/jupyter-docker \
    --config jupyter-image="omnivector/jupyterlab-spark-hadoop-base:0.0.1"

Example Docker images compatible with cs:~omnivector/jupyter-docker are available from our open source code repositories.

Yay for things actually working. And Kubernetes?

Given the progression of packaging and running workloads via Docker, you can imagine how we got here. Running on Kubernetes has improved multiple aspects of running our workload.

It’s possible to build the Jupyter Notebook image and the Spark driver image from the same image your executors run. Building our Spark application images this way provided a clean, simple way for the build system to organically satisfy the requirements of the workload. Remember: the dependencies and file system need to be identical where the driver and executors run.
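As a sketch, the layering might look like this. The executor image name is hypothetical (and assumes pip is present in the base image); the point is that the notebook/driver image derives FROM the executor image, so the two file systems match by construction.

```shell
# Sketch: write a Dockerfile that builds the notebook/driver image on top
# of a (hypothetical) executor image, so dependencies and file system are
# identical where the driver and executors run.
cat > Dockerfile.jupyter <<'EOF'
FROM omnivector/spark-executor:3.0.1
RUN pip install --no-cache-dir jupyterlab
EXPOSE 8888
CMD ["jupyter", "lab", "--ip=0.0.0.0", "--no-browser"]
EOF
# then: docker build -f Dockerfile.jupyter -t jupyterlab-spark:3.0.1 .
echo "wrote Dockerfile.jupyter"
```

Any dependency added to the executor image is inherited by the notebook image automatically, which is exactly the property the paragraph above describes.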

This is the biggest win of all in packaging Jupyter/Spark applications: the notebook image, and by extension the Spark driver image, is built from the same image as the Spark executors.

To make all of this happen, the layer-jupyter-k8s charm applies a role that grants the Jupyter/Spark container the permissions needed to provision other containers (Spark workloads) on the Kubernetes cluster. This allows a user to log in to the Jupyter web interface and provision Spark workloads on demand by running cells in the notebook.
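The role in question is ordinary Kubernetes RBAC. A minimal sketch of what such a grant might look like; the names, namespace and resource list are assumptions, and the actual charm may grant different permissions:

```yaml
# Illustrative RBAC: let the notebook's service account create and manage
# executor pods in its namespace. Names and namespace are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-driver
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-driver-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: jupyter
  namespace: default
roleRef:
  kind: Role
  name: spark-driver
  apiGroup: rbac.authorization.k8s.io
```

With a binding like this in place, the Spark driver running inside the notebook pod can ask the Kubernetes API to spawn executor pods directly.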

I have a few high level takeaways. 

  • Multi-tenancy is great. Many developers can execute against the k8s cluster simultaneously.
  • Dockerized dependencies. Package your data dependencies as Docker containers. This works really well when you need to lock Hadoop, Spark, conda, private dependencies and other software to specific versions for different workloads.
  • Faster development workflow. Run workloads with different dependencies without having to re-provision the cluster.
  • Operational simplification. Spark driver and executor pods can be built from the same image.

Some drawbacks of Kubernetes and Ceph vs HDFS:

  • Untracked work. Spark workloads provisioned on k8s via Spark drivers are not tracked by Juju.
  • Resource intensity. Running Ceph and K8s correctly takes far more mature infrastructure than running Hadoop/HDFS. You can run a 100-node Hadoop/HDFS installation on a flat L2 10G network by clicking your way through the CDH GUI. For Ceph and K8s to work correctly, you need an L3 or L2/L3 hybrid network topology that supports multi-pathing and scales links pragmatically; a flat L2 10G network won’t carry Spark on K8s + Ceph past a few nodes.

Given what we’ve talked about, you can see how packaging Spark and Hadoop applications can get messy. Using Juju, though, developers and ops professionals alike can model their software application deployments. The modelling approach relieves the person implementing and maintaining the software of monotonous work, allowing engineers to spend cycles where it counts. Juju can handle the rest.

Juju is simple, secure devops tooling built to manage today’s complex applications wherever you run your software. Juju can be used to deploy and manage any networked service, whether that service is delivered from bare metal hardware, containers or virtual machines. Visit the Juju website to learn more.

James Beedy is a DevOps specialist from Omnivector Solutions. Visit the Omnivector Solutions website and chat with them on Twitter at @OV_Solutions.

08 January, 2020 10:48PM

hackergotchi for Freedombone


XMPP simplification

The XMPP app on Freedombone has been improved a little by moving to a single configuration file and using the Debian package. Previously it was using a very hacky nightly version of Prosody, and the reasons for that are historical and no longer apply.

For most of the time that the Freedombone project has existed, XMPP was being renovated, gaining all of the features you would expect from a modern chat app: things like end-to-end encryption, working avatars and client state indication. So if you wanted to run Conversations on Android and have all of the server tests pass, you needed to compile a recent version of Prosody from source. Debian moves at a glacial pace, but the Debian packaged version is now good enough.

The previous XMPP notifications system has also been replaced with sendxmpp, which reduces the amount of maintenance needed.
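A sketch of what sendxmpp-based notifications look like; the account and recipient here are placeholders, not Freedombone's actual configuration. sendxmpp reads credentials from ~/.sendxmpprc (one line: `user@server password`), written below to a local example file instead:

```shell
# Example credentials file for sendxmpp ("user@server password").
# Written locally rather than to ~/.sendxmpprc; values are placeholders.
printf '%s\n' 'notify@example.org s3cret' > sendxmpprc.example
chmod 600 sendxmpprc.example
# With this content in ~/.sendxmpprc, a one-line notification is:
#   echo 'Backup completed' | sendxmpp -t admin@example.org
# (-t enables TLS; the recipient address is positional)
echo "wrote sendxmpprc.example"
```

This is why a tool like sendxmpp cuts maintenance: notifications become a one-line pipe from any shell script on the server.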

XMPP may be old but it's still one of the most practical IM systems. An XMPP server can run even on the most minimal single board computer - unlike certain other chat systems that could be mentioned - and also supports the use of onion addresses. Many people are unaware that WhatsApp is really just an XMPP server with a proprietary client app and federation turned off.

08 January, 2020 10:02AM