April 14, 2021


Ubuntu developers

Ubuntu Blog: DISA has released the Ubuntu 20.04 LTS STIG benchmark

The Security Technical Implementation Guides (STIGs) are developed by the Defense Information Systems Agency (DISA) for the U.S. Department of Defense. They are configuration guidelines for hardening systems to improve security: technical guidance that, when implemented, locks down software and systems to mitigate malicious attacks.

DISA, in conjunction with Canonical, has developed a STIG for Ubuntu 20.04 LTS, which is available for download at the STIGs document library.

14 April, 2021 01:28PM




Ubuntu developers

Ubuntu Blog: From lightweight to featherweight: MicroK8s memory optimisation

If you’re a developer, a DevOps engineer or just a person fascinated by the unprecedented growth of Kubernetes, you’ve probably scratched your head about how to get started. MicroK8s is the simplest way to do so. Canonical’s lightweight Kubernetes distribution started back in 2018 as a quick and simple way for people to consume K8s services and essential tools. In a little over two years, it has matured into a robust tool favoured by developers for efficient workflows, as well as delivering production-grade features for companies building Kubernetes edge and IoT production environments. Optimising Kubernetes for these use cases requires, among other things, some problem-solving around memory consumption for affordable devices of small form factors.

Optimised MicroK8s footprint

As of the MicroK8s 1.21 release, the memory footprint has been reduced by an astounding 32.5%, as benchmarked on both single-node and multi-node deployments. This improvement was one of the most popular requests from the community, particularly from people looking to build clusters using hardware such as the Raspberry Pi or the NVIDIA Jetson. Canonical is committed to pushing that optimisation further while keeping MicroK8s fully compatible with upstream Kubernetes releases. We welcome feedback from the community as Kubernetes for the edge evolves into more concrete use cases and drives even more business requirements.

Comparing the memory footprint of the latest two MicroK8s versions

How MicroK8s shed 260MB of memory

If you’re asking yourself how MicroK8s dropped from lightweight to featherweight, let us explain. Previous versions simply packaged all the upstream Kubernetes binaries as they were and compiled them into a snap. That package was 218MB and deployed a full Kubernetes consuming 800MB of memory. With MicroK8s 1.21, the upstream binaries are compiled into a single binary before packaging. That makes for a lighter package (192MB) and, most importantly, a Kubernetes consuming 540MB. In turn, this allows users to run MicroK8s on devices with less than 1GB of memory and still leave room for multiple container deployments, as needed in use cases such as three-tier website hosting or AI/ML model serving.
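As a quick sanity check, the headline reduction figure follows directly from the deployment sizes quoted above; this throwaway calculation is not MicroK8s-specific:

```shell
# Memory reduction implied by the figures in the text:
# a full Kubernetes deployment went from 800MB down to 540MB.
awk 'BEGIN { printf "%.1f%% reduction\n", (800 - 540) / 800 * 100 }'
```

which matches the 32.5% quoted for the 1.21 release.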

Working with MicroK8s on NVIDIA

As MicroK8s supports both x86 and ARM architectures, its reduced footprint makes it ideal for devices as small as the 2Gb ARM-based Jetson Nano and opens the door to even more use cases. For x86 devices, we are particularly excited to work with NVIDIA to offer seamless integration of MicroK8s with the latest GPU Operator, as announced last week. MicroK8s can consume a GPU or even a Multi-instance GPU (MIG) using a single command and is fully compatible with more specialised NVIDIA hardware, such as the DGX and EGX.

Possible future memory improvements 

Hopefully, this is the first of many milestones for memory optimisation in MicroK8s. The MicroK8s team commits to continue benchmarking Kubernetes on different clouds – focusing specifically on edge/micro-clouds – and putting it to the test for performance and scalability. A few ideas for further enhancements we are looking into include combining the containerd runtime binary with the K8s services binary and compiling the K8s shared libraries into the same package. This way, the MicroK8s package memory consumption and build times will decrease even further, while MicroK8s will remain fully upstream compatible.

If you want to learn more you can visit the MicroK8s website, or reach out to the team on Slack to discuss your specific use cases.

Latest MicroK8s resources

14 April, 2021 12:03PM

April 13, 2021



Rocket.Chat Desktop

There is a new application available for Sparkers: Rocket.Chat Desktop

What is Rocket.Chat?

Rocket.Chat is a reliable communication platform for highly private team chat and collaboration. Highly scalable, it increases business efficiency by bringing messages, video calls, file sharing and all team communication into one place.
The desktop application for Rocket.Chat is available for macOS, Windows and Linux, built using Electron.

– Team Collaboration
– Remote Work
– Multi-platform
– Highly Configurable
– File sharing
– TeX math rendering
– Video conferencing
– Screen sharing
– Multiple Integrations

Installation (Sparky 5 & 6 amd64):

sudo apt update
sudo apt install rocketchat

or via APTus -> IM -> Rocket.Chat icon.


License: MIT
Web: github.com/RocketChat/Rocket.Chat.Electron


13 April, 2021 07:40PM by pavroo



Türkiye Açık Kaynak Platformu Online Pardus Competition and Prizes

The prizes for the "Online Pardus Competition" to be held by Türkiye Açık Kaynak Platformu (the Turkish Open Source Platform) have been announced. If you have an idea in the Pardus space, don't forget to apply!

Application deadline: 21 April 2021

To apply:

Scope of the Competition

The "Online PARDUS Competition" is being organised, with the support of the project's stakeholders, as part of the "Popularising PARDUS" project run by the Turkish Open Source Platform. The competition aims to develop, popularise and support PARDUS and open source software technologies. Intensive training and mentoring are planned throughout: five days of training will be given before the competition, followed by the five-day competition itself. Participants are expected to develop applications on the topics listed below.

  • Improving the user experience
  • Developing interactive whiteboard applications
  • Education: developing basic teaching applications
  • Developing antivirus management interfaces
  • Building GTK-based application stores
  • Building Electron-based system infrastructure
  • Developing interface software for QEMU
  • Developing Qt, Electron and GTK applications

Conditions of Participation

  • Contestants accept that all the information they enter in the application form is correct.
  • During the application process, participants must prepare a presentation describing themselves, their teammates and their project idea. In addition, contestants must upload this presentation to bulut.pardus.org.tr and enter the link in the field at the bottom of the application form.
  • Participation may be individual or as a team. Teams may consist of at most four participants.
  • The presentation prepared for the competition must define the role of every team member within the project.
  • Code developed during the competition must be shared on kod.pardus.org.tr. In this context, all contestants must produce the necessary documentation for their projects. Contestants must share their Pardus-related work with the Platform.
  • Competition entries must be released under open source and/or free software licences.
  • Contestants may not submit projects they developed before the competition date or are currently working on.
  • Contestants are responsible for checking, and complying with, the licence terms of all components, libraries, tools and other software used in their projects.
  • Projects previously published or sold elsewhere may not be used during the competition.
  • The project deliverable must run properly on Pardus without any emulation tool.

13 April, 2021 11:45AM


Maemo developers

GStreamer WebKit debugging tricks using GDB (1/2)

I’ve been developing and debugging desktop and mobile applications on embedded devices for the last decade or so. For most of that period I’ve been focused on the multimedia side of the WebKit ports using GStreamer, an area that is a mix of C (glib, GObject and GStreamer) and C++ (WebKit).

Over these years I’ve had to work on ARM embedded devices (mobile phones, set-top boxes, Raspberry Pi using buildroot) where most of the environment aids and tools we take for granted on a regular x86 Linux desktop just aren’t available. In these situations you have to be imaginative and find your own ways to get the work done and debug the issues you find along the way.

I’ve been writing down the most interesting tricks I’ve found on this journey and I’m sharing them with you in a series of 7 blog posts, one per week. Most of them aren’t mine, and the ones I learnt at the beginning of my career may even seem a bit naive, but I find them worth sharing anyway. I hope you find them as useful as I do.

Breakpoints with command

You can break at a location, run some commands and continue execution. This is useful for getting logs:

break getenv
commands
 # pagination off disables the scroll-continue
 # messages; silent suppresses breakpoint output
 silent
 set pagination off
 p (char*)$r0
 continue
end

break grl-xml-factory.c:2720 if (data != 0)
commands
 call grl_source_get_id(data->source)
 # $ is the last value in the history, the result of
 # the previous call
 call grl_media_set_source (send_item->media, $)
 call grl_media_serialize_extended (send_item->media, GRL_MEDIA_SERIALIZE_FULL)
 continue
end

This idea can be combined with watchpoints and applied to trace reference counting in GObjects and know from which places the refcount is increased and decreased.
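As a sketch of that combination (the address here is illustrative), a location watchpoint on the object's public ref_count field plus the same breakpoint-commands mechanism prints a short backtrace on every refcount change:

watch -l ((GObject *)0x70d1b038)->ref_count
commands
 bt 5
 continue
end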

Force execution of an if branch

Just wait until the if chooses a branch and then jump to the other one:

6 if (i > 3) {
(gdb) next
7 printf("%d > 3\n", i);
(gdb) break 9
(gdb) jump 9
9 printf("%d <= 3\n", i);
(gdb) next
5 <= 3

Debug glib warnings

If you get a warning message like this:

W/GLib-GObject(18414): g_object_unref: assertion `G_IS_OBJECT (object)' failed

two functions are involved: g_return_if_fail_warning(), which in turn calls g_log(). It’s good to set a breakpoint on either of the two:

break g_log

Another method is to export G_DEBUG=fatal-criticals, which converts all criticals into crashes, stopping the program under the debugger.

Debug GObjects

If you want to inspect the contents of a GObject that you have a reference to…

(gdb) print web_settings
$1 = (WebKitWebSettings *) 0x7fffffffd020

you can dereference it…

(gdb) print *web_settings
$2 = {parent_instance = {g_type_instance = {g_class = 0x18}, ref_count = 0, qdata = 0x0}, priv = 0x0}

even if it’s an untyped gpointer…

(gdb) print user_data
(void *) 0x7fffffffd020
(gdb) print *((WebKitWebSettings *)(user_data))
{parent_instance = {g_type_instance = {g_class = 0x18}, ref_count = 0, qdata = 0x0}, priv = 0x0}

To find the type, you can use GType:

(gdb) call (char*)g_type_name( ((GTypeInstance*)0x70d1b038)->g_class->g_type )
$86 = 0x2d7e14 "GstOMXH264Dec-omxh264dec"
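If you use this trick often, it can be wrapped in a user-defined command; the name gtype below is arbitrary:

define gtype
 call (char*)g_type_name(((GTypeInstance*)$arg0)->g_class->g_type)
end
# usage: gtype 0x70d1b038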

Instantiate C++ object from gdb

(gdb) call malloc(sizeof(std::string))
$1 = (void *) 0x91a6a0
(gdb) call ((std::string*)0x91a6a0)->basic_string()
(gdb) call ((std::string*)0x91a6a0)->assign("Hello, World")
$2 = (std::basic_string<char, std::char_traits<char>, std::allocator<char> > &) @0x91a6a0: {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x91a6f8 "Hello, World"}}
(gdb) call SomeFunctionThatTakesAConstStringRef(*(const std::string*)0x91a6a0)
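To avoid leaking the object in a long session, the reverse steps can also be attempted from the debugger, although explicit destructor calls are not supported by every gdb version:

(gdb) call ((std::string*)0x91a6a0)->~basic_string()
(gdb) call free((void*)0x91a6a0)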

See: 1 and 2


13 April, 2021 10:49AM by Enrique Ocaña González (eocanha@igalia.com)


Ubuntu developers

Ubuntu Blog: Telecom AI: a guide for data teams


Data is the new oil, and Artificial Intelligence is the way to monetize it. According to an IDC report, Artificial Intelligence (AI), alongside 5G, IoT, and cloud computing, is one of the technologies reshaping the telecom industry. From data-driven decisions to fully automated and self-healing networks, AI developments are accelerating innovation and driving costs of operation down.

However, while it is easier than ever to implement AI solutions in the telecom space, navigating a landscape of multiple databases, workflow engines and ML frameworks remains difficult.

Our latest telco whitepaper aims to provide a guide to existing use cases for AI/ML in mobile networks. Download it to discover:

  • Key questions to consider around core network, radio network, and enterprise IT parts before implementation, by use case
  • Recommendations of open source software components for building efficient solutions
  • Operation tips for AI in production

Download whitepaper

13 April, 2021 10:42AM

Ubuntu Blog: Ubuntu in the wild – 13th of April 2021

The Ubuntu in the wild blog post ropes in the latest highlights about Ubuntu and Canonical around the world on a bi-weekly basis. It is a summary of all the things that made us feel proud to be part of this journey. What do you think of it?

Ubuntu in the wild

UX Design vs Security

Tom Canning, VP of IoT at Canonical, explores the delicate balance between great user experience and secure applications. The two are usually pitted against each other, but is it really justified? And what if open source could help reconcile them?  

Read more on that here!

The reality of enterprise cloud adoption

How quickly are companies really adopting the cloud? While you may think this question has an easy answer, the reality is a bit more complicated. Even though cloud computing is increasingly common in enterprise IT, a lot of companies are still quite early in the cloud adoption cycle. Nicholas Dimotakis, VP of Worldwide Field Engineering at Canonical, explores the reasons behind this surprising observation.

Read more on that here!

Building the future of Ubuntu on Windows

Canonical recently released the Ubuntu on Windows Community Preview, a special build of its OS for WSL 2. The goal is to provide a space for the community to collectively shape the future of Ubuntu on WSL, experimenting with new features and functionality in a sandbox environment. 

Read more on that here!

ROS Extended Security Maintenance

Great news for the robotics community: Open Robotics and Canonical have announced a partnership around extended security maintenance (ESM) and enterprise support for the Robot Operating System (ROS). This will make it easy to access a hardened, long-term supported ROS system for robots, with a single contact point for enterprise support.

Read more on that here!

Kubernetes 1.21: Full enterprise support

Earlier this week, Canonical announced full enterprise support for Kubernetes 1.21, covering Charmed Kubernetes, MicroK8s, and kubeadm. The goal is to remove complexity around Kubernetes operations from cloud to edge.

Read more on that here!

13 April, 2021 09:45AM

Ubuntu Blog: Security at the Edge: hardware accelerated AI-based cybersecurity with Canonical Ubuntu and the BlueField-2 DPU


During GTC last fall, NVIDIA announced an increased focus on the enterprise datacenter, including their vision of the datacenter-on-a-chip. The three pillars of this new software-defined datacenter include the data processing unit (DPU) along with the CPU and GPU. The NVIDIA BlueField DPU advances SmartNIC technology, which NVIDIA acquired through their Mellanox acquisition in 2020.

Here at Canonical, we are proud of our long partnership with NVIDIA to provide the best experience to developers and customers on Ubuntu. This work has advanced the state of the art with secure NVIDIA GPU drivers and provisions for GPU pass-through. Our engineering teams collaborate deeply to provide the fastest path to the newest features and the latest patches for critical vulnerabilities. For networking, this has meant partnering with Mellanox (now NVIDIA) engineering to provide not just Ubuntu support but also support for hardware offload going back to the oldest ConnectX devices. In fact, Ubuntu was the first Linux distro enabled on the BlueField cards back in 2019. Ubuntu, which has long been the operating system of choice for cutting-edge machine learning developers, data scientists, containers and Kubernetes, is seeing increasing enterprise adoption across verticals.

In the past decade, NVIDIA revolutionized AI and machine learning with their industry-leading SDKs, including CUDA, RAPIDS and more, with Ubuntu as the operating system of choice. The vision we now bring is this: if you use the GPU to offload your machine learning algorithms, can the software-defined datacenter be disaggregated further, using the dedicated hardware accelerators on the DPU to offload your security and storage workloads? Canonical’s focus, as always, is on ensuring these hardware features have supporting software stacks that are thoroughly tested and available for the world to consume in a clean and supportable fashion.

If you train your latest deep learning algorithms on cutting-edge NVIDIA GPUs on Ubuntu, why would you settle for anything other than the same thought leadership for your networking, security, and storage needs?

What’s new with the BlueField-2 DPU? And what’s next?

In addition to ConnectX-6 connectivity, the BlueField-2 DPU packs 8 ARM A72 CPU cores, 200Gb/s of network bandwidth and 130Gb/s of memory bandwidth (DDR4). As the figure below shows, this provides a trusted environment for offloading datacenter ‘infrastructure’ applications (storage, security and networking), freeing up host servers to run more of the ‘applications’ that your business actually needs. In addition to offload, the DPU isolates the security control plane from the host OS and applications, while using dedicated hardware accelerators to improve system performance. NVIDIA estimates that up to 30% of host CPU cycles are currently spent on infrastructure applications, so the potential ROI improvements are significant.

[Figure: the BlueField-2 DPU offloads datacenter infrastructure applications (storage, security, networking) from the host servers]

Closely following the BlueField-2 is the BlueField-3 datacenter-on-a-chip DPU, announced today and available in early 2022. This next-generation card offers 16 even more powerful ARM A78 cores (up from 8 A72s), 5X more compute and 2X more network bandwidth. If that’s not enough, it also supports the latest-generation PCIe Gen5 interface and two DDR5 memory channels, providing 4X improvements in I/O and memory bandwidth and 2X faster acceleration for crypto and security than the BlueField-2. The NVIDIA DOCA SDK offers backward compatibility, meaning applications developed for and running on BlueField-2 silicon will work seamlessly on the BlueField-3.

Canonical Ubuntu and Kubernetes on the BlueField-2


Our partnership with NVIDIA means Ubuntu 20.04 is ready for download today with support for all the latest features. PXE boot support, which provides the ability to provision the DPU remotely, is imminent. Canonical provides the same support guarantees for Ubuntu on the DPU as on the host, whether you run a containerized Ubuntu on the DPU’s ARM cores or run natively and customize your system with snaps for bulletproof security.

Additionally, Canonical invests in Kubernetes, strongly driven by its footprint in the cloud-native space, with 63% of all Kubernetes solutions running on Ubuntu. Canonical Kubernetes is pure upstream Kubernetes: Charmed Kubernetes offers cloud-to-edge customization, while MicroK8s is compatible with the powerful A72 cores on the DPU. (Charmed Kubernetes is a composable, multi-cloud Kubernetes with automated operations. MicroK8s is a lightweight, low-touch, opinionated Kubernetes distribution for edge and IoT, loved by developers and enterprises for its simplicity.) Kubernetes on systems with the DPU provides streamlined operations for the cloud-native world.

As NVIDIA continues to upstream their patches, feature support continues to improve over time. If you are interested in learning more, here is a link to a talk we presented at GTC 2020. The talk touches upon DPU use-cases, and demos launching Ubuntu and MicroK8s on the DPU before showcasing a real-life deployment with Kubeflow and Charms. 

OVS/OVN Offload

Traditional cybersecurity has focused on external security threats. However, the rise of multi-tenant datacenters in this era of fragmentation and microservices has led to a new class of potential vulnerabilities and threats from other applications or tenants running on the same infrastructure as you. The DPU is uniquely positioned to monitor and control all traffic within the datacenter and even traffic between VMs and containers on the same server.

DPUs introduce an architectural shift in infrastructure deployments: networking services that used to provide virtual switching and routing locally on compute nodes move to the DPUs, and are thus isolated from the hypervisor hosts. As a result, control plane changes are required for provisioning network interfaces to instances or pods. Canonical is developing the changes necessary to enable seamless support for provisioning DPU-accelerated network interfaces in OpenStack, Kubernetes and other projects that utilize Open Virtual Network (OVN) and Open vSwitch (OVS). The Canonical team is working to enable OVS and OVN support across NVIDIA’s BlueField portfolio. These technologies form the backbone for use cases on the DPU.

Security at the Edge and NVIDIA Morpheus – AI for Cybersecurity

Recently we were privy to a very interesting deployment using Kubernetes to orchestrate a system with thousands of BlueField-2 cards in a hybrid high-performance multi-tenant datacenter. The use case required strict performance consistency and isolation. This deployment offloads OVN to the DPUs to first create a Virtual Private Cloud (VPC), providing complete isolation between multiple tenants, and then uses the DPUs to enforce security policies at each node to create intelligent cloud security infrastructure.

As compute requirements evolve and compute moves into edge datacenters, closer to where the data is generated, we expect more and more such use cases that leverage the capabilities of the DPU.

Case in point – today at GTC, NVIDIA announced their Morpheus AI application framework. Morpheus combines NVIDIA’s traditional strengths in AI and ML on GPUs with the DPU’s ability to monitor and respond to threats and malicious actors in real-time. The framework offers pre-trained AI models that enhance security in the zero-trust environment of the modern containerized datacenters.

To summarize, the DPU decouples applications (DevOps) from infrastructure (IT) and holds promise as the next step in datacenter evolution. We at Canonical are really excited by the possibilities it brings to the next-generation software-defined datacenter. We would love to explore how we can help you get better ROI on your investments by offloading your infrastructure workloads to optimized hardware such as the BlueField-2.

Browse sessions

Canonical at GTC


External Links

13 April, 2021 08:37AM

April 12, 2021

The Fridge: Ubuntu Weekly Newsletter Issue 678

Welcome to the Ubuntu Weekly Newsletter, Issue 678 for the week of April 4 – 10, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

12 April, 2021 10:37PM

Ubuntu Blog: Design and Web team summary – 12 April 2021

The web team at Canonical run two-week iterations building and maintaining all of Canonical’s websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

This iteration has seen many of the team out of the office as schools are out in the UK. This has not limited the exciting new features and developments from the team.

Meet the team

Hi, I’m Amy. I joined Canonical as a Senior UX Designer a week before the pandemic and haven’t been back to the office since. Despite working remotely with the web team, it still feels like we’re very close, and I miss everyone so much.

I started out as a software engineer and discovered my passion for HCI and UX, along with a huge love for dumplings. My main purpose in life is to simplify complicated concepts through interaction design, haptic interfaces, and natural interactions. I’m currently working on MAAS (Metal as a Service), an awesome provisioning tool for private cloud infrastructure. Some of you might have seen my work on the CLI prototype, workload annotations, event log handling, LXD projects in MAAS, and many more to come.

I ❤️ cooking because it is very much like product design: when you learn enough about your audience’s taste palate, you know how to create a dish that they will enjoy. Food, to me, is a form of art that brings you good friends and family. If you really want to make friends, go to someone’s house and eat with them; people who give you their food give you their heart.

Web squad

The Web Squad develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

Login in the ubuntu.com navigation

Over time ubuntu.com has introduced a number of services which require authentication with our Ubuntu SSO. Each service which requires login handles the login within the page in different ways.

We now have a login option available in the header across the entire site, which provides a standardised way of logging in and maintaining that state.


Docs linkchecker

The docs team asked if we had a tool for checking links across our websites that could discover broken links in Canonical’s documentation and report the issues to them. We use a tool called linkchecker to crawl our sites and report issues to us in our internal Mattermost channel.

Therefore, we decided to set up the same system for the docs team. We introduced a new set of GitHub Actions to crawl all the sets of docs and, on a weekly cadence, report any issues back to the docs channel.

The docs we are now scanning regularly are:


The MAAS squad develops the UI for the MAAS project.

Summing up the self-driven snap upgrade feature

After six design iterations, we’ve arrived at the final design for the first phase of development. The goal of this work is to give upgrade control to MAAS users on the snap package format, where all users will be able to select their ideal upgrade time and walk through the upgrade process with error tolerance and security.

In the first phase of our delivery, we want to focus on making sure the data we show are clear and relevant enough to inform our users about the upgrade and the stages that each controller is going through.


There are not a lot of changes to the interactions, but many changes regarding the data. We’ve simplified the data changes into 5 stages, down from 7+ stages.

Investigate Information Architecture revamp

At the beginning of this cycle, we sent out a cloud-like experience survey to 100 participants to understand our users’ MAAS environments and the scale that we should aim for. The problem with the current information architecture is that it is hard to scale and to identify the right flow. With this work, we hope to identify overlaps between features, simplify them and categorise them properly.


With the information we received from the 11 survey responses, we were able to gather some leads on which user groups we should focus on, and how these extreme user groups provide use cases that cover all other use cases on our platform. We’ve also learned about the scale, and future scaling potential, that we can try to target, and how these user stories can impact the organisation of the information architecture. We are still accepting feedback from anyone in the MAAS community who is interested in contributing to MAAS for a better experience. Here is the link to the survey.

Our next step is to run an open card sorting session with the team and map the cluster relationships using the K-means clustering method.

Convert the machine details events and logs view to React

This week we finished migrating the machine details pages from AngularJS to React. The new machine details view has many bug fixes, performance improvements, UX and UI tweaks, and a few new features as well.

A lot of work went into this migration so we’re all happy to have this behind us and can’t wait to deliver it to our users in the next release.


Improving LXD VM host support

The UI for LXD VM hosts has undergone major changes in order to highlight resource usage across multiple LXD projects. In the “Resources” tab you can now view how much memory/CPU an LXD project is using in the context of the entire LXD server’s resources.


The original resources card and NUMA node cards have also been rebuilt on a new underlying data structure (which made the project view possible), and have been made more composable should we decide to use these small charts elsewhere.


The last major bit of LXD work is to be able to perform actions on VMs directly from the VM host page instead of having to navigate to the machine list.


The JAAS squad develops the JAAS dashboard for the Juju project.

This iteration the team focused mainly on maintenance tasks.

JAAS/Juju feedback survey

Our team has been working hard on providing organised views over everything from models down to unit details, and on bringing actionable features from the Juju CLI to the dashboard.

We would like to invite all Juju users to participate in this short survey and help us build a better JAAS dashboard user experience.

Please have a look at the survey here:

[JaaS/Juju feedback survey] Bring your experience to our dashboard design!

The purpose of this survey is:

  1. to identify the principal and secondary pieces of information in your day-to-day use, so we can provide a clean but essential view on the dashboard;
  2. to gather usage habits around ‘creating offers’, ‘managing relations and CMR’ and ‘running Juju actions’, so we can better position our new features to maximise ease of use.

The survey takes around 5 minutes.

Thank you!


The Vanilla squad designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

Upstreaming styles from React components

This iteration we focused our efforts on cleaning up our React components library by migrating any custom styling to the Vanilla framework itself.

The biggest part of this was upstreaming the styles for the SearchAndFilter component to Vanilla framework.


We also added a footer to the modal dialog component and empty state for the tables that were previously introduced only in the React components.


Tooltip improvements

We took the time this iteration to make a number of improvements to our tooltip pattern, from minor changes like ensuring they appear above all other visible elements on the page, to more significant changes involving the use of JavaScript.

Our current, default tooltip pattern is based purely on HTML and CSS, and works well in many situations, but it has a few limitations. First, the actual tooltip needs to be a child of the element it should appear next to; otherwise we can’t reliably target the tooltip through CSS alone.

We encountered a situation where it was preferable to have the tooltip element live away from its parent element, so we created a new class – launching with the next Vanilla release – that allows a user to do exactly that, with a small amount of JavaScript listening for certain trigger events.

By introducing JavaScript, we were also able to offer guidance on how and when tooltips should appear. The CSS-only version makes tooltips appear as soon as the cursor passes over the element, which can be distracting if the cursor was actually on its way to another point on the page. And because we want tooltips to appear whether the page is being navigated by mouse or by keyboard, it is also possible for two tooltips to appear at the same time.


Using a JavaScript solution allows us to wait a short time – 200 milliseconds – to ensure the user intends for either the cursor or the current focus of the page to be on the element in question, before showing the tooltip.

We can also then ensure that only one tooltip is ever visible at a time, and make a tooltip disappear when a user has clicked the target element, an event we can be reasonably sure indicates that the user no longer has need of the tooltip and its information.


Logo section

We added a new logo section component that replaces the inline images component. Full blog post to follow.


Snapcraft and Charmhub

The Snapcraft team works closely with the Store team to develop and maintain the Snap Store site and the upcoming Charmhub site.

New look for Snapcraft Forum

The Snapcraft Forum navigation has been given a fresh look, and is now aligned with the other Canonical forums, such as the Ubuntu, MAAS and Charmhub Discourse instances.



We are hiring

  • Home based – EMEA
    Senior UX Designer  ›
    Be part of a team working on web applications for enterprise cloud services, IoT and embedded devices, bringing exciting new projects to life and improving existing ones.

  • Home based – EMEA
    Web Developer ›
    An exceptional opportunity for a web developer to work within a large team of UX and visual designers and developers building websites and apps.

With ♥ from Canonical web team.

12 April, 2021 08:23PM

Sujeevan Vijayakumaran: One year at GitLab: 7 things which I didn't expect to learn

One year ago I joined GitLab as a Solution Architect. In this blog post I do not want to focus on my role, my daily work or anything pandemic-related, and there won’t be a huge focus on all-remote working either. Rather, I want to focus on my personal experiences of the work culture – things which I certainly did not think about before I joined GitLab (or any other company before).

Before joining GitLab I worked for four German companies. As a German with Sri-Lankan Tamil heritage I was always a minority at work. Most of the time it wasn’t an issue – at least that’s what I thought. At all those previous companies my colleagues were mostly white and male, with very few (or even no) non-males, especially in technical and leadership roles. Nowadays, I realize what a huge difference a globally distributed company makes, with people from different countries, cultures, backgrounds and genders.

There were sooo many small things which make a difference and which opened my eyes.

People are pronouncing my name correctly

Some of you might (hopefully) think:

Wait that was an issue!?

Yes, yes it was. And it was super annoying. Working in a globally distributed company means that the default behavior is: people ask how to correctly (!) pronounce your full (!) name. It’s a simple question, and it directly shows respect, even if you struggle to pronounce the name correctly the first time. My name is transcribed from Tamil to English, so when a colleague simply tries to pronounce it in English, it comes out right – and that includes the German GitLab colleagues. In previous jobs there were a lot of colleagues who didn’t ask, and I was “the person with the complicated name”, “you know who”, or some even called me “Sushiwahn”. One former colleague referred to me in a customer phone call as “the other colleague”. That was not cool. If you wonder how to pronounce my name: I uploaded a recording to my profile website at sujeevan.vijayakumaran.com. I should’ve done that way earlier.

The meaning/origin of my name

I never really cared about the meaning of my name. So many people have asked me if my name has a meaning or what its origin was. I didn’t know, and I also didn’t really care. My mum always simply told me: “Your name has style”. Then my teammate Sri one day randomly dropped a message in our team channel:

If you break down your name into the root words, it basically translates to “Good Life” (Sujeevan) and “Prince of Victory” (Vijayakumaran).

That blew my mind 🤯.

#BlackLivesMatter and #StopAsianHate

So many terrible things happened around the world in the last year. When these two movements emerged, they were a big topic in the company – there were even messages in our #company-fyi channel, which is normally reserved for company-related announcements. While #BlackLivesMatter was covered in the German media, #StopAsianHate was barely covered in Germany at all.

Around the time of #BlackLivesMatter my manager asked in our team meeting how (and if) it affected us, even though we in the EMEA team are far away from the US. I had the chance to share stories from my past which wouldn’t have happened to the average white person in a white-majority country. This never happened in any other company I worked for before. When the Berlin truck attack at a Christmas market happened back in 2016, it was a big topic at lunchtime with colleagues. When a racist shot migrants in Hanau in February 2020, it was not really a topic at work. In one of the attacks Germans were the victims; in the other, it was migrants. Both shootings happened before I joined GitLab. When there was a shooting in Vienna in November 2020, colleagues directly jumped into the local Vienna channel and asked if everyone was okay. See the difference!


We have a #lang-de channel for German (the language, not the country!) related content. There are obviously many other channels for other languages. What surprised me? There were way more people outside of Germany, without any German background, learning or trying to learn German. It’s a small thing, but it’s cool! There were many discussions about word meanings and how to deal with the German language. Personally, that got me thinking about whether I should pick up learning French again.

Meanings of Emojis

There are a lot of emojis, especially in Slack. At the beginning it was somewhat overwhelming, but I got used to it. One thing which confused me right after I joined was the usage of the 🙏🏻 emoji. Interestingly, the name of the emoji is simply “folded hands”. But what does it mean? When I first saw it used I was somewhat confused. For me as a Hindu it clearly means “praying”. The second meaning which comes to my mind is its use as a form of greeting – see Namaste. However, there are many colleagues who use it as a form of “thanks”, or even “sorry”. Emojis have different meanings in different cultures!

Different People – Different Mindsets

Since GitLab is an all-remote company, our informal communication happens in Slack channels and in coffee chats over Zoom calls. In my first weeks I scheduled a lot of coffee chats to get to know my teammates and some colleagues in other teams. The most useful bot (for me) in Slack is the Donut Bot, which randomly connects two people every two weeks. I don’t have to take care of randomly selecting people from different teams and departments – and honestly, I would most likely be somewhat biased if I cherry-picked people to schedule coffee chats with.

So every two weeks I get matched with some “random” person. This lowers the bar to talking to someone from a department where I (shamefully) thought: “Oh, that role sounds boring to me.” But if it sounds boring to me, that’s the first sign that I should talk to them. Without the Donut Bot I most likely wouldn’t have talked to someone from the Legal department, just to give one small example. And there were also a lot of engineers who hadn’t really talked to someone from Sales, which is the team I am part of. Even though we don’t need to talk about work-related things, I generally learn something new by the time I leave the conversation.

However, the more interesting part is getting to know all the different people in different countries and on different continents, with different cultures. Many colleagues have left their home country and live somewhere else. The majority of them fall into either the group “I moved because of my (previous) work” or “I moved because of my partner”. The most surprising sentence came from a Canadian colleague, though:

I’m thinking of relocating to Germany for a couple of years since it’s easily possible with GitLab. All my friends here are migrants and I really want to experience how it is to learn a new language and live in another country.

That was by far the most interesting reason I’ve heard so far! Besides that, my favorite question for those who moved away from their home country is what they’re missing, and what they would miss if they moved back. This also leads to some fascinating stories. Most of them relate to food, some to specific medicine, and some reasons are even along the lines of “I like the $specific_mentality over here, which I would miss”.

I’ve left out the more obvious parts of a globally distributed team, like getting to know what life is like in the not-so-rich countries of the world. Also, I finally understood the difference between the average German and the average Silicon Valley person: the latter is way more open to a visionary goal, while the average German wants to keep their safe job for a long time (yes, even in IT).

Mental Health Awareness

We have a lot of content related to mental health which I still need to check out. It’s a super important topic on so many different levels. At all my previous employers this was not a topic at all – I might even say it’s generally a taboo topic. One thing which I definitely did not expect was the Family and Friends Day, which was introduced in May 2020, shortly after I joined the company, in response to the COVID lockdowns, and which has happened nearly every month since. On that day (nearly) the whole company has a day off to spend time with their family and friends. My German friends’ reaction to that was something like:

Wait didn’t you join a hyper-growth startup-ish company? That doesn’t sound like late-stage capitalism what I would have expected!

In addition to that, there’s also a #mental-health-aware Slack channel where everyone can talk about their problems. I was really surprised to see so many team members share their problems and what they are currently struggling with. I couldn’t have imagined that people would share very personal stories within the company – including their experience of getting help from a therapist.

As somewhat of an introvert who struggles to talk to a lot of people in big groups in real life, I have found this past year (and a few more months) relatively easy to handle in this regard, as I have only met four team members in person so far. However, the first in-person company event is coming up, and I’m pretty confident that getting in touch with a lot of mostly unknown people will be easier than at other companies I’ve worked for.

Things which I totally expected

There are still things which I expected to work as intended. Here’s a short list:

  • All-remote and working async works pretty damn well, and I really don’t want to go back to an office
  • Spending company money is easy and definitely not a hassle
  • Not having to justify how and when I exactly work is a huge relief
  • Not being forced to request paid time off is an unfamiliar feeling at the beginning, but I got used to it pretty quickly
  • Working with people with a vision who can additionally identify with the company is great
  • No real barriers between teams and departments
  • Values matter

For me personally, GitLab has set the bar pretty high for companies I might work for in the future. That’s good and bad at the same time ;-). If you want to read another “1 year at GitLab” story, I can highly recommend the blog post by dnsmichi from a month ago.

12 April, 2021 07:15PM

hackergotchi for Purism PureOS

Purism PureOS

App Showcase: Drawing

Drawing is a simple app in the PureOS store to doodle on a digital canvas.

Image by: Michael Frankenstein

With Drawing, you can import and clip images or start from scratch and make unique artwork.

From the essential pencil tool that adds color, to more advanced filters that affect the entire picture, Drawing has you covered.

Whether you need to edit an image or create one from scratch, Drawing is a handy tool for any screen size.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we the people stand up for our digital rights, and where we place control of your data and your family’s data back where it belongs: in your own hands.

Order now

The post App Showcase: Drawing appeared first on Purism.

12 April, 2021 03:59PM by David Hamner

April 11, 2021

hackergotchi for VyOS



Hello, fellow VyOS community members, this is Christian again!

As promised in my last post about BGP L2VPN/EVPN support via VXLAN transport, this post is one of the announced follow-ups. Today's post is all about how to build a multi-tenant capable service provider network leveraging only open-source solutions.

To get this lab to work, you will need the latest VyOS 1.4 rolling release, as certain required features were only added as of the 2021-04-11 build.

11 April, 2021 04:14PM by Christian Pössinger (christian@poessinger.com)

April 10, 2021

hackergotchi for Qubes


Get paid to support Qubes development through automated testing! (six-month contract)

The Qubes OS Project is seeking an expert in automated testing. We use OpenQA and GitLab CI to test changes to the Qubes OS source code and automated building from source. We’re looking for someone who can help with improving both the automated tests themselves and the testing infrastructure.

This is a paid position on a six-month part-time contract with a budgeted rate of $30-50 USD per hour through the Internews BASICS project (Building Analytical and Support Infrastructure for Critical Security tools):


10 April, 2021 12:00AM

April 09, 2021

hackergotchi for Purism PureOS

Purism PureOS

The Simplicity of Making Librem 5 Apps

Getting started with developing applications for a mobile platform can be a challenging task, especially when it comes to building and testing the application on the mobile device itself.

The Librem 5 makes its application development workflow extremely simple.

  • You don’t need to worry about registering a developer account with some parent company.
  • You don’t need to register your testing devices and ask permission from a parent company just to be able to build and run your applications on those devices.
  • You don’t need to “Jailbreak” your devices in order to access some restricted software or hardware features.
  • And the best part is that you don’t need to worry about cross-platform compilation, because you can use the development tools directly on the phone.

The “quick start” video below that I made for the Librem 5 developers documentation demonstrates how quickly you can get up and running with making your own GTK applications on a Librem 5.

In this video, I have attached a Librem 5 to an external keyboard, mouse and monitor through a USB-C hub, and I use GNOME Builder to quickly create a new GTK application project, build it and run it on both the big desktop monitor and the small mobile screen with just a drag and drop across the screens.

Yes, I do all that with the computing power of the Librem 5 alone! There are no special effects and no hidden desktop computer. I even recorded the screencast with an external device, so it shows the real speed of the Librem 5 while driving a 32″ Full HD monitor.

The post The Simplicity of Making Librem 5 Apps appeared first on Purism.

09 April, 2021 04:46PM by François Téchené

hackergotchi for Grml developers

Grml developers

Michael Prokop: A Ceph war story

It all started with the big bang! We nearly lost 33 of 36 disks on a Proxmox/Ceph Cluster; this is the story of how we recovered them.

At the end of 2020, we eventually had a long-outstanding maintenance window for taking care of system upgrades at a customer. During this maintenance window, which involved reboots of the server systems, the involved Ceph cluster unexpectedly went into a critical state. What was planned as a few hours of checklist work in the early evening turned into an emergency case; let’s call it a nightmare (not only because it included a big part of the night). Since we learned a few things from our post-mortem and RCA, it’s worth sharing them with others. But first things first: let’s step back and clarify what we had to deal with.

The system and its upgrade

One part of the upgrade included 3 Debian servers (we’re calling them server1, server2 and server3 here), running on Proxmox v5 + Debian/stretch with 12 Ceph OSDs each (65.45TB in total), a so-called Proxmox Hyper-Converged Ceph Cluster.

First, we went for upgrading the Proxmox v5/stretch system to Proxmox v6/buster, before updating Ceph Luminous v12.2.13 to the latest v14.2 release, supported by Proxmox v6/buster. The Proxmox upgrade included updating corosync from v2 to v3. As part of this upgrade, we had to apply some configuration changes, like adjust ring0 + ring1 address settings and add a mon_host configuration to the Ceph configuration.

During the first two servers’ reboots, we noticed configuration glitches. After fixing those, we went for a reboot of the third server as well. Then we noticed that several Ceph OSDs were unexpectedly down. The NTP service wasn’t working as expected after the upgrade. The underlying issue is a race condition of ntp with systemd-timesyncd (see #889290). As a result, we had clock skew problems with Ceph, indicating that the Ceph monitors’ clocks aren’t running in sync (which is essential for proper Ceph operation). We initially assumed that our Ceph OSD failure derived from this clock skew problem, so we took care of it. After yet another round of reboots, to ensure the systems are running all with identical and sane configurations and services, we noticed lots of failing OSDs. This time all but three OSDs (19, 21 and 22) were down:

% sudo ceph osd tree
-1       65.44138 root default
-2       21.81310     host server1
 0   hdd  1.08989         osd.0    down  1.00000 1.00000
 1   hdd  1.08989         osd.1    down  1.00000 1.00000
 2   hdd  1.63539         osd.2    down  1.00000 1.00000
 3   hdd  1.63539         osd.3    down  1.00000 1.00000
 4   hdd  1.63539         osd.4    down  1.00000 1.00000
 5   hdd  1.63539         osd.5    down  1.00000 1.00000
18   hdd  2.18279         osd.18   down  1.00000 1.00000
20   hdd  2.18179         osd.20   down  1.00000 1.00000
28   hdd  2.18179         osd.28   down  1.00000 1.00000
29   hdd  2.18179         osd.29   down  1.00000 1.00000
30   hdd  2.18179         osd.30   down  1.00000 1.00000
31   hdd  2.18179         osd.31   down  1.00000 1.00000
-4       21.81409     host server2
 6   hdd  1.08989         osd.6    down  1.00000 1.00000
 7   hdd  1.08989         osd.7    down  1.00000 1.00000
 8   hdd  1.63539         osd.8    down  1.00000 1.00000
 9   hdd  1.63539         osd.9    down  1.00000 1.00000
10   hdd  1.63539         osd.10   down  1.00000 1.00000
11   hdd  1.63539         osd.11   down  1.00000 1.00000
19   hdd  2.18179         osd.19     up  1.00000 1.00000
21   hdd  2.18279         osd.21     up  1.00000 1.00000
22   hdd  2.18279         osd.22     up  1.00000 1.00000
32   hdd  2.18179         osd.32   down  1.00000 1.00000
33   hdd  2.18179         osd.33   down  1.00000 1.00000
34   hdd  2.18179         osd.34   down  1.00000 1.00000
-3       21.81419     host server3
12   hdd  1.08989         osd.12   down  1.00000 1.00000
13   hdd  1.08989         osd.13   down  1.00000 1.00000
14   hdd  1.63539         osd.14   down  1.00000 1.00000
15   hdd  1.63539         osd.15   down  1.00000 1.00000
16   hdd  1.63539         osd.16   down  1.00000 1.00000
17   hdd  1.63539         osd.17   down  1.00000 1.00000
23   hdd  2.18190         osd.23   down  1.00000 1.00000
24   hdd  2.18279         osd.24   down  1.00000 1.00000
25   hdd  2.18279         osd.25   down  1.00000 1.00000
35   hdd  2.18179         osd.35   down  1.00000 1.00000
36   hdd  2.18179         osd.36   down  1.00000 1.00000
37   hdd  2.18179         osd.37   down  1.00000 1.00000

Our blood pressure increased slightly! Did we just lose all of our cluster? What happened, and how can we get all the other OSDs back?

We stumbled upon this beauty in our logs:

kernel: [   73.697957] XFS (sdl1): SB stripe unit sanity check failed
kernel: [   73.698002] XFS (sdl1): Metadata corruption detected at xfs_sb_read_verify+0x10e/0x180 [xfs], xfs_sb block 0xffffffffffffffff
kernel: [   73.698799] XFS (sdl1): Unmount and run xfs_repair
kernel: [   73.699199] XFS (sdl1): First 128 bytes of corrupted metadata buffer:
kernel: [   73.699677] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
kernel: [   73.700205] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
kernel: [   73.700836] 00000020: 62 44 2b c0 e6 22 40 d7 84 3d e1 cc 65 88 e9 d8  bD+.."@..=..e...
kernel: [   73.701347] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
kernel: [   73.701770] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
ceph-disk[4240]: mount: /var/lib/ceph/tmp/mnt.jw367Y: mount(2) system call failed: Structure needs cleaning.
ceph-disk[4240]: ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', u'xfs', '-o', 'noatime,inode64', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.cdda39ed-5
ceph/tmp/mnt.jw367Y']' returned non-zero exit status 32
kernel: [   73.702162] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
kernel: [   73.702550] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
kernel: [   73.702975] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
kernel: [   73.703373] XFS (sdl1): SB validate failed with error -117.

The same issue was present for the other failing OSDs. We hoped, that the data itself was still there, and only the mounting of the XFS partitions failed. The Ceph cluster was initially installed in 2017 with Ceph jewel/10.2 with the OSDs on filestore (nowadays being a legacy approach to storing objects in Ceph). However, we migrated the disks to bluestore since then (with ceph-disk and not yet via ceph-volume what’s being used nowadays). Using ceph-disk introduces these 100MB XFS partitions containing basic metadata for the OSD.
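
Before experimenting on these metadata partitions, it pays to take a raw backup of each one, as we did via dd. A minimal sketch; the device and backup path here are examples, not our actual values:

```shell
# Back up the small XFS metadata partition before touching it (device/paths are examples)
DEV=/dev/sdc1
sudo dd if="$DEV" of="/srv/backup/$(hostname)-$(basename "$DEV").img" bs=1M conv=fsync status=progress
```

With a byte-for-byte image on the network share, any experiment on the partition stays reversible.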

Given that we had three working OSDs left, we decided to investigate how to rebuild the failing ones. Some folks on #ceph (thanks T1, ormandj + peetaur!) were kind enough to share how working XFS partitions looked like for them. After creating a backup (via dd), we tried to re-create such an XFS partition on server1. We noticed that even mounting a freshly created XFS partition failed:

synpromika@server1 ~ % sudo mkfs.xfs -f -i size=2048 -m uuid="4568c300-ad83-4288-963e-badcd99bf54f" /dev/sdc1
meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
synpromika@server1 ~ % sudo mount /dev/sdc1 /mnt/ceph-recovery
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
cache_node_purge: refcount was 1, not zero (node=0x1d3c400)
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x18800/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x18800/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0x24c00/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x24c00/0x1000
SB stripe unit sanity check failed
Metadata corruption detected at 0x433840, xfs_sb block 0xc400/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0xc400/0x1000
releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!releasing dirty buffer (bulk) to free list!found dirty buffer (bulk) on free list!bad magic number
bad magic number
Metadata corruption detected at 0x433840, xfs_sb block 0x0/0x1000
libxfs_writebufr: write verifer failed on xfs_sb bno 0x0/0x1000
releasing dirty buffer (bulk) to free list!mount: /mnt/ceph-recovery: wrong fs type, bad option, bad superblock on /dev/sdc1, missing codepage or helper program, or other error.

Ouch. This very much looked related to the actual issue we’re seeing. So we tried to execute mkfs.xfs with a bunch of different sunit/swidth settings. Using ‘-d sunit=512 -d swidth=512‘ at least worked then, so we decided to force its usage in the creation of our OSD XFS partition. This brought us a working XFS partition. Please note, sunit must not be larger than swidth (more on that later!).
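
For reference, this is roughly what the working invocation looked like, sketched with the UUID of osd.0 from the dump above and the mount options from the earlier ceph-disk log; the sunit/swidth values are simply the ones that happened to work for our disks:

```shell
# Recreate the OSD metadata partition with forced stripe settings (values from our case)
sudo mkfs.xfs -f -i size=2048 -d sunit=512 -d swidth=512 \
  -m uuid="4568c300-ad83-4288-963e-badcd99bf54f" /dev/sdc1
sudo mount -t xfs -o noatime,inode64 /dev/sdc1 /mnt/ceph-recovery
```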

Then we reconstructed how to restore all the metadata for the OSD (activate.monmap, active, block_uuid, bluefs, ceph_fsid, fsid, keyring, kv_backend, magic, mkfs_done, ready, require_osd_release, systemd, type, whoami). To identify the UUID, we can read the data from ‘ceph --format json osd dump‘, like this for all our OSDs (Zsh syntax ftw!):

synpromika@server1 ~ % for f in {0..37} ; printf "osd-$f: %s\n" "$(sudo ceph --format json osd dump | jq -r ".osds[] | select(.osd==$f) | .uuid")"
osd-0: 4568c300-ad83-4288-963e-badcd99bf54f
osd-1: e573a17a-ccde-4719-bdf8-eef66903ca4f
osd-2: 0e1b2626-f248-4e7d-9950-f1a46644754e
osd-3: 1ac6a0a2-20ee-4ed8-9f76-d24e900c800c

Identifying the corresponding raw device for each OSD UUID is possible via:

synpromika@server1 ~ % UUID="4568c300-ad83-4288-963e-badcd99bf54f"
synpromika@server1 ~ % readlink -f /dev/disk/by-partuuid/"${UUID}"

The OSD’s key ID can be retrieved via:

synpromika@server1 ~ % OSD_ID=0
synpromika@server1 ~ % sudo ceph auth get osd."${OSD_ID}" -f json 2>/dev/null | jq -r '.[] | .key'

Now we also need to identify the underlying block device:

synpromika@server1 ~ % OSD_ID=0
synpromika@server1 ~ % sudo ceph osd metadata osd."${OSD_ID}" -f json | jq -r '.bluestore_bdev_partition_path'    

With all of this, we reconstructed the keyring, fsid, whoami, block + block_uuid files. All the other files inside the XFS metadata partition are identical on each OSD. So after placing and adjusting the corresponding metadata on the XFS partition for Ceph usage, we got a working OSD – hurray! Since we had to fix yet another 32 OSDs, we decided to automate this XFS partitioning and metadata recovery procedure.

We had a network share available on /srv/backup for storing backups of existing partition data. On each server, we tested the procedure with a single OSD before iterating over the list of remaining failing OSDs. We started with a shell script on server1, then adjusted it for server2 and server3, and finally executed it on the 3rd server.
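The actual script is specific to our environment and not reproduced here, but a heavily simplified, hypothetical sketch of the per-OSD steps it automated could look like the following. Everything in it (the function name, backup paths, template directory) is illustrative, and it only prints the commands instead of executing them:

```shell
# Hypothetical sketch of the automated per-OSD recovery. It only PRINTS the
# command sequence (review carefully before running anything); the real
# script also handled the remaining metadata files, error checking and logging.
recovery_plan() {
  osd_id=$1 dev=$2 uuid=$3
  mnt="/var/lib/ceph/osd/ceph-${osd_id}"
  cat <<EOF
dd if=${dev} of=/srv/backup/osd-${osd_id}.dd bs=4M     # back up broken partition
mkfs.xfs -f -i size=2048 -d sunit=512 -d swidth=512 ${dev}
mount -o sunit=512,swidth=512 ${dev} ${mnt}
cp /srv/backup/xfs-metadata-template/* ${mnt}/          # files identical on all OSDs
echo ${osd_id} > ${mnt}/whoami                          # per-OSD metadata
echo ${uuid} > ${mnt}/fsid
EOF
}

recovery_plan 0 /dev/sdc1 4568c300-ad83-4288-963e-badcd99bf54f
```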

Thanks to this, we managed to get the Ceph cluster up and running again. We didn’t want to continue with the Ceph upgrade itself during the night though, as we wanted to know exactly what was going on and why the system behaved like that. Time for RCA!

Root Cause Analysis

So all but three OSDs on server2 failed, and the problem seemed to be related to XFS. Therefore, our starting point for the RCA was to identify what was different on server2 compared to server1 + server3. My initial assumption was that this was related to some firmware issue with the involved controller (and as it turned out later, I was right!). The disks were attached as JBOD devices to a ServeRAID M5210 controller (with a stripe size of 512). Firmware state:

synpromika@server1 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.16.0-0092
Firmware Version = 4.660.00-8156

synpromika@server2 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.21.0-0112
Firmware Version = 4.680.00-8489

synpromika@server3 ~ % sudo storcli64 /c0 show all | grep '^Firmware'
Firmware Package Build = 24.16.0-0092
Firmware Version = 4.660.00-8156

This looked very promising, as server2 indeed runs with a different firmware version on the controller. But how so? Well, the motherboard of server2 got replaced by a Lenovo/IBM technician in January 2020, as we had a failing memory slot during a memory upgrade. As part of this procedure, the Lenovo/IBM technician installed the latest firmware versions. According to our documentation, some OSDs were rebuilt (due to the filestore->bluestore migration) in March and April 2020. It turned out that precisely those OSDs were the ones that survived the upgrade. So the surviving drives were created with a different firmware version running on the involved controller. All the other OSDs were created with an older controller firmware. But what difference does this make?

Now let’s check firmware changelogs. For the 24.21.0-0097 release we found this:

- Cannot create or mount xfs filesystem using xfsprogs 4.19.x kernel 4.20(SCGCQ02027889)
- xfs_info command run on an XFS file system created on a VD of strip size 1M shows sunit and swidth as 0(SCGCQ02056038)

Our XFS problem was certainly related to the controller’s firmware. We also recalled that our monitoring system had reported different sunit settings for the OSDs that were rebuilt in March and April. For example, OSD 21 was recreated and got different sunit settings:

WARN  server2.example.org  Mount options of /var/lib/ceph/osd/ceph-21      WARN - Missing: sunit=1024, Exceeding: sunit=512

We compared the new OSD 21 with an existing one (OSD 25 on server3):

synpromika@server2 ~ % systemctl show var-lib-ceph-osd-ceph\\x2d21.mount | grep sunit
synpromika@server3 ~ % systemctl show var-lib-ceph-osd-ceph\\x2d25.mount | grep sunit

Thanks to our documentation, we could compare execution logs of their creation:

% diff -u ceph-disk-osd-25.log ceph-disk-osd-21.log
-synpromika@server2 ~ % sudo ceph-disk -v prepare --bluestore /dev/sdj --osd-id 25
+synpromika@server3 ~ % sudo ceph-disk -v prepare --bluestore /dev/sdi --osd-id 21
-command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdj1
-meta-data=/dev/sdj1              isize=2048   agcount=4, agsize=6272 blks
+command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdi1
+meta-data=/dev/sdi1              isize=2048   agcount=4, agsize=6336 blks
          =                       sectsz=4096  attr=2, projid32bit=1
          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
-data     =                       bsize=4096   blocks=25088, imaxpct=25
-         =                       sunit=128    swidth=64 blks
+data     =                       bsize=4096   blocks=25344, imaxpct=25
+         =                       sunit=64     swidth=64 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal log           bsize=4096   blocks=1608, version=2
          =                       sectsz=4096  sunit=1 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0

Back then we even tried to track this down, but couldn’t make sense of it yet. Now, though, this sounds very much related to the problem we saw with this Ceph/XFS failure. Following Occam’s razor – the simplest explanation is usually the right one – let’s check the disk properties and see what differs:

synpromika@server1 ~ % sudo blockdev --getsz --getsize64 --getss --getpbsz --getiomin --getioopt /dev/sdk

synpromika@server2 ~ % sudo blockdev --getsz --getsize64 --getss --getpbsz --getiomin --getioopt /dev/sdk

See the difference between server1 and server2 for identical disks? The getiomin option now reports something different for them:

synpromika@server1 ~ % sudo blockdev --getiomin /dev/sdk            
synpromika@server1 ~ % cat /sys/block/sdk/queue/minimum_io_size

synpromika@server2 ~ % sudo blockdev --getiomin /dev/sdk 
synpromika@server2 ~ % cat /sys/block/sdk/queue/minimum_io_size

It doesn’t make sense that the minimum I/O size (iomin, AKA BLKIOMIN) is bigger than the optimal I/O size (ioopt, AKA BLKIOOPT). This led us to Bug 202127 – cannot mount or create xfs on a 597T device, which matches our findings here. But why did this XFS partition work in the past and fail now with the newer kernel version?
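Since iomin exceeding ioopt is the tell-tale symptom here, a small helper makes it easy to flag. This is just a sketch (check_io_sizes is our own name for it), with sample values mirroring the two firmware states on our servers:

```shell
# Flag devices whose reported minimum I/O size exceeds the optimal I/O
# size -- the symptom of the broken controller firmware (sketch).
check_io_sizes() {
  iomin=$1 ioopt=$2
  if [ "$ioopt" -gt 0 ] && [ "$iomin" -gt "$ioopt" ]; then
    echo "suspicious: iomin ($iomin) > ioopt ($ioopt)"
  else
    echo "OK"
  fi
}

# On a live system, feed it the sysfs values, e.g.:
#   check_io_sizes "$(cat /sys/block/sdk/queue/minimum_io_size)" \
#                  "$(cat /sys/block/sdk/queue/optimal_io_size)"
check_io_sizes 524288 262144   # broken-firmware style values
check_io_sizes 262144 262144   # healthy values
```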

The XFS behaviour change

Now, given that we had backups of all the XFS partitions, we wanted to track down a) when this XFS behaviour was introduced, and b) whether, and if so how, it would be possible to reuse the XFS partition without having to rebuild it from scratch (e.g. if you had no working Ceph OSD or backups left).

Let’s look at such a failing XFS partition with the Grml live system:

root@grml ~ # grml-version
grml64-full 2020.06 Release Codename Ausgehfuahangl [2020-06-24]
root@grml ~ # uname -a
Linux grml 5.6.0-2-amd64 #1 SMP Debian 5.6.14-2 (2020-06-09) x86_64 GNU/Linux
root@grml ~ # grml-hostname grml-2020-06
Setting hostname to grml-2020-06: done
root@grml ~ # exec zsh
root@grml-2020-06 ~ # dpkg -l xfsprogs util-linux
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
ii  util-linux     2.35.2-4     amd64        miscellaneous system utilities
ii  xfsprogs       5.6.0-1+b2   amd64        Utilities for managing the XFS filesystem

There it’s failing, no matter which mount option we try:

root@grml-2020-06 ~ # mount ./sdd1.dd /mnt
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
root@grml-2020-06 ~ # dmesg | tail -30
[   64.788640] XFS (loop1): SB stripe unit sanity check failed
[   64.788671] XFS (loop1): Metadata corruption detected at xfs_sb_read_verify+0x102/0x170 [xfs], xfs_sb block 0xffffffffffffffff
[   64.788671] XFS (loop1): Unmount and run xfs_repair
[   64.788672] XFS (loop1): First 128 bytes of corrupted metadata buffer:
[   64.788673] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
[   64.788674] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[   64.788675] 00000020: 32 b6 dc 35 53 b7 44 96 9d 63 30 ab b3 2b 68 36  2..5S.D..c0..+h6
[   64.788675] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
[   64.788675] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
[   64.788676] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
[   64.788677] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
[   64.788677] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
[   64.788679] XFS (loop1): SB validate failed with error -117.
root@grml-2020-06 ~ # mount -t xfs -o rw,relatime,attr2,inode64,sunit=1024,swidth=512,noquota ./sdd1.dd /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/loop1, missing codepage or helper program, or other error.
32 root@grml-2020-06 ~ # dmesg | tail -1
[   66.342976] XFS (loop1): stripe width (512) must be a multiple of the stripe unit (1024)
root@grml-2020-06 ~ # mount -t xfs -o rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota ./sdd1.dd /mnt/
mount: /mnt: mount(2) system call failed: Structure needs cleaning.
32 root@grml-2020-06 ~ # dmesg | tail -14
[   66.342976] XFS (loop1): stripe width (512) must be a multiple of the stripe unit (1024)
[   80.751277] XFS (loop1): SB stripe unit sanity check failed
[   80.751323] XFS (loop1): Metadata corruption detected at xfs_sb_read_verify+0x102/0x170 [xfs], xfs_sb block 0xffffffffffffffff 
[   80.751324] XFS (loop1): Unmount and run xfs_repair
[   80.751325] XFS (loop1): First 128 bytes of corrupted metadata buffer:
[   80.751327] 00000000: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 62 00  XFSB..........b.
[   80.751328] 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[   80.751330] 00000020: 32 b6 dc 35 53 b7 44 96 9d 63 30 ab b3 2b 68 36  2..5S.D..c0..+h6
[   80.751331] 00000030: 00 00 00 00 00 00 40 08 00 00 00 00 00 00 01 00  ......@.........
[   80.751331] 00000040: 00 00 00 00 00 00 01 01 00 00 00 00 00 00 01 02  ................
[   80.751332] 00000050: 00 00 00 01 00 00 18 80 00 00 00 04 00 00 00 00  ................
[   80.751333] 00000060: 00 00 06 48 bd a5 10 00 08 00 00 02 00 00 00 00  ...H............
[   80.751334] 00000070: 00 00 00 00 00 00 00 00 0c 0c 0b 01 0d 00 00 19  ................
[   80.751338] XFS (loop1): SB validate failed with error -117.

xfs_repair doesn’t help either:

root@grml-2020-06 ~ # xfs_info ./sdd1.dd
meta-data=./sdd1.dd              isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

root@grml-2020-06 ~ # xfs_repair ./sdd1.dd
Phase 1 - find and verify superblock...
bad primary superblock - bad stripe width in superblock !!!

attempting to find secondary superblock...
..............................................................................................Sorry, could not find valid secondary superblock
Exiting now.

With the “SB stripe unit sanity check failed” message, we could easily track this down to the following commit fa4ca9c:

% git show fa4ca9c5574605d1e48b7e617705230a0640b6da | cat
commit fa4ca9c5574605d1e48b7e617705230a0640b6da
Author: Dave Chinner <dchinner@redhat.com>
Date:   Tue Jun 5 10:06:16 2018 -0700
    xfs: catch bad stripe alignment configurations
    When stripe alignments are invalid, data alignment algorithms in the
    allocator may not work correctly. Ensure we catch superblocks with
    invalid stripe alignment setups at mount time. These data alignment
    mismatches are now detected at mount time like this:
    XFS (loop0): SB stripe unit sanity check failed
    XFS (loop0): Metadata corruption detected at xfs_sb_read_verify+0xab/0x110, xfs_sb block 0xffffffffffffffff
    XFS (loop0): Unmount and run xfs_repair
    XFS (loop0): First 128 bytes of corrupted metadata buffer:
    0000000091c2de02: 58 46 53 42 00 00 10 00 00 00 00 00 00 00 10 00  XFSB............
    0000000023bff869: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00000000cdd8c893: 17 32 37 15 ff ca 46 3d 9a 17 d3 33 04 b5 f1 a2  .27...F=...3....
    000000009fd2844f: 00 00 00 00 00 00 00 04 00 00 00 00 00 00 06 d0  ................
    0000000088e9b0bb: 00 00 00 00 00 00 06 d1 00 00 00 00 00 00 06 d2  ................
    00000000ff233a20: 00 00 00 01 00 00 10 00 00 00 00 01 00 00 00 00  ................
    000000009db0ac8b: 00 00 03 60 e1 34 02 00 08 00 00 02 00 00 00 00  ...`.4..........
    00000000f7022460: 00 00 00 00 00 00 00 00 0c 09 0b 01 0c 00 00 19  ................
    XFS (loop0): SB validate failed with error -117.
    And the mount fails.
    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
    Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
    Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>

diff --git fs/xfs/libxfs/xfs_sb.c fs/xfs/libxfs/xfs_sb.c
index b5dca3c8c84d..c06b6fc92966 100644
--- fs/xfs/libxfs/xfs_sb.c
+++ fs/xfs/libxfs/xfs_sb.c
@@ -278,6 +278,22 @@ xfs_mount_validate_sb(
                return -EFSCORRUPTED;
+       if (sbp->sb_unit) {
+               if (!xfs_sb_version_hasdalign(sbp) ||
+                   sbp->sb_unit > sbp->sb_width ||
+                   (sbp->sb_width % sbp->sb_unit) != 0) {
+                       xfs_notice(mp, "SB stripe unit sanity check failed");
+                       return -EFSCORRUPTED;
+               } 
+       } else if (xfs_sb_version_hasdalign(sbp)) { 
+               xfs_notice(mp, "SB stripe alignment sanity check failed");
+               return -EFSCORRUPTED;
+       } else if (sbp->sb_width) {
+               xfs_notice(mp, "SB stripe width sanity check failed");
+               return -EFSCORRUPTED;
+       }
        if (xfs_sb_version_hascrc(&mp->m_sb) &&
            sbp->sb_blocksize < XFS_MIN_CRC_BLOCKSIZE) {
                xfs_notice(mp, "v5 SB sanity check failed");

This change is included in kernel versions 4.18-rc1 and newer:

% git describe --contains fa4ca9c5574605d1e48

Now let’s try with an older kernel version (4.9.0), using old Grml 2017.05 release:

root@grml ~ # grml-version
grml64-small 2017.05 Release Codename Freedatensuppe [2017-05-31]
root@grml ~ # uname -a
Linux grml 4.9.0-1-grml-amd64 #1 SMP Debian 4.9.29-1+grml.1 (2017-05-24) x86_64 GNU/Linux
root@grml ~ # lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.0 (stretch)
Release:        9.0
Codename:       stretch
root@grml ~ # grml-hostname grml-2017-05
Setting hostname to grml-2017-05: done
root@grml ~ # exec zsh
root@grml-2017-05 ~ #

root@grml-2017-05 ~ # xfs_info ./sdd1.dd
xfs_info: ./sdd1.dd is not a mounted XFS filesystem
1 root@grml-2017-05 ~ # xfs_repair ./sdd1.dd
Phase 1 - find and verify superblock...
bad primary superblock - bad stripe width in superblock !!!

attempting to find secondary superblock...
..............................................................................................Sorry, could not find valid secondary superblock
Exiting now.
1 root@grml-2017-05 ~ # mount ./sdd1.dd /mnt
root@grml-2017-05 ~ # mount -t xfs
/root/sdd1.dd on /mnt type xfs (rw,relatime,attr2,inode64,sunit=1024,swidth=512,noquota)
root@grml-2017-05 ~ # ls /mnt
activate.monmap  active  block  block_uuid  bluefs  ceph_fsid  fsid  keyring  kv_backend  magic  mkfs_done  ready  require_osd_release  systemd  type  whoami
root@grml-2017-05 ~ # xfs_info /mnt
meta-data=/dev/loop1             isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=128    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mounting there indeed works! Now, if we mount the filesystem with new and proper sunit/swidth settings using the older kernel, it should rewrite them on disk:

root@grml-2017-05 ~ # mount -t xfs -o sunit=512,swidth=512 ./sdd1.dd /mnt/
root@grml-2017-05 ~ # umount /mnt/

And indeed, mounting this rewritten filesystem then also works with newer kernels:

root@grml-2020-06 ~ # mount ./sdd1.rewritten /mnt/
root@grml-2020-06 ~ # xfs_info /root/sdd1.rewritten
meta-data=/dev/loop1             isize=2048   agcount=4, agsize=6272 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=25088, imaxpct=25
         =                       sunit=64    swidth=64 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1608, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@grml-2020-06 ~ # mount -t xfs                
/root/sdd1.rewritten on /mnt type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota)

FTR: The ‘sunit=512,swidth=512‘ from the xfs mount option is identical to xfs_info’s output ‘sunit=64,swidth=64‘ (because mount.xfs’s sunit value is given in 512-byte block units, see man 5 xfs, and the xfs_info output reported here is in blocks with a block size (bsize) of 4096, so ‘sunit = 512*512 := 64*4096‘).
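The unit conversion from that footnote can be verified with plain shell arithmetic:

```shell
# mount.xfs's sunit/swidth are given in 512-byte units, while xfs_info
# reports filesystem blocks (bsize=4096 here); both describe the same
# 256 KiB stripe unit.
echo $(( 512 * 512 ))          # 262144 bytes
echo $(( 512 * 512 / 4096 ))   # 64 blocks of 4096 bytes
```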

mkfs uses the minimum and optimal I/O sizes for the stripe unit and stripe width; you can check them e.g. via the following (note that server2, with the fixed firmware version, reports proper values, whereas server3, with the broken controller firmware, reports nonsense):

synpromika@server2 ~ % for i in /sys/block/sd*/queue/ ; do printf "%s: %s %s\n" "$i" "$(cat "$i"/minimum_io_size)" "$(cat "$i"/optimal_io_size)" ; done
/sys/block/sdc/queue/: 262144 262144
/sys/block/sdd/queue/: 262144 262144
/sys/block/sde/queue/: 262144 262144
/sys/block/sdf/queue/: 262144 262144
/sys/block/sdg/queue/: 262144 262144
/sys/block/sdh/queue/: 262144 262144
/sys/block/sdi/queue/: 262144 262144
/sys/block/sdj/queue/: 262144 262144
/sys/block/sdk/queue/: 262144 262144
/sys/block/sdl/queue/: 262144 262144
/sys/block/sdm/queue/: 262144 262144
/sys/block/sdn/queue/: 262144 262144

synpromika@server3 ~ % for i in /sys/block/sd*/queue/ ; do printf "%s: %s %s\n" "$i" "$(cat "$i"/minimum_io_size)" "$(cat "$i"/optimal_io_size)" ; done
/sys/block/sdc/queue/: 524288 262144
/sys/block/sdd/queue/: 524288 262144
/sys/block/sde/queue/: 524288 262144
/sys/block/sdf/queue/: 524288 262144
/sys/block/sdg/queue/: 524288 262144
/sys/block/sdh/queue/: 524288 262144
/sys/block/sdi/queue/: 524288 262144
/sys/block/sdj/queue/: 524288 262144
/sys/block/sdk/queue/: 524288 262144
/sys/block/sdl/queue/: 524288 262144
/sys/block/sdm/queue/: 524288 262144
/sys/block/sdn/queue/: 524288 262144

This is the underlying reason why the initial XFS partitions were created with incorrect sunit/swidth settings: the broken firmware of server1 and server3 caused them. Those settings were ignored by old(er) xfs/kernel versions, but are treated as an error by newer ones.

Make sure to also read the XFS FAQ regarding “How to calculate the correct sunit,swidth values for optimal performance”. We also stumbled upon two interesting reads in RedHat’s knowledge base: 5075561 + 2150101 (requires an active subscription, though) and #1835947.

Am I affected? How to work around it?

To check whether your XFS mount points are affected by this issue, the following command line should be useful:

awk '$3 == "xfs"{print $2}' /proc/self/mounts | while read mount ; do echo -n "$mount " ; xfs_info $mount | awk '$0 ~ "swidth"{gsub(/.*=/,"",$2); gsub(/.*=/,"",$3); print $2,$3}' | awk '{ if ($1 > $2) print "impacted"; else print "OK"}' ; done

If you run into the above situation, the only known solution to get your original XFS partition working again is to boot into an older kernel version (4.17 or older), mount the XFS partition with correct sunit/swidth settings, and then boot back into your new system (kernel-version-wise).

Lessons learned

  • document everything and make sure all relevant information is available (including the actual times of changes and the kernel/package/firmware/… versions used). Thorough documentation was our most significant asset in this case: we had all the data and information we needed during the emergency handling as well as for the post-mortem/RCA
  • if something changes unexpectedly, dig deeper
  • know who to ask, a network of experts pays off
  • including timestamps in your shell makes reconstruction easier (the more people and documentation involved, the harder it gets to wade through it)
  • keep an eye on changelogs/release notes
  • apply regular updates and don’t forget invisible layers (e.g. BIOS, controller/disk firmware, IPMI/OOB (ILO/RAC/IMM/…) firmware)
  • apply regular reboots, to avoid a possible delta becoming bigger (which makes debugging harder)

Thanks: Darshaka Pathirana, Chris Hofstaedtler and Michael Hanscho.

Looking for help with your IT infrastructure? Let us know!

09 April, 2021 12:39PM


Ubuntu developers

Ubuntu Blog: The State of Robotics – March 2021


It’s never too late to learn. Like any reinforcement learning agent, we are rewarded by the new knowledge we acquire. Likewise, we learn by doing, by rolling up our sleeves and getting to work. (Do you want a hands-on book on reinforcement learning? Here is my personal favourite.)

March has shown us great examples of this. From robots learning to encourage social participation to robots detecting serious environmental problems, it was a learning month.

Learning to become more human 

In a nutshell, human-robot interaction is a field that studies how to develop robots that will work closely with people. It is a fascinating field due to the opportunities it represents. For instance, robots can be used in different emotion-recognition therapies for children with autism.

But this study from KTH Royal Institute of Technology illustrates perfectly what robots are able to learn in order to evoke involvement from people in social contexts. Using a Furhat robot, researchers programmed it to lead a Swedish word game with participants whose proficiency in the Nordic language varied. The robot’s face is optical, created using a high-resolution 180-degree projector together with face masks.


Researchers found that by redirecting Furhat’s gaze to less proficient players, the robot was capable of eliciting involvement from even the most reluctant participants. This might look like a small study, but it shows how robots can very dynamically influence how people participate, react, and take decisions. Additionally, it contributes to the use of robots in educational settings.


“Robot gaze can modify group dynamics — what role people take in a situation,” Ronald Cumbal, researcher at KTH, says. “Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute.”

So don’t think that robots are just machines doing repetitive tasks. Given that we want to incorporate robots into our social world, you will likely see more studies exploring robots’ acceptance in different social environments.

Learning to explore the universe

Last month we learned about Perseverance and Ingenuity, but NASA keeps developing new robots to explore Mars. NASA JPL’s Team CoSTAR presented the results of the first Martian analog testing with an autonomous quadruped, referred to as Au-Spot.

Perseverance is a wheeled rover, which limits it to flat, gently sloping terrain and agglomerate regolith. Rovers cannot tolerate instability and must operate within a low-risk envelope (i.e., low-incline driving to avoid toppling).

Here is where legged robots have an advantage. NASA’s ‘Mars Dog’ is a four-legged robot capable of navigating through hard-to-access planetary surfaces. The robot has unique failure-recovery behaviours, providing a major breakthrough in planetary navigation. 


The system comprises a Spot robot powered by NASA JPL’s “NeBula” AI package, which endows robots with a belief system and higher levels of autonomy. Spot is equipped with a deep-cave exploration payload, including an arm.

Mars Dogs operate in synergy, exhibiting collaborative mobility behaviours to accomplish diverse missions that cannot be fulfilled by a single robot. The aim is for these robots to explore the Martian subsurface, where evidence of past life may persist, and ultimately to assess Mars as a potential shelter for future human inhabitants.


Learning to signal environmental disruptions

DraBot is an electronics-free soft robot, shaped like a dragonfly, that uses air pressure, microarchitectures and self-healing hydrogels to watch for changes in pH, temperature and oil.

Developed by engineers at Duke University, DraBot skims across water and reacts to environmental conditions. 


You might have heard of soft robots before. They are a growing trend in the industry due to their versatility. For DraBot, the soft design allows the robot to handle delicate objects, such as biological tissues, that metal or ceramic components would damage. It also helps robots float, or squeeze into tight spaces where rigid frames would get stuck. For human interaction, soft robotics is key to physically intelligent designs, increasing the safety of cobots that physically interact with people.

DraBot is 2.25 inches long with a 1.4-inch wingspan. It was made by pouring silicone into an aluminium mold and baking it. The team used soft lithography to create interior channels and connected them with flexible silicone tubing.

Movement is created by controlling the air pressure in these silicone interior channels. The channels carry air into the front wings, where it escapes through a series of holes pointed directly into the back wings. 

  • If both back wings are down, the airflow is blocked, and DraBot goes nowhere. 
  • If both wings are up, the airflow is open, and DraBot goes forward.

The team also designed balloon actuators under each of the back wings close to DraBot’s body. If the balloons are inflated, the wings curl upward. By changing which wings are up or down you are now controlling the direction. 

Finally, to detect the pH of the water, DraBot uses a self-healing hydrogel painted on one set of wings. The hydrogel responds to changes in the surrounding water’s pH: if the water becomes acidic, one side’s front wing fuses with the back wing. This makes the robot spin in a circle, changing its trajectory and signalling the environmental change to researchers.

While DraBot is a proof of principle, it could be the precursor to more soft robots that become environmental sentinels, monitoring a wide range of environmental signs.

Learning to be a vacuum…

Well, it looks like Roomba owners have complained their devices appear “drunk” following a software update.

It seems like the devices were moving in strange directions, constantly recharging or not charging at all.

The devices’ maker, iRobot, has acknowledged that its update caused problems for “a limited number” of its i7 and s9 Roomba models, adding that a fix would take “several weeks” to roll out worldwide.

This is another example of why we need to have a plan for updating and rolling back a fleet of robots. Cyberdyne avoided all of these problems in their CLO2 cleaning robots with snaps, ROS and a Brand Store. 

Learning from the best 

A great engineer, mentor and friend. Someone who started working on Unity 8, moved on to snapcraft and snapd, and finished by guiding the robotics team. All-terrain, a genius coder and a magnificent roboticist.

KUDOS to you and thanks for all. 

Here you can find the best robotics tutorials and webinars from someone who will always be part of the team:

ROS Kinetic End Of Life

ROS 1 Kinetic is reaching its end of support, together with Ubuntu Xenial. Do you want to know what your options are? Have a look at our latest blogs.

Migrating could be difficult. Maybe you need more time? Or maybe you are concerned about the stability of newer releases? Don’t worry, we have something just for you. Learn about ROS ESM, a hardened and long-term supported ROS system for robots and their applications.


We learn something new every day, even if we don’t want to 😉 In March, we learned what robotics promises, from social to soft robots, and from update challenges to planning for the moon.

But we also want to learn from you, our readers! We’d love to hear about your ROS and/or robotics-related project or startup and feature it next month on our blog. Send a summary to
robotics.community@canonical.com, and we’ll be in touch. Thanks for reading.

09 April, 2021 10:00AM

April 08, 2021

Podcast Ubuntu Portugal: Ep 137 – Cura

In an exercise of reconsidering whether or not to rename this podcast to Streaming, red wine, beer and other stuff, or to keep the current name, we took a tour through the mundane matters plaguing your favourite hosts.

You know the drill: listen, subscribe and share!

  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/machine-learning-zero-to-hero-manning-publications-books?parner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal


You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

08 April, 2021 09:45PM

hackergotchi for Xanadu developers

Xanadu developers

Changing Squid’s User-agent

To hide the User-agent of the Squid service, you only need to add these lines to the configuration file:

request_header_access User-Agent deny all
request_header_replace User-Agent Mozilla/5.0 (iPhone; CPU OS 11_0 like Mac OS X) AppleWebKit/604.1.25 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1

Then run the following command as root:

squid -k reconfigure

I hope this information is useful to you. Cheers…

08 April, 2021 08:56PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Kubernetes 1.21 available from Canonical

Today, Canonical announces full enterprise support for Kubernetes 1.21, from cloud to edge. Canonical Kubernetes support covers MicroK8s, Charmed Kubernetes and kubeadm. Starting with 1.21, Canonical commits to supporting N-2 releases, as well as providing extended security maintenance (ESM) and patching for N-4 releases in the stable release channel. This allows customers to get new features and product updates for all upstream-supported versions, and to access extended security updates from Canonical for versions no longer supported upstream, aligning with all major cloud providers for enterprise hybrid cloud Kubernetes deployments.

“Canonical Kubernetes is about removing complexity around Kubernetes operations from cloud to edge. We bring certified Kubernetes distributions to allow users to bootstrap their Kubernetes journey, as well as a large tooling ecosystem and automation framework combination, for businesses to reap the K8s benefits and focus on innovation in the growing cloud-native landscape. Our users benefit from the latest features of Kubernetes, as soon as they become available upstream”, commented Alex Chalkias, Product Manager for Kubernetes at Canonical.

MicroK8s is a lightweight, zero-ops, conformant Kubernetes for edge and IoT. 1.21 expands MicroK8s’ tooling catalogue with support, among others, for the latest version of the NVIDIA GPU operator, the popular multi-cloud storage solution OpenEBS, and the OpenFaaS serverless platform. MicroK8s enables developers to iterate rapidly by simplifying their Kubernetes experience and offers the security and robustness necessary in production deployments.

Charmed Kubernetes is an enterprise-scale, composable Kubernetes ideal for multi-cloud deployments and compatible with both cloud services and legacy application architectures. With release 1.21, Charmed Kubernetes users benefit from support for Calico eBPF, allowing users to test the latest Linux kernel networking capabilities in Kubernetes. New Charmed operators for DNS and the Kubernetes dashboard are also available. Charmed Operators wrap applications and services around code alongside metadata and other dependencies to automate lifecycle operations. Charmed Kubernetes and its ecosystem is driven by operators for a streamlined Kubernetes and container deployment and operations experience.

What’s new in Kubernetes 1.21

All upstream Kubernetes 1.21 features are available in MicroK8s and Charmed Kubernetes. Additionally, the following features are new in Canonical Kubernetes 1.21. For the full list of features, you can refer to the Charmed Kubernetes and MicroK8s release notes.

MicroK8s 1.21 highlights

  • New OpenEBS add-on for container attached storage. Try it using `microk8s enable openebs`
  • New OpenFaaS add-on for serverless development. You can try it with `microk8s enable openfaas`
  • GPU support is now offered via the NVIDIA operator. Check here for known issues.
  • `microk8s kubectl apply -f` deployments now work with local files on Windows and macOS
  • Update to support distributions with iptables-nft
  • Support for remote builds. Try building the snap with `snapcraft remote-build --build-on=amd64,arm64`
  • Version updates for Containerd, CoreDNS, Fluentd, Helm, Ingress, Jaeger, KEDA, Linkerd and Prometheus

Charmed Kubernetes 1.21 highlights

  • CoreDNS operator
  • Kubernetes dashboard operator
  • Calico eBPF support
  • Conformance with CNTT guidelines

Notable changes in upstream Kubernetes 1.21

The following are the most significant changes in upstream Kubernetes 1.21. For the full list of changes, you can read the changelog.

Memory manager

Memory allocation can be crucial for the performance of some applications, such as databases. Memory should also be used wisely, both for the sake of the application and of the entire cluster’s resources. The new Memory Manager guarantees memory allocation for pods in the Guaranteed QoS class.
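As a sketch of how this might be switched on (the Memory Manager is alpha in 1.21, so the feature gate, field names and the reservation values below are assumptions to check against your kubelet version):

```yaml
# KubeletConfiguration fragment: enable the static Memory Manager policy
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryManager: true          # alpha feature gate in 1.21
memoryManagerPolicy: Static    # pin Guaranteed pods to NUMA-local memory
reservedMemory:                # memory held back for system use, per NUMA node
  - numaNode: 0
    limits:
      memory: 1Gi
```

A pod then opts in by setting identical memory requests and limits, which places it in the Guaranteed QoS class.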

Scheduler features

In Kubernetes, not all workloads are the same. The scheduler is the entity that places workloads on nodes. In 1.21, a developer can define nominated nodes for workloads and add node affinity to a deployment. These two scheduler features add flexibility and control, and make it easier to manage larger-scale deployments.

ReplicaSet downscaling

Autoscaling is one of Kubernetes’ greatest features. Nevertheless, there have been issues in the past with downscaling after a load spike has passed. There are now two new strategies for downscaling: semi-random and cost-based. These remove the need for manual checks before downscaling a deployment, making Kubernetes friendlier to workloads that require high availability.
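The cost-based strategy is driven by a pod annotation. A sketch of how it might look (the annotation is alpha in 1.21 behind the `PodDeletionCost` feature gate, and the pod name here is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: warmed-up-replica          # hypothetical pod name
  annotations:
    # pods with a lower deletion cost are preferred for removal
    # when the owning ReplicaSet scales down
    controller.kubernetes.io/pod-deletion-cost: "1000"
```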

Indexed job

Jobs can now be created in an indexed completion mode, where the job controller assigns a completion index to each pod it creates. This enhancement simplifies deploying highly parallelisable workloads on Kubernetes – a very interesting addition, especially for HPC use cases.
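A sketch of what an indexed Job can look like (`completionMode: Indexed` is alpha in 1.21; the Job name, image and command are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sharded-work               # hypothetical name
spec:
  completions: 5
  parallelism: 5
  completionMode: Indexed          # each pod gets a completion index 0..4
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox           # illustrative image
          command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
          env:
            # expose the index annotation set by the job controller
            - name: JOB_COMPLETION_INDEX
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']
```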

Network policy port ranges

This greatly simplifies configuration files when users want to define network policies for multiple consecutive ports. Instead of having separate policies for each port, now a single network policy can be applied to a range of ports.
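For example, a single rule can cover a whole range via the new `endPort` field (alpha in 1.21 behind a feature gate; the policy name and selector label below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range           # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp                   # illustrative label
  policyTypes: ["Ingress"]
  ingress:
    - ports:
        - protocol: TCP
          port: 32000
          endPort: 32768           # one rule for ports 32000-32768
```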

Deprecation of Pod Security Policy

Pod Security Policies (PSPs) restrict what can be done within the scope of a deployment, such as setting execution limits for a list of users or granting resource access for things like network or volumes. PSPs have been in beta for a while now, with no clear path towards taking the feature to a stable state.

As a result, PSPs are being marked as deprecated in Kubernetes 1.21 and will be completely removed in Kubernetes 1.25. Moving forward, users should consider Open Policy Agent (OPA) Gatekeeper for policy enforcement. Canonical Kubernetes will support OPA in its distributions and looks forward to discussing with users how all their policy requirements can be met.

Canonical Kubernetes channels

08 April, 2021 07:32PM

hackergotchi for Purism PureOS

Purism PureOS

Librem 14 Rave

Now that shipping of the Librem 14 to customers is imminent we should talk about some more details and enhancements we made.

As we mentioned before, the outside dimensions are almost the same as the Librem 13’s, so the Librem 14 measures 322mm x 222mm x 17mm.
The total weight, including the 4-cell battery, two SODIMMs and one M.2 SSD, is about 1490g (I live in Germany, you have to get along with metric units 🙂 ).

A Walk Around

Let’s have a walk around.

From left to right: “Kensington lock”, HDMI, type-A USB, microSD card reader, type-C USB

On the left hand side there are four connectors: HDMI, type-A USB3.1, microSD card reader (via USB3) and one type-C USB3.1. Also on the left side is a so-called “Kensington lock” hole for one of those laptop anti-theft locks.

From left to right: 3.5mm headphone jack, type-C USB with PD and DP, type-A USB, RJ45 Ethernet, DC in, power LED

On the right hand side we have a 4mm barrel connector for 19V DC input (rated up to 120W), a gigabit Ethernet RJ45 jack with a neat flip down cover, another type-A USB3.1, a 3.5mm headphone jack and finally a full function type-C port.

Supporting Extra Screens

The full function type-C port is something new that we are super happy about! It not only supports USB 3.1 data but also power delivery to charge the laptop, and it supports the so-called type-C DisplayPort alt mode to attach an external screen! Together with the HDMI 2.0 port we can now drive three screens in total – the internal 14″ LCD, HDMI and type-C – all at the same time.

Three screens in GNOME display settings

Another new feature of the Librem 14 is a power state LED next to the DC input barrel connector. We implemented this so that you can see the laptop’s power state even when the LCD lid is closed, e.g. when you put it in your backpack. This LED on the outside reflects the same states as the power LED on the inside next to the hardware kill switches (HKS). But before looking at these let’s first have a look inside.

A Look Inside

The bottom case plate can be removed after removing 9 screws holding it. The bottom plate (also called D-shell) is additionally held in place by a number of plastic frame snaps. These are actually an enhancement compared to the former Librem 13 and 15 since these help to hold the bottom plate in place and shape at all times. So after carefully clicking these out you get access to the guts.

Main PCB, CPU heat pipe, fans, battery, speakers

Towards the bottom sits the pretty large 4-cell battery, with the speakers to its left and right. This covers pretty much the whole space underneath the hand rest. Above that sits the brand new Purism Librem 14 main board. The centerpiece is the Core i7-10710U CPU, covered by the copper heat pipe leading to the two fans left and right. Between the CPU and the battery are the two SODIMM slots – two for faster dual-channel RAM access and up to 64GB of memory! In the bottom right corner of the PCB you can see the two M.2 SSD slots – and here is the problem with the 4-cell battery: it blocks the second SSD slot, so only one is usable. Once we get 3-cell batteries we can offer a choice: either a 4-cell battery and one M.2 SSD, or a 3-cell battery and two M.2 SSDs. But right now only one M.2 SSD is possible. And finally, in the bottom left corner of the PCB, there is the M.2 WiFi/BT card.

New M.2 Slot Features

The M.2 slot for WiFi/BT also has some new features – you will probably not need them, but for the tinkerers and for future compatibility we added them anyway! What’s new is that we have a couple of new interfaces connected to the M.2 socket. A UART from the chipset (PCH) is connected, so you can use an M.2 card with a serial UART interface. The PCM audio interface is connected to the I2S interface of the chipset; some Bluetooth cards use this for Bluetooth audio (SCO). And we have SDIO connected to the chipset so that you can use M.2 cards with an SDIO interface. To summarize, the interfaces now supported on the WiFi/BT M.2 socket are: PCIe, USB, UART, SDIO and I2S/PCM [4].

BIOS and EC Chips

For those interested, the BIOS flash chip containing Coreboot/PureBoot is the small SOIC-8 chip located right of the left fan, the flash chip containing the Librem EC firmware is located beneath the M.2 WiFi/BT card. Right next to the BIOS chip you can also see two small DIP switches (circled in red):

Main PCB, top side, BIOS and EC flash + write protect DIP switch circled in red

These are connected to the write protect pins of the BIOS and EC flash chips! With these you will be able to write protect the chips so that software can not write to them anymore. We still need to add software support for write protect so this is still work in progress. But the hardware is there! And for completeness, here is also the quite boring bottom side of the PCB:

Main PCB bottom side (the larger black chip to the bottom right is the embedded controller)

Opening the Lid

Now let’s open up the LCD lid:

Let’s go from bottom to top. First of all there is the large multi-touch touchpad, perfect for all kinds of tasks. Above that is the custom Purism keyboard with the Purism key and a customized key layout, especially as it relates to special keys. Instead of cramming in tiny keys for page-up/-down, home and end, we went for an approach using the Fn key, so Fn+Up serves as page-up etc. The top row holds the usual multimedia keys (rewind, play/pause, forward, mute, volume down and volume up on F7 through F12) along with the LCD brightness down/up keys. The keyboard backlight can be toggled with Fn+F4. And here we have a novelty for Purism laptops: the backlight can not just be toggled on or off. We now support multiple brightness levels – currently four – so that you can tune it to your liking and/or ambient conditions.

Hardware Kill Switches

Towards the top we have the Purism signature hardware kill switches, now with a nice silver chamfer around them. We placed the HKS on the keyboard side (instead of the side as in Librem 13 / Librem 15) to better protect the switch levers. Next to the HKS we now have two LEDs to also visually signal the state of the devices. And here I need to elaborate a bit more, because there is more to it than meets the eye.

So first of all, the way the kill switches work has changed a bit. The camera / microphone kill switch still severs power to the integrated webcam. But since we now have integrated digital microphones, which provide much better audio quality, the kill switch now also severs the power supply to the digital microphones. The 3.5mm headphone jack also supports headsets with microphones[1], and the kill switch cuts off the headset microphone too.

The more interesting change is for the WiFi/BT kill switch. With the L13/L15 we used the DISABLE signals on the M.2 slot to hardware-disable the WiFi/BT M.2 card. For this to work you have to rely on the inserted card honoring these signals. With the Atheros card we ship we are sure this happens, but we can not guarantee it for other cards. So we changed the approach: we now cut power to the M.2 slot altogether! This results in the USB BT device being “unplugged” and the PCIe WiFi device dropping from the PCIe bus, only to be hot-plugged back when re-enabled. The big change here is that we do not rely on the M.2 module honoring the DISABLE signal; we cut power to it, so there is no way it can get re-enabled by anything, except by your finger flipping the switch!

Controlling the WiFi LED

Next to the HKS we now have LEDs signaling their state. The LED next to the camera / microphone HKS will be on when camera and microphone are enabled and off otherwise; it is pretty much hardwired to the power supply of the camera and microphones. The LED next to the WiFi/BT HKS is a bit different. It is not only hardwired to the switch state; when the switch is on, it can also be controlled by the EC. In the default mode it will be on when WiFi/BT is enabled (powered) and off otherwise. With the ACPI driver [2] that we adapted for the EC [3], this LED can now also be controlled by software! It becomes a regular Linux LED:

/sys/class/leds/librem_ec:airplane/
Note: Before you start to freak out about the following command-line shell examples: there will be reasonable defaults, you do not have to do anything unless you want to take over control and customize your hardware’s behavior to your wishes.

Like all Linux LEDs, this LED can be assigned a so-called trigger, i.e. a Linux kernel driver that can automatically change the state of the LED based on certain events. By default the Librem EC ACPI driver will assign the “rfkill” trigger to the LED, which means that if the radio is switched off from Linux using the rfkill framework (e.g. by disabling it from the graphical user interface), the LED will also turn off! But there are more cool things you can do here – there are more triggers.

One trigger I personally like a lot is the ‘netdev’ trigger. With this trigger you configure a network interface to monitor, and activity on its RX or TX queue (or both) triggers a ‘blink’ of the LED. A simple script like this:

# load the netdev LED trigger kernel module
modprobe ledtrig-netdev
# attach the trigger to the airplane LED and pick the interface to monitor
echo netdev > /sys/class/leds/librem_ec\:airplane/trigger
echo wls6 > /sys/class/leds/librem_ec\:airplane/device_name
# blink on received as well as transmitted packets
echo 1 > /sys/class/leds/librem_ec\:airplane/rx
echo 1 > /sys/class/leds/librem_ec\:airplane/tx

will let the WiFi/BT LED next to the WiFi/BT HKS blink whenever there is traffic on the WiFi interface. I like this a lot since it tells me whether I am still connected, whether data is still flowing, and it even gives an idea of how much data. Cool, isn’t it? But you can also use any of the other triggers that the kernel offers, or control the LED from your very own program or script, just by writing 0 or 1 to:

/sys/class/leds/librem_ec:airplane/brightness
Controlling the Notification LED

If you think this is fun and cool, wait for what we have next: the notification LED! It is located literally right next to the WiFi/BT LED. I talked about it a bit in our post about the EC firmware development; now it is real and working. The notification LED is in fact a triple LED with red, green and blue elements (RGB). Each color can be controlled individually in 256 brightness steps – not just 0 or 1. So theoretically you have 256*256*256 colors to choose from! In practice there are fewer usable colors, since not all LEDs have a visible brightness at low levels. In particular blue is comparably dark, so the color yield is a bit less. But this is pretty normal for RGB LEDs and is also rooted in the perceived brightness of the human eye, among other things. To give you an idea: to get something pretty close to a neutral white you need to set red:90, blue:200 and green:255. The three colors can be accessed through the LED interface in the sys filesystem:

/sys/class/leds/red:status/brightness
/sys/class/leds/green:status/brightness
/sys/class/leds/blue:status/brightness
and brightness can vary from 0 to 255, so

echo 255 > /sys/class/leds/red\:status/brightness

will turn on the red LED at full brightness. The idea behind the notification LED is the same as on mobile phones like the Librem 5: an LED that signals something while the display is off or occupied by something else, so that the user can see that something is trying to get her or his attention. We have implemented this for the Librem 5 already, and it will also work on the Librem 14! Or you can choose to use the LEDs in other creative ways! Since access is super easy from a shell script or a simple program, I am sure we will see a lot of creative uses for them.

The notification LED colors can of course also be used with triggers – all the triggers the kernel offers. For example, what about a nice red heartbeat, getting faster with CPU load:

modprobe ledtrig-heartbeat
echo heartbeat > /sys/class/leds/red\:status/trigger

Or the green LED in such a cool glow dimming pattern:

modprobe ledtrig-pattern
echo pattern > /sys/class/leds/green\:status/trigger
echo 0 1000 255 1000 > /sys/class/leds/green\:status/pattern

So cool!

Controlling the Keyboard Backlight

And there are more things you can control from user space the very same way, like the keyboard backlight:


You can write the desired brightness into that virtual file and the keyboard backlight will change. Since this is a common interface in Linux, user interfaces like GNOME pick it up, i.e. you get feedback on the screen when the keyboard backlight is toggled by the hotkey (Fn+F4), and the keyboard backlight gets switched off when the screen saver kicks in and switches off the LCD! Very nice. And GNOME remembers the backlight brightness between reboots, too.

Controlling the Battery

In our last blog post we also talked about the battery charge controller and the thresholds that can be set from user space. Here you go:

/sys/class/power_supply/BAT0/charge_control_start_threshold
/sys/class/power_supply/BAT0/charge_control_end_threshold

A new charge cycle is only started once the battery percentage falls below the start threshold, and charging stops when the battery reaches the end threshold percentage. On my Librem 14 I currently use a script to set this to:

# set default battery thresholds
echo 40 > /sys/class/power_supply/BAT0/charge_control_start_threshold
echo 95 > /sys/class/power_supply/BAT0/charge_control_end_threshold

The system fans can not be controlled from user space yet, but they can at least be monitored a bit:


We will work further on it.

The ACPI driver is on its way into PureOS as a DKMS package, and we will do our best to get it into the upstream Linux kernel so that the DKMS package will not be necessary mid-term.

Battery Life

Now with the final product in hand we can also answer another FAQ: what is the battery life? Of course this always depends on a lot of factors – display brightness, whether programs keep the CPU or GPU busy, and so on. So it is pretty hard to give a definitive answer to that question. But I think I can provide you with at least two data points that should give you a good idea.

With about 60% LCD brightness, WiFi connected and the machine otherwise pretty much idle, I get an estimated (!) battery life of more than 10 hours! Does this sound vague? Just an estimate? Well, yes, it always will be; your mileage will vary a lot depending on your use case. But I can add a second data point. I usually switch off the power strip on my desk when I leave my office – just to be sure: no rogue electronics, no unexpected “surprises” in the morning. An engineer’s desk can be a mess (and mine for sure is), so better safe than sorry. One night I did just that, but totally forgot that my development Librem 14 was booted up sitting there, LCD off, Ethernet connected and mostly idle. I realized my negligence the next morning when I returned, and to my surprise it was still alive! It had sat there patiently all night for over 15 hours and still had 20% juice! So those approximately 10 hours of battery life with the LCD on and a light load seem pretty realistic to me, and I am super happy about that!


Bringing the Librem 14 to life and into your hands has been quite an adventure! And a long one too – much longer than we planned for and wanted. First Covid crushed all plans, then a CPU shortage delayed the main board verification, a general silicon shortage made sourcing parts a pain, and finally there were issues sourcing decent LCDs. And to top it all off, this is the most customized laptop we have ever built, with a lot of Purism special features. Doing something for the first time always carries a certain risk: will it work out as expected?

After all these months of hard work, it is with incredible joy that we see all these pieces falling into place, the product taking shape, and everything we have planned and dreamed of becoming a reality!

And let me close with a brief personal anecdote. My first contact with Purism was in 2016. I was taking part in GUADEC, held in Germany that year. At the time I was, yet again, hacking on some laptop I had bought a few weeks before, trying to make it work as well as possible with Linux. It was so annoying having to work around tiny paper cuts in the proprietary BIOS and embedded controller which prevented some really basic things, like proper battery readings. This was not the first time I had gone through that pain; it was a usual thing for me every time I had to get myself a new laptop. They usually mostly worked well, but every time there were these paper cuts here and there. It was super frustrating, because usually these things are trivial fixes – if you just had access to the BIOS source code, or the EC’s, or… you name it. I was fed up with this proprietary stuff.

And there came Purism, fighting for opening up that stuff and creating consumer devices as open and as free as they possibly can be. I had to get in touch with them!

So here we are, about five years later and I am so proud to be part of this Purism team, just having finished yet another product that heals many of these paper cut wounds. The Librem 14 offers pretty much everything that I wanted back then and I can not really describe the feeling I have right now. All these things I ever wanted to have in such a machine but never could. Now we are here. So awesome!

I very much hope you will like it as much as I do!


[1] Headset microphone and headset plug-in detection is not yet working. The wiring is there but there is still work that needs to be done on the software side with the codec.

[2] https://source.puri.sm/nicole.faerber/librem-ec-acpi-dkms

[3] https://source.puri.sm/coreboot/librem-ec

[4] Some of these may need additional software to work.

The post Librem 14 Rave appeared first on Purism.

08 April, 2021 05:14PM by Nicole Faerber

hackergotchi for VyOS


VyOS Project February/March 2021 Update

Hi everyone! It's time for the spring update.

08 April, 2021 03:44PM by Erkin Batu Altunbas (e.altunbas@vyos.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S14E05 – Newspaper Scoop Carrots

This week we’ve been spring cleaning and being silly on Twitter. We round up the news from the Ubuntu community and discuss our favourite stories from the tech news.

It’s Season 14 Episode 05 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

08 April, 2021 03:00PM

Ubuntu Blog: How to make your first snap

Snaps are a way to package your software so it is easy to install on Linux. If you’re already a snap developer, or you’re part of the Linux community, care about how software is deployed, are well versed in how software is packaged, and are tuned into the discussions around packaging formats, then you know about snaps and this article isn’t for you. If you’re anyone who is not all of those things, welcome. Let me tell you how I packaged my first snap to make an application easier for people to install on Linux.


Why bother?

Before we get into this, let’s touch briefly on why you might want to bother packaging your application as a snap. If you don’t care about why, skip ahead. Linux distributions change almost as fast as applications are developed. Whether you’re running the latest Fedora release or a years-old Ubuntu release, you should be able to have your favourite applications at your fingertips and to try the latest and greatest software on release day. Likewise, your users’ choice of Linux distribution should not be a blocker to getting your software into the hands of as many people as you want. Software packaging and distribution can be complex and tiresome.

This is the biggest problem that snaps address. Snaps can be easily installed on any Linux distribution that uses systemd – which is most of them – and developers can integrate snap packaging into their own CI/CD pipelines. You can take advantage of the snapcraft extensions for specific kinds of middleware, and know that almost any language your application is written in is supported by snaps. Once an application is ‘snapped’ it can be available in minutes, across almost all Linux distributions, with a built-in way to keep it up to date, maintained, and promoted to users.

Whether you want to snap your own application or you want to contribute to the ecosystem of applications already out there, snapping the application is worth it, under certain circumstances…

Photo by Emily Morter on Unsplash

Where to begin?

My experience with software development is very limited, so I didn’t start by snapping my own application. Instead, I went looking for an app I liked that hadn’t already been snapped – and snapped it. I started my search on GitHub Trending. This is a good list of applications and projects that are obviously active, so they might take contributions, but you really could choose anything. Of course, it does help if the application is your own or one you’re an avid user of; that way you likely give more of a sh*t.

What makes sense to snap?

One of the most common misconceptions around snaps is that Canonical is trying to replace other ways of packaging software, even where the experience would be worse. This simply isn’t true. There are lots of cases where it makes no sense to package software as a snap. Snaps are not a replacement for debs, and they are not a competitor to Docker containers, despite using containerisation technologies. They are designed to solve different sets of problems that aren’t solved elsewhere. These solutions, and differences, are documented elsewhere, but here are some tips to avoid snapping something that doesn’t need to be a snap:

1. Check the Ubuntu archive – If a Linux application is relatively unknown or unpopular, but is interesting to you, you should make a snap of it. If it’s already in the Ubuntu archive as a deb and has lots of happy users, there’s no need to snap it.

Side note: If you’re using Ubuntu you can check whether the app is already in the archive by attempting to run it in the CLI without having it installed. If it’s in the archive, you’ll be prompted to install it as an apt package.

2. Libraries (libs) don’t snap well. Snaps are a way of packaging applications; they shouldn’t be used to package libs except in special cases.

3. Modern languages – My first snap is an application written in Rust. There are numerous other examples of snaps of applications written in other languages, too. Modern languages don’t come with the same baggage or preconceptions about packaging because the developers are less concerned about packaging. This means, if you’re snapping an app in a modern language you’re more likely to get an appreciative developer on the other end. 

However, when I started looking it took me a while to find something that 1. I understood, given my inexperience, 2. wasn’t already a snap, and 3. fit the criteria. So instead I searched for popular Linux command line tools. (I looked for command line tools because they’re said to be easier to snap than a GUI app.) After some poking around I found ‘Googler’, a tool that lets you search Google from the command line. I found a deb and an rpm package in the upstream GitHub repo, but no snap. A good start. 

Before I got stuck in, as one last check, I searched ‘googler snap’ (kind of meta huh), to see if the upstream had anything against snaps. They didn’t. But I did find that it was already a snap -_- just not upstream. So I added the project to a doc to remind me it might be worth getting in touch to tell them about the snap in the future, and my search continued.

This happened a few times with a few different apps. Either the snap already existed, or on a couple of occasions, the upstream was less than friendly towards snaps, so I didn’t bother. Snaps are pretty mature by this point and so it’s not surprising that there is a lot of stuff out there already snapped. Or that they have their detractors. But neither are good reasons to stop developers benefitting from snaps when they can. Eventually, I found viu. An image and GIF viewing application. Woo.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/cfd7/Screenshot-from-2021-04-08-10-19-08.png" width="720" /> </noscript>

Can it be built?

Before I got too excited I did some quick checks to see if it could actually be built, easily. Assuming you’ve met all the criteria we talked about earlier, you can almost certainly make it a snap. But given my level of understanding I had a quick run through the snapcraft docs to make sure I wasn’t getting into something horribly complicated. The docs are full of other terms and much deeper technical ‘stuff’ but a side-by-side with the viu repo made it easy to see that the complicated stuff wasn’t relevant to me.

Getting started

First, I forked everything in the GitHub repository for the application I wanted to snap. Then I cloned that repository onto my computer, and created a branch. This is standard GitHub workflow stuff but if you’ve not done it before, fear not, GitHub has great documentation too. I did this so that I could work on a snap of the application locally, do my testing and mucking around and then, when ready, I could submit a pull request to the upstream project. Of course, if you’re snapping your own application you can do whatever you like. 

Snapcraft YAML

Once cloned and branched I opened a terminal with the VSCode (visual studio code) snap and `cd`’d into the new local version of the repo to make a new directory:

$ mkdir viu-snap

They say good practice is to name the directory ‘application-name-snap’ so it’s easy to find and, when you end up making heaps and heaps of snaps (I know you will), you’re used to the syntax.

Then I went into the folder and ran snapcraft init to create a template at snap/snapcraft.yaml:

$ cd viu-snap

$ snapcraft init

If you go into that new snapcraft.yaml file, you’ll see it looks something like this:

name: my-snap-name # you probably want to 'snapcraft register <name>'
base: core18 # the base snap is the execution environment for this snap
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

This is all template stuff that you edit away to make a snap. If you’re looking for more details than I give here, there is lots more information about different parts of the file in the snapcraft documentation and in various blog posts. Let’s start with the metadata (the stuff at the top). 


If you’re snapping an application that is not yours, it’s best to replace the metadata with data from wherever you found the application; in my case, the upstream git repo. name is the published name of your snap so it’s best to make sure it’s descriptive if the application is yours, and correct if it’s not. Typically you want this to be more than three characters. I ran into issues with viu as you’ll see later.

Choose a base

The default base is core18. That means when we build the snap it is done inside an Ubuntu 18.04 LTS VM. When users install the snap they need the core18 snap which installs for them automatically. For viu I used the base of core20 so it was built in an Ubuntu 20.04 LTS VM. I didn’t give this too much thought; you can always change to another base later if it causes problems.

Next, I added adopt-info to specify that the version should come from the viu part (we’ll get to parts in a moment), so I replaced ‘version’ with adopt-info. The summary and description are both copied from the upstream git repo. At this point my YAML looked like this:

name: viu # you probably want to 'snapcraft register <name>'
base: core20 # the base snap is the execution environment for this snap
adopt-info: viu
summary: Simple terminal image viewer # 79 char long summary
description: |
 A small command-line application to view images from the terminal written
 in Rust. It is basically the front-end of viuer. It uses either iTerm or
 Kitty graphics protocol, if supported. If not, lower half blocks (▄ or
 \u2584) are displayed instead.

Confinement and architectures

The next line in the YAML is grade. This doesn’t affect the stability of the snap itself or how it behaves once it’s published; it’s there so you can signal, to yourself and to users, whether the snap is stable or not, no matter what channel it’s in. I just set this to stable and carried on.

I wanted this to be strictly confined, so I set that next in the snapcraft.yaml. When things went wrong later I changed this for debugging purposes but it turned out that confinement wasn’t the issue. There are two options here: strict confinement, where you specify plugs, and classic confinement, where the application being snapped behaves pretty much the same as any other application on the host.

Then we can add the architectures we want to build for; this is a new stanza that I added after the description:

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

architectures:
  - build-on: amd64
  - build-on: arm64
  - build-on: armhf
  - build-on: ppc64el


Parts

They say the parts of a snap are the most important parts [sic]. This is where we define how to build the software we’re putting inside the snap. viu is written in Rust, which is a supported language in snapcraft, so I specified the rust plugin.

Then I needed to tell snapcraft the location of the source code, i.e. the original GitHub repo that I forked earlier. I think it’s better to specify that one rather than your local version, even for testing, in case you mess something up in the local version. And finally, I needed to list the libraries and dependencies needed to build the application.

viu only has one part, the viu project itself, but I still needed to specify a version. When snaps build they need a version number to apply to the file so the name becomes: snapname_version_arch.snap. I did this by adding an override-pull: section which specifies a script to run at build time. This is what I was left with:

parts:
  viu:
    plugin: rust
    source: https://github.com/atanunq/viu.git
    override-pull: |
      snapcraftctl pull
      snapcraftctl set-version "$(git describe --tags)"

The script here uses snapcraftctl pull to tell snapcraft to pull from the source and then I used snapcraftctl set-version to call back to adopt-info earlier and set that to “$(git describe --tags)”, which takes the version specified at the source. It took me a few read-throughs to understand this too. I could have just added a ‘version: something’ line in the metadata, but this seemed better.
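For comparison, the simpler approach would be to hard-code the version in the metadata instead of using adopt-info and override-pull. A minimal sketch (the version string here is only illustrative, not the real release tag):

```yaml
# Simpler alternative: a fixed version string instead of adopt-info/override-pull.
# Downside: you have to remember to bump it by hand on every release.
version: 'v1.1.0'  # illustrative value only
```

With override-pull the version tracks the upstream git tags automatically, which is why I went that way.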

An unconfined snap or a snap with ‘classic’ confinement would just search for these dependencies on my host system or pull them down from the internet. Because I defined my snap as strictly confined, it can’t do that, and so I needed to list all of the packages it needs so snapcraft knows what specifically to bundle into the snap.

In the YAML we specify these as build-packages: and stage-packages:. This was the hardest part of the whole process. I had to go into the project’s repo and poke around to see if there’s anything special going on and find the packages they use. Some developers lay all of this stuff out in their README.md files, and some don’t. viu did not.
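As a rough sketch of the difference (the part and package names here are just placeholders, not viu’s real dependencies): build-packages exist only inside the build VM, while stage-packages get unpacked into the snap and shipped to users.

```yaml
# Hypothetical part illustrating the two package stanzas.
parts:
  my-part:
    plugin: nil
    build-packages:
      - build-essential   # toolchain, needed only while building
    stage-packages:
      - imagemagick       # runtime dependency, bundled inside the snap
```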

Digging for packages

When I was hunting around the internet for the packages I needed I asked for a lot of help. I was told that when developing applications developers make a lot of choices about dependencies and packages and paths in the file system. Unless you developed the application yourself, you obviously weren’t privy to those kinds of decisions. So I had to decide whether to take the time to understand the application more deeply or just work the puzzle. Assuming it’s not all laid out in the README.md, here are the tips I accumulated for finding the right packages:

  • Check if it’s a deb – If the application is in the archive and you want to make it a snap for one of the reasons we talked about earlier then you have a good start. Investigate the deb, see what packages it pulls in and investigate if they’re in Ubuntu (they might have a different name).
  • Check the AUR (Arch User Repository) – If the application you want to snap is in Arch Linux then you might just have all the info you need. Search arch <app name> on Google and if it’s in Arch it’ll be at the top of the search; go in there, go to source files on the RHS, click the PKGBUILD if there is one, and you’ll be able to deduce the dependencies and things to include as stage packages.
  • Check for travis.yml – If the application’s CI process uses Travis then it has a travis.yml file, which contains the programming language used, the build and test environments, and the packages/dependencies needed for building.

These files might contain more or less than what you need but are good places to start. In the case of viu I used the travis.yml file which specified all the build packages I would need.

For stage packages, I again poked around the repo. Imagemagick is listed in the install instructions so I stuck that in. And then I checked the AUR and in there it showed imagemagick and another dependency, “libxslt” so I stuck that in stage-packages too.

Note: I don’t know exactly what each of the build or stage packages actually do, I can make a guess but unless I run into issues I don’t necessarily need to know. I’m hoping I’ll pick that up as I do more of these. What I’m saying is don’t worry if you don’t know all of the nuances of packages. Of course if you’re snapping your own application you likely do already know, well done you.

And we’re left with this. Woo:

parts:
  viu:
    plugin: rust
    source: https://github.com/atanunq/viu.git
    override-pull: |
      snapcraftctl pull
      snapcraftctl set-version "$(git describe --tags)"
    build-packages:
      - build-essential
      - libcanberra-dev
    stage-packages:
      - imagemagick
      - libxslt

At this point this might not be correct, I might have picked the wrong packages or missed some, but I’ll find out in the next step. If we have missed something then the search continues, but if I have some things correct it becomes easier to find what’s missing.

Apps and interfaces

At this point, the snap has everything it needs to complete a build, so you could try that, but it probably won’t work. I pointed snapcraft at the binaries and the right packages but because I made this a strictly confined (strict) snap it doesn’t have access to the outside world. This is so no one can get in and interfere with the application. I allowed the snap to talk to the outside world with an apps stanza where we expose specific ‘parts’ of the snap to the host.

First I specified the application, then the location of the binary inside that application, and gave it all the interfaces it needs to work. Interfaces in the YAML are called plugs, they are designed such that they ‘plug’ into your snap to safely interact with the outside world. More info can be found in the snapcraft documentation of course. Once I do this a few times I imagine there are a few that are used more than others and I’ll be able to make a good guess at what the app needs without needing the docs. Finally, we are left with this:

name: viu # you probably want to 'snapcraft register <name>'
base: core20 # the base snap is the execution environment for this snap
adopt-info: viu
summary: Simple terminal image viewer # 79 char long summary
description: |
 A small command-line application to view images from the terminal written
 in Rust. It is basically the front-end of viuer. It uses either iTerm or
 Kitty graphics protocol, if supported. If not, lower half blocks (▄ or
 \u2584) are displayed instead.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

architectures:
  - build-on: amd64
  - build-on: arm64
  - build-on: armhf
  - build-on: ppc64el

apps:
  viu:
    command: bin/viu
    plugs:
      - network
      - home
      - removable-media
      - alsa
      - pulseaudio

parts:
  viu:
    plugin: rust
    source: https://github.com/atanunq/viu.git
    override-pull: |
      snapcraftctl pull
      snapcraftctl set-version "$(git describe --tags)"
    build-packages:
      - build-essential
      - libcanberra-dev
    stage-packages:
      - imagemagick
      - libxslt

What fun. Now, don’t get too excited, get a little bit excited, but not too much, if you get this far, you may or may not be done. But you might be. This might just work and the rest is easy. Deep breaths.

Build and test

To build and test make sure you’re in the right directory – /application-name-snap, otherwise things don’t work and you get confused and have to ask stupid questions to busy people before you realise your mistake. Once there, run:

snapcraft --debug --shell-after
  • --debug means that if the build fails snapcraft would leave me ‘inside’ the shell of the VM that’s doing the building. That way if it failed I could poke around and do some debugging. This also means if I wanted to try and build the snap again I could just run snapcraft again without the extra options to save (a lot of) time since it wouldn’t need to start up and shell in all over again.
    • Note: The folder you’re working in is mapped to the Multipass VM so it’s possible to continue editing the YAML outside of the VM if you see issues and then re-running snapcraft picks up any saved changes.
  • --shell-after does the same thing except it leaves me inside the shell after its done building if it succeeds in case I want to poke around anyway.

Snapcraft uses Multipass by default to spin up the VM in which to build your snap. There are other options; for example there is also support for LXD. So far I’ve only used Multipass because it’s already there and it’s easy, but if you’re more familiar with these things or want to try something else, there you go. It’s worth noting that with Multipass the VM stays up for a while before it shuts down, but it doesn’t die; this caused me some confusion. You can run multipass list to check. LXD doesn’t have the same problem because it spins up and down much faster. Either way, the build environment only gets thrown away when you do snapcraft clean.

If the build is successful, snapcraft outputs something like this, but with more fluff:

Launching a VM.
Launched: snapcraft-viu-image-viewer
Preparing to …
Unpacking … 
Setting up … 
Reading package lists... Done
Building dependency tree       
Suggested packages:
Recommended packages:
The following NEW packages will be installed:
Pulling viu
Building viu
Rust is installed now. Great!
Staging viu 
+ snapcraftctl stage
Priming viu 
+ snapcraftctl prime
Snapping |                                                                                                                        
Snapped viu-image-viewer_v1.3.0-12-g4160c8b_amd64.snap
snapcraft-viu-image-viewer #

You can see I’m still in the shell here, so since I didn’t want to do any rooting around I pressed CTRL+D and got out. Because the VM doesn’t go away right away, running snapcraft --debug --shell-after again wouldn’t take as long.

If you try this and are unsuccessful, it means there’s an issue. Duh. Hopefully, you get a nice helpful error message or maybe there’s an obvious error in the YAML file. If the issue isn’t obvious compare your YAML syntax to mine at the end of this article and open that good old documentation. If you still can’t find the answer, don’t panic, the snapcraft forum is always alive with folks who can help.  

Install and test

Once the build finishes successfully we see a file that looks something like this in the working directory:


That’s the snap, ready and raring to go. To test it I first needed to install it locally:

snap install viu_v1.3.10-gOdba818_amd64.snap --dangerous

--dangerous signals that this snap has NOT gone through the typical Snap Store review but I’m okay with installing it because I built it.

This command can be run from anywhere. If for any reason I hadn’t made my snap strictly confined, maybe I was debugging or it’s just not suitable for strict confinement, I could add the --classic flag. This signals that what I’m installing is an unconfined application which has free rein over everything else on the system. (There are a number of other flags you can find by running snap install --help).

Once installed I could test it. For viu, there was a caveat: I needed to have the Kitty or iTerm terminal emulators installed for it to work properly. But here is an example of it working in the Kitty terminal:

rhys@rhys-desktop:~$ viu pikapika.gif
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/oJtgX19f5LupsKjFrFM5cTJJcinX8ZDCQXWLbA2YLoyGPKBALNTxOnNwZwIrRUJ2Slu52Tnz9i8txusRIyY68ilY0SIxdzQpNpzfc0IMv5kk6Q_pC4VjiaslVaapKfBIfnIVcEvE" width="720" /> </noscript>

This is where, if the application is not yours, you should exercise some due diligence. Test it properly, poke it, kick the tyres, jump up and down on it with combat boots. Just make sure that it does all the things it’s supposed to do. If you run into errors, take some time to figure them out and correct the YAML. Because once that’s done and you’re confident the thing works, you can register and publish it!

Snap registration and publication

Before publishing it we need to register the snap name with the store. This is a process to make sure there are no repeats and so that if there are problems with copyrights etc., they’re easily fixable. To register a name I ran:

snapcraft register viu

When I ran this for viu it spat out an issue. The Snap Store reserves, or at least does not allow, three-letter snap names. This is to encourage applications to be more descriptive but also to avoid conflicts in other snap related commands. If the upstream were to want the viu snap name we could make a case and create the viu three letter alias, but as I am not the upstream I simply made the snap name ’viu-image-viewer’ instead. Not ideal, but if I was the upstream or the application developer this wouldn’t have been necessary. Shrug.

Once there were no issues and the name was registered I uploaded the snap to the store. The recommended way to do this is to first run the super nifty remote-build feature in the working directory. If it’s your first time doing this you have to sign in or create an account on Launchpad, where all the building happens. I had made one of these before for other things so I just moved onto the building bit.

This can take some time, especially if there’s a queue:

$ snapcraft remote-build
All data sent to remote builders will be publicly available. Are you sure you want to continue? [y/N]: y
snapcraft remote-build is experimental and is subject to change - use with caution.
Building snap package for amd64, arm64, armhf, ppc64el, and s390x. This may take some time to finish.
Build status as of 2021-02-03 11:52:34.926743:
arch=amd64 state=Uploading build
arch=arm64 state=Currently building
arch=armhf state=Currently building
arch=s390x state=Currently building
arch=ppc64el state=Currently building

remote-build does what it says on the tin and, after a little while (it varies depending on how busy the system is), I was left with a list of snaps. You can check the queue or the resulting snaps in your Launchpad account.

For example, I went to this URL: `https://launchpad.net/~rhys-davies/+snaps` and could see stuff working away. remote build is great because it does the building in the cloud and just delivers you the binary, so I don’t have to slow my machine down doing all the building myself. And then you end up with a list of snaps: 


These could then be uploaded with this little loop Alan shared in his blog post:

$ for f in viu_v1.1.0_*.snap; do snapcraft upload $f --release=candidate; done

This is a for loop that looks at each file in the directory that starts with viu_v1.1.0_ and ends with .snap, runs snapcraft upload on it, and puts it into the ‘candidate’ release channel. The release channels are different places to publish snaps to. The idea is that you put the snaps that are ready for general use in the stable channel, for user testing in the edge channel, and for dedicated testing in the candidate channel. Snapcraft also uses the review-tools command to do some checks on the snap before it gets uploaded. This is a separate snap I installed with:

snap install review-tools

The Snap Store would run these checks when I uploaded the snaps anyway but if the store found a problem it would take some time to let me know and I’d have to go through the process all over again. It’s easier to find out before things get uploaded just by having this installed.
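If you want to see how the upload loop expands before pointing it at the real store, here’s a dry-run sketch you can try in a scratch directory (echo stands in for snapcraft upload, and the directory and file names are made up for the demo):

```shell
# Dry run of the upload loop: 'echo' stands in for 'snapcraft upload'
# so you can watch the glob expand without uploading anything.
mkdir -p /tmp/upload-demo && cd /tmp/upload-demo
touch viu_v1.1.0_amd64.snap viu_v1.1.0_arm64.snap viu_v1.1.0_armhf.snap
for f in viu_v1.1.0_*.snap; do
  echo "snapcraft upload $f --release=candidate"
done
# prints one 'snapcraft upload ...' line per .snap file found
```

Swap echo out for the real command once you’re happy with what it would do.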

Assuming there are no issues, each snap gets uploaded to the candidate channel. Since the remote builder feature gave me a snap for each architecture I specified earlier I could then go and test that the app works on any other platforms or distributions that I’m interested in. Ask your friends, ask around on the forum, maybe you can ask the upstream? And once you’re happy with that too you can promote the snap to the stable channel for the whole world to see.

The Snap Store

Before making your snap stable you should do some due diligence to test the thing properly, and there are a couple of housekeeping notes that are recommended, too. 

1. Linking everything up to a GitHub repo and connecting it to the build process. This way each time the application is updated in GitHub a new release is built and pushed to the edge channel in the store. To do this, I logged into my snapcraft developer account, selected the viu snap and went to the ‘Builds’ tab and clicked the big green button:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/d826/Screenshot-from-2021-03-09-14-48-51.png" width="720" /> </noscript>

From here I was able to select the repo I forked earlier and link it up to the snapcraft build system. With this set up I could review and in turn promote these pushes in my developer account. At any time I can now click the ‘Trigger new build’ button and force a new build. The first time you do it, it looks something like this:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/ef39/Screenshot-from-2021-03-09-14-55-24.png" width="720" /> </noscript>

2. Take the time to give your snap a nice landing page. You can do this in your developer account by going to the ‘Listing’ and ‘Publicise’ tabs. Here are some examples of snaps with good landing pages; that’s what you’re aiming for, if you have the time. It’s completely not mandatory. Good looking snaps are also more likely to get “featured” on the front page of snapcraft.io, and a snap that is featured in the snap store grows users significantly.

You can fill out all the basics, add some images and videos of the snap working or link the app up to your GitHub or social accounts for users to have a look at too. There are lots of things you can do here that will make your snap more successful and more likely to attract more users. But there’s a whole other article about that so I won’t go into detail here. 

The end of my first snap

That’s it. Time for some cake. If you followed this process for your own application, congratulations, users can now install and run your application on most any Linux distribution with a single command. If you’ve done what I’ve done and snapped someone else’s application then we both have one last thing to do. File an issue and make a pull request with the upstream project to let them know. Hopefully, they’ll be happy and might even take over maintenance of the application and adopt it into their workflow. Of course, they are not obliged to at all.

Everything we’ve covered here feels like a lot, I know it’s a long bloody article, but hopefully, you can see it’s mostly straightforward. The hardest part is debugging but if you have a good understanding of the application or time enough to find help on the forum, even that can be easy. Heck, if I can do it, believe me, you can too. Happy snapping.

This blog was originally posted on Rhys’ personal blog in case you’re interested in seeing the ever so slightly different version over there.

Here’s the final YAML:

name: viu-image-viewer # you probably want to 'snapcraft register <name>'
base: core20 # the base snap is the execution environment for this snap
adopt-info: viu
summary: Simple terminal image viewer # 79 char long summary
description: |
  A small command-line application to view images from the terminal written
  in Rust. It is basically the front-end of viuer. It uses either iTerm or
  Kitty graphics protocol, if supported. If not, lower half blocks (▄ or 
  \u2584) are displayed instead.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

architectures:
  - build-on: amd64
  - build-on: arm64
  - build-on: armhf
  - build-on: ppc64el

apps:
  viu:
    command: bin/viu
    plugs:
      - network
      - home
      - removable-media
      - alsa
      - pulseaudio

parts:
  viu:
    plugin: rust
    source: https://github.com/atanunq/viu.git
    override-pull: |
      snapcraftctl pull
      snapcraftctl set-version "$(git describe --tags)"
    build-packages:
      - build-essential
      - libcanberra-dev
    stage-packages:
      - imagemagick
      - libxslt
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/b51c/pikapika.gif" width="720" /> </noscript>

08 April, 2021 01:53PM

April 06, 2021

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 News Summary: March 2021

Progress on Many Fronts

We continued to ship Librem 5s throughout March all according to plan and also continue to hunt down CPUs. As we get more hardware confirmed we will contact the next group of Librem 5 backers with their shipping estimate and at the moment our pipeline is full well into May.

Camera and Hardware-accelerated Video Support

We made a lot of progress in March on the software front. Probably the most exciting news is that after a lot of work from the team to write kernel drivers, we have gotten both the front (“selfie”) and back cameras working! With the drivers functional we can now get raw images from the camera sensors. The focus now shifts outside of the kernel and into “userspace” software to post-process those raw images, correct colors and brightness, and provide a default camera app.

What better way to announce to the Internet that the Librem 5 camera is working than with a cat picture?

A classic cat picture, taken with a Librem 5

We have also added support for hardware-accelerated video playback using the iMX8mq’s Hantro VPU. By using the VPU instead of the CPU we save power and free up the main CPU for other tasks.

Userspace Software Improvements

Phosh (the desktop shell for the Librem 5) got additional features in March including a volume overlay, swipeable dialogs, geoclue support (adding location services), and shutdown dialogs. Phoc also released version 0.7.0 with massive stability improvements and snap-to-edge support for windows when the Librem 5 is docked.

We have also been working on adding SIP support to our Calls application so you can use a SIP provider instead of a cellular provider to place calls over the Internet. While the work isn’t yet complete we have made great progress, and are now working on adding this support into the user interface to make it convenient.

March also saw Nautilus, the default file manager used by GNOME, add adaptive features so it functions well on the Librem 5. You can see this in action in our sneak peek for the next version of PureOS on the Librem 5.

Blog Posts

We also published a number of blog posts and videos about the Librem 5 throughout the month of March:

What’s Next

We have completed the process of rounding up components for Librem 5 USA and once everything has arrived we hope to start production of the Librem 5 USA PCBA in April. We will also continue to ship through our Librem 5 backlog in April and hope to provide an update on the next round of Librem 5 shipments.

The post Librem 5 News Summary: March 2021 appeared first on Purism.

06 April, 2021 05:33PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Weekly News #204

Please welcome new host, Moss Bliss


DigiKam 7.2 released

4MLinux 36.0 released

Malicious changes detected in the PHP project Git repository

New version of Cygwin 3.2.0, the GNU environment for Windows

SeaMonkey 2.53.7 Released

Nitrux 1.3.9 with NX Desktop is Released

Parrot 4.11 Released with Security Checker Toolkit

Systemd 248 system manager released

GIMP 2.10.24 released

Deepin 20.2 ready for download

Installer added to Arch Linux installation images

Ubuntu 21.04 beta released

Full Circle Magazine
Host: @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust

06 April, 2021 05:02PM

Ubuntu Blog: Hardened ROS with 10 year security from Open Robotics and Canonical

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/y2z7ndiDdKs3lU5mG88-OsNCn4AoGS9EBnDB4E_I-o9lrBh6n9CGFZFq8TPOY3jNp7HTtrRJUP1IHp24jA0KE2NqM_2unirHn3Bjbia2G0Lkr3-ggzrF3X_SWF7nusax5pC8LkV0" width="720" /> </noscript>

Canonical ROS ESM customers can now access a long-term supported ROS and Ubuntu environment, maintained by the Ubuntu and ROS experts. Learn more about ROS ESM.

6 April 2021: Canonical and Open Robotics announced today a partnership for Robot Operating System (ROS) Extended Security Maintenance (ESM) and enterprise support, as part of Ubuntu Advantage, Canonical’s service package for Ubuntu. ROS support will be made available as an option to Ubuntu Advantage support customers. As a result, users already taking advantage of critical security updates and Common Vulnerabilities and Exposures (CVE) fixes now have a single point of contact to guarantee timely and high quality fixes for ROS. 

Together, the two companies support the robotics community by making ROS robots and services easier to build and package, simpler to manage, and more reliable to deploy.

“With ROS deployed as part of so many commercial products and services, it’s clear that our community needs a way to safely run robots beyond their software End-Of-Life dates. Canonical’s track record delivering ESM, together with our deep understanding of the ROS code base, makes this partnership ideal. Ubuntu Linux has been central to the ROS project from the beginning, when we released ROS Box Turtle on Ubuntu Hardy over a decade ago,” says Brian Gerkey, CEO of Open Robotics. “We’re excited to be part of this offering that will enable users to access quality support from both organizations.”

“Canonical’s Ubuntu has been the primary platform for ROS since inception. Now open robots are rapidly and fundamentally changing what is possible in the physical world,” said Mark Shuttleworth, CEO of Canonical. “We are delighted to deepen our engagement with Open Robotics to secure the robots of the future, and simplify the lives of those responsible for creating and operating them.”

ROS ESM is Canonical’s offering for ROS developers, giving them access to a hardened and long-term supported ROS system for robots and their applications on Ubuntu. ROS ESM provides backports for critical security updates, Common Vulnerabilities and Exposures (CVE) fixes, and critical bug fixes for your ROS environment. By enabling Canonical’s ESM repositories you will get trusted and stable binaries for your ROS and Ubuntu base OS distribution. ROS ESM is available for End-Of-Life distributions as well as Long-Term Support versions of ROS.

ROS is an open-source framework that helps researchers and developers build and reuse code between robotics applications. More than just software, ROS is also a global open-source community of engineers, developers and academics who contribute to making robots better, more accessible and available to everyone. ROS has been adopted by some of the biggest names in robotics.

About Canonical

Canonical is the company behind Ubuntu, the leading OS for container, cloud, and hyperscale computing. Through its open-source tools, such as snaps for packaging your ROS project and Ubuntu Core to enhance security for mission-critical robots, Canonical has been supporting the management and upgrading of robot software, a common and significant problem faced by the robotics community. Learn more here.

About Open Robotics 

Working with the global community, Open Robotics offers two open platforms: ROS and Gazebo. ROS (Robot Operating System) is a software development kit to build robot applications. Gazebo is a widely used tool for accurately and efficiently simulating robots. Open Robotics is the hub of the global robotics community: it creates open software and hardware platforms for robotics, and uses those platforms to solve important problems and help others do the same. Learn more here.

06 April, 2021 01:16PM

Ubuntu Blog: What is ROS Extended Security Maintenance?


Developing robots is not like building apps or IoT devices. Robots balance complex features such as scene awareness, social intelligence, physical intelligence, communication, dialogue, learning from interaction, memory, long-term autonomy, safe failure… the list goes on and on. 

As a result, robotics startups can take years to get to a minimum viable product (MVP). As code develops and packages change, the Robot Operating System (ROS) needs to be continuously patched and updated. Patching is time-consuming and detracts from your robotics development, yet running unpatched and unmaintained versions of ROS exposes your robot, company and customers to serious risk.

Once deployed, robots are expected to last years on-site, meaning robotics companies either need to factor OS and software upgrades into their maintenance plans, or run on unsupported software. This also affects those developing services for robots, such as fleet management solutions, navigation or computer vision systems.

As a result, whether in production or deployment, robots will inevitably live beyond the standard support lifecycle of the software powering them. Whether that’s Ubuntu, ROS or other dependencies (such as Python 2), your system will reach its end of life, that is, the end of updates, patches and maintenance. As an example, the End-Of-Life dates of ROS Kinetic and Ubuntu Xenial are upon us.

Canonical’s ROS Extended Security Maintenance (ESM) addresses precisely this issue. As part of the Ubuntu Advantage subscription, and delivered in partnership with Open Robotics, ROS ESM gives you a hardened and long-term supported ROS system for robots and their applications.

Even if your ROS distribution hasn’t reached its End-Of-Life (EOL), you can count on backports for critical security updates and common vulnerabilities and exposures (CVE) fixes for your ROS environment. In addition, by enabling Canonical’s ESM repositories you will get trusted and stable binaries for your ROS and Ubuntu base OS distribution. Finally, our Ubuntu Advantage Advanced and Standard subscribers can now access enterprise support to report ROS bugs to guarantee high quality and timely fixes.       

How does ROS ESM work? 


As part of Ubuntu Advantage, ROS ESM builds upon the world-class infrastructure used by Canonical to deliver security updates for the Ubuntu base OS and critical infrastructure components. 

At Canonical, we maintain security and updates personal package archives (PPAs) for a number of packages in the Ubuntu Main and Universe repositories. These include available high and critical CVE fixes and security updates. For instance, at the time of writing, we have more than 5,000 CVE fixes for Xenial alone. These fixes reside in our ESM repository and are available to any Ubuntu Advantage client through subscription tokens.
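For illustration, an enabled ESM archive shows up on a machine as an ordinary apt source. The file name and suite names below follow the standard ESM layout but are assumptions for this sketch, not copied from a ROS ESM machine:

```
# /etc/apt/sources.list.d/ubuntu-esm-infra.list (illustrative)
deb https://esm.ubuntu.com/infra/ubuntu xenial-infra-security main
deb https://esm.ubuntu.com/infra/ubuntu xenial-infra-updates main
```

Because the fixes arrive through apt like any other update, no special tooling is needed on the robot once the subscription token is attached.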

With ROS ESM, we have also included security and updates PPAs for core ROS packages. We will continue to backport critical security updates and bug fixes for ROS, for EOL and non-EOL distributions starting with ROS 1 Kinetic. 

The three main benefits of ROS ESM 


A hardened ROS environment 

It’s not unusual for upstream ROS updates to break API compatibility, let alone ABI. To retain stability and provide users with a resilient workspace, we patch security flaws and eliminate API/ABI breakage from updates while delivering fixes for high and critical CVEs and bugs. With ROS ESM, developers get curated packages that meet Canonical’s high standards for stability and interoperability.


A long-term secure system for your robot

Since its inception in 2004, Ubuntu has been built on a foundation of enterprise-grade, industry-leading security practices. Canonical never stops working to keep Ubuntu at the forefront of safety and reliability, and we are now extending that security commitment to the robotics field. ROS ESM provides backported security fixes for ROS well after a distribution is no longer supported upstream. Get security updates for ROS and the Ubuntu base OS, ensuring your entire stack is up to date and protecting your robot and your customers.


A single point of contact for ROS enterprise support

As part of Ubuntu Advantage, ROS ESM provides a single point of contact for all the software in ESM, including ROS, so you no longer have to figure out where to log a bug or propose a fix and hope it gets attention at some point. Save engineering time and effort by contacting Canonical and Open Robotics for all the support you and your robot deserve. All in one place!

Get ROS ESM now

Whether your ROS distribution is reaching its End-Of-Life, or you are not receiving the updates and fixes your system requires, ROS ESM is here to make your work easier.  

Get ESM for ROS

06 April, 2021 01:13PM

Harald Sitter: Reuse Licensing Helper

It’s boring but important! Stay with me! Please! 😘

For the past couple of years Andreas Cord-Landwehr has done excellent work on moving KDE in a more structured licensing direction. Free software licensing is an often overlooked topic, that is collectively understood to be important, but also incredibly annoying, bureaucratic, and complex. We all like to ignore it more than we should.

If you are working on KDE software you really should check out KDE’s licenses howto and maybe also glance over the comprehensive policy. In particular when you start a new repo!

I’d like to shine some light on a simple but incredibly useful tool: reuse. reuse helps you check licensing compliance with some incredibly easy commands.

Say you start a new project. You create your prototype source, maybe add a readme – after a while it’s good enough to make public and maybe propose for inclusion as mature KDE software by going through KDE Review. You submit it for review and if you are particularly unlucky you’ll have me come around the corner and lament how your beautiful piece of software isn’t completely free software because some files lack any sort of licensing information. Alas!

See, you had better use reuse…

pip3 install --user reuse

reuse lint: lints the source and tells you which files aren’t licensed

reuse download --all: downloads the full license texts needed for compliance, based on the licenses used in your source (unfortunately you’ll still need to create the KDE e.V. variants manually)

If you are unsure how to license a given file, consult the licensing guide or the policy or send a mail to one of the devel mailing lists. There’s help a plenty.

Now that you know about the reuse tool there’s even less reason to start projects without 100% compliance so I can shut up about it 🙂

06 April, 2021 01:03PM

SparkyLinux



There is a new application available for Sparkers: Ventoy

What is Ventoy?

Ventoy is an open-source tool for creating bootable USB drives from ISO/WIM/IMG/VHD(x)/EFI files.
With Ventoy, you don’t need to format the disk over and over: just copy the image files to the USB drive and boot from it. You can copy many image files at a time, and Ventoy will present a boot menu to choose between them.
x86 legacy BIOS, IA32 UEFI, x86_64 UEFI, ARM64 UEFI and MIPS64EL UEFI are all supported in the same way.
Both MBR and GPT partition styles are supported in the same way.

– 100% open source
– Simple to use
– Fast (limited only by the speed of copying the ISO file)
– Can be installed on a USB drive, local disk, SSD, NVMe or SD card
– Boots directly from ISO/WIM/IMG/VHD(x)/EFI files, no extraction needed
– ISO/IMG files need not be contiguous on disk
– MBR and GPT partition styles supported (1.0.15+)
– x86 Legacy BIOS, IA32 UEFI, x86_64 UEFI, ARM64 UEFI, MIPS64EL UEFI supported
– IA32/x86_64 UEFI Secure Boot supported (1.0.07+)
– Persistence supported (1.0.11+)
– Windows auto-installation supported (1.0.09+)
– RHEL7/8/CentOS 7/8/SUSE/Ubuntu Server/Debian … auto-installation supported (1.0.09+)
– FAT32/exFAT/NTFS/UDF/XFS/Ext2(3)(4) supported for the main partition
– ISO files larger than 4GB supported
– Native boot menu style for Legacy & UEFI
– Most OS types supported, 650+ ISO files tested
– Linux vDisk boot supported
– Not only booting, but also the complete installation process
– Menu dynamically switchable between list and tree view modes
– “Ventoy Compatible” concept
– Plugin framework
– File injection into the runtime environment
– Dynamic replacement of boot configuration files
– Highly customizable themes and menus
– USB drive write protection supported
– Normal use of the USB drive unaffected
– Data preserved during version upgrades
– No need to update Ventoy when a new distro is released

Installation (Sparky 5 & 6):

sudo apt update
sudo apt install ventoy

or via APTus -> System -> Ventoy icon.

The package provides the following commands (run as root):

sudo Ventoy2Disk
sudo VentoyWeb
sudo VentoyWebDeepin
sudo CreatePersistentImg


License: GNU General Public License v3.0
Web: github.com/ventoy/Ventoy


06 April, 2021 12:33PM by pavroo

Pardus


New updates released for Pardus 19.5

New updates have been released for Pardus 19.5. To pick up the changes, simply keep your Pardus 19 system up to date.

You can install the updates by running the following command in a terminal window:

sudo apt update && sudo apt full-upgrade -yq

Main changes

  • The default web browser, Firefox, was upgraded to version 78.9.
  • The default e-mail client, Thunderbird, was upgraded to version 78.9.
  • The kernel was upgraded to version 4.19.0-16.
  • The default office suite, LibreOffice, was updated.
  • Python security updates were released.
  • New applications were added to the Pardus Store.
  • Updates comprising more than 50 packages and patches were delivered to installed systems.
  • More than 500 packages were updated in the repository.
Package name / New version / Old version
avahi-daemon 0.7-4+deb10u1 0.7-4+b1
debian-archive-keyring 2019.1+deb10u1 2019.1
firefox-esr-l10n-tr 78.9.0esr-1~deb10u1 78.8.0esr-1~deb10u1
firefox-esr 78.9.0esr-1~deb10u1 78.8.0esr-1~deb10u1
fonts-opensymbol 2:102.10+LibO6.1.5-3+deb10u7 2:102.10+LibO6.1.5-3+deb10u6
gir1.2-javascriptcoregtk-4.0 2.30.6-1~deb10u1 2.30.5-1~deb10u1
gir1.2-webkit2-4.0 2.30.6-1~deb10u1 2.30.5-1~deb10u1
groff-base 1.22.4-3+deb10u1 1.22.4-3
intel-microcode 3.20210216.1~deb10u1 3.20201118.1~deb10u1
iputils-ping 3:20180629-2+deb10u2 3:20180629-2+deb10u1
libavahi-client3 0.7-4+deb10u1 0.7-4+b1
libavahi-common3 0.7-4+deb10u1 0.7-4+b1
libavahi-common-data 0.7-4+deb10u1 0.7-4+b1
libavahi-core7 0.7-4+deb10u1 0.7-4+b1
libavahi-glib1 0.7-4+deb10u1 0.7-4+b1
libbsd0 0.9.1-2+deb10u1 0.9.1-2
libcpupower1 4.19.181-1 4.19.171-2
libcurl3-gnutls 7.64.0-4+deb10u2 7.64.0-4+deb10u1
libcurl4 7.64.0-4+deb10u2 7.64.0-4+deb10u1
libjavascriptcoregtk-4.0-18 2.30.6-1~deb10u1 2.30.5-1~deb10u1
libldb1 2:1.5.1+really1.4.6-3+deb10u1 2:1.5.1+really1.4.6-3
liblirc-client0 0.10.1-6.3~deb10u1 0.10.1-6.2~deb10u1
libnss-myhostname 241-7~deb10u7 241-7~deb10u6
libnss-systemd 241-7~deb10u7 241-7~deb10u6
libopenjp2-7 2.3.0-2+deb10u2 2.3.0-2+deb10u1
libpam-systemd 241-7~deb10u7 241-7~deb10u6
libpq5 11.11-0+deb10u1 11.10-0+deb10u1
libpython3.7-minimal 3.7.3-2+deb10u3 3.7.3-2+deb10u2
libpython3.7 3.7.3-2+deb10u3 3.7.3-2+deb10u2
libpython3.7-stdlib 3.7.3-2+deb10u3 3.7.3-2+deb10u2
libreoffice-avmedia-backend-gstreamer 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-base-core 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-base-drivers 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-base 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-calc 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-common 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-core 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-draw 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-gnome 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-gtk3 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-impress 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-java-common 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-l10n-tr 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-librelogo 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-math 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-nlpsolver 0.9+LibO6.1.5-3+deb10u7 0.9+LibO6.1.5-3+deb10u6
libreoffice 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-report-builder-bin 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-report-builder 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-script-provider-bsh 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-script-provider-js 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-script-provider-python 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-sdbc-firebird 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-sdbc-hsqldb 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-sdbc-postgresql 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-style-colibre 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-style-elementary 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-style-tango 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libreoffice-wiki-publisher 1.2.0+LibO6.1.5-3+deb10u7 1.2.0+LibO6.1.5-3+deb10u6
libreoffice-writer 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
libssl1.1 1.1.1d-0+deb10u6 1.1.1d-0+deb10u5
libsystemd0 241-7~deb10u7 241-7~deb10u6
libtiff5 4.1.0+git191117-2~deb10u2 4.1.0+git191117-2~deb10u1
libudev1 241-7~deb10u7 241-7~deb10u6
libwebkit2gtk-4.0-37 2.30.6-1~deb10u1 2.30.5-1~deb10u1
linux-compiler-gcc-8-x86 4.19.181-1 4.19.171-2
linux-cpupower 4.19.181-1 4.19.171-2
linux-image-amd64 4.19+105+deb10u11 4.19+105+deb10u9
linux-kbuild-4.19 4.19.181-1 4.19.171-2
linux-libc-dev 4.19.181-1 4.19.171-2
openssl 1.1.1d-0+deb10u6 1.1.1d-0+deb10u5
python3.7-minimal 3.7.3-2+deb10u3 3.7.3-2+deb10u2
python3.7 3.7.3-2+deb10u3 3.7.3-2+deb10u2
python3-uno 1:6.1.5-3+deb10u7 1:6.1.5-3+deb10u6
systemd 241-7~deb10u7 241-7~deb10u6
systemd-sysv 241-7~deb10u7 241-7~deb10u6
thunderbird-l10n-tr 1:78.9.0-1~deb10u1 1:78.8.0-1~deb10u1
thunderbird 1:78.9.0-1~deb10u1 1:78.8.0-1~deb10u1
udev 241-7~deb10u7 241-7~deb10u6
uno-libs3 6.1.5-3+deb10u7 6.1.5-3+deb10u6
ure 6.1.5-3+deb10u7 6.1.5-3+deb10u6

06 April, 2021 11:36AM

More than one hundred thousand whiteboards to run Pardus ETAP

As part of the Fatih Project, another 28,000 interactive whiteboards, to be put into service in the 2021-2022 school year, will run Pardus ETAP.

The Ministry of National Education (MEB) General Directorate of Innovation and Educational Technologies signed an interactive whiteboard procurement contract with Arçelik on 19 March 2021. Under the contract, students, teachers and administrators in the classrooms, laboratories and workshops of schools affiliated with the Ministry of National Education will be able to use interactive whiteboards running Pardus ETAP.

Interactive classroom teaching with Pardus ETAP

Under the software development and consultancy agreement signed between the Ministry of National Education and TÜBİTAK on 24 October 2019, development continues on a range of software, Pardus ETAP among it.

The Pardus ETAP 19 release, developed through the cooperation of TÜBİTAK ULAKBİM and the General Directorate of Innovation and Educational Technologies, will, together with the newly procured whiteboards, be in use on 100,000 interactive whiteboards across the country.

The Pardus Interactive Whiteboard Interface Project (ETAP) was developed entirely with domestic resources by the Pardus team at TÜBİTAK ULAKBİM and built on top of the Pardus operating system. ETAP continues to be improved with feedback from volunteer MEB teachers, contributing to education by increasing classroom interaction.

Applications are installed on Pardus ETAP with ease through the Pardus Store. A USB key lets Pardus ETAP open a passwordless session on EBA. The screen-dimming feature, reached from the Pardus ETAP panel, darkens part or all of the screen so that all attention can be drawn to a particular piece of information or to the teacher.

Touch gestures developed for Pardus ETAP let users make the most effective use of the whiteboards’ touch surfaces. Pardus ETAP also offers the ability to work on more than one board at a time, allowing teachers to use the interactive whiteboards as productively as possible.

06 April, 2021 09:58AM

April 05, 2021


Ubuntu developers

Kees Cook: security things in Linux v5.9

Previously: v5.8

Linux v5.9 was released in October, 2020. Here’s my summary of various security things that I found interesting:

seccomp user_notif file descriptor injection
Sargun Dhillon added the ability for SECCOMP_RET_USER_NOTIF filters to inject file descriptors into the target process using SECCOMP_IOCTL_NOTIF_ADDFD. This lets container managers fully emulate syscalls like open() and connect(), where an actual file descriptor is expected to be available after a successful syscall. In the process I fixed a couple bugs and refactored the file descriptor receiving code.

zero-initialize stack variables with Clang
When Alexander Potapenko landed support for Clang’s automatic variable initialization, it did so with a byte pattern designed to really stand out in kernel crashes. Now he’s added support for doing zero initialization via CONFIG_INIT_STACK_ALL_ZERO, which besides actually being faster, has a few behavior benefits as well. “Unlike pattern initialization, which has a higher chance of triggering existing bugs, zero initialization provides safe defaults for strings, pointers, indexes, and sizes.” Like the pattern initialization, this feature stops entire classes of uninitialized stack variable flaws.
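In configuration terms, enabling this is a one-line kernel config change; the fragment below is a sketch, with the compiler behaviour the option selects noted in a comment:

```
# .config fragment: zero-initialize all stack variables (Clang builds)
CONFIG_INIT_STACK_ALL_ZERO=y
# selects the equivalent of: -ftrivial-auto-var-init=zero
```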

common syscall entry/exit routines
Thomas Gleixner created architecture-independent code to do syscall entry/exit, since much of the kernel’s work during a syscall entry and exit is the same. There was no need to repeat this in each architecture, and having it implemented separately meant bugs (or features) might only get fixed (or implemented) in a handful of architectures. It means that features like seccomp become much easier to build since it wouldn’t need per-architecture implementations any more. Presently only x86 has switched over to the common routines.

SLAB kfree() hardening
To reach CONFIG_SLAB_FREELIST_HARDENED feature-parity with the SLUB heap allocator, I added naive double-free detection and the ability to detect cross-cache freeing in the SLAB allocator. This should keep a class of type-confusion bugs from biting kernels using SLAB. (Most distro kernels use SLUB, but some smaller devices prefer the slightly more compact SLAB, so this hardening is mostly aimed at those systems.)

CAP_CHECKPOINT_RESTORE
Adrian Reber added the new CAP_CHECKPOINT_RESTORE capability, splitting this functionality off of CAP_SYS_ADMIN. The needs for the kernel to correctly checkpoint and restore a process (e.g. used to move processes between containers) continues to grow, and it became clear that the security implications were lower than those of CAP_SYS_ADMIN yet distinct from other capabilities. Using this capability is now the preferred method for doing things like changing /proc/self/exe.

debugfs boot-time visibility restriction
Peter Enderborg added the debugfs boot parameter to control the visibility of the kernel’s debug filesystem. The contents of debugfs continue to be a common area of sensitive information being exposed to attackers. While this was effectively possible by unsetting CONFIG_DEBUG_FS, that wasn’t a great approach for system builders needing a single set of kernel configs (e.g. a distro kernel), so now it can be disabled at boot time.

more seccomp architecture support
Michael Karcher implemented the SuperH seccomp hooks, Guo Ren implemented the C-SKY seccomp hooks, and Max Filippov implemented the xtensa seccomp hooks. Each of these included the ever-important updates to the seccomp regression testing suite in the kernel selftests.

stack protector support for RISC-V
Guo Ren implemented -fstack-protector (and -fstack-protector-strong) support for RISC-V. This is the initial global-canary support while the patches to GCC to support per-task canaries is getting finished (similar to the per-task canaries done for arm64). This will mean nearly all stack frame write overflows are no longer useful to attackers on this architecture. It’s nice to see this finally land for RISC-V, which is quickly approaching architecture feature parity with the other major architectures in the kernel.

new tasklet API
Romain Perier and Allen Pais introduced a new tasklet API to make tasklet use safer. Much like the timer_list refactoring work done earlier, the tasklet API is also a potential source of simple function-pointer-and-first-argument controlled exploits via linear heap overwrites. It’s a smaller attack surface since it’s used much less in the kernel, but it is the same weak design, making it a sensible thing to replace. While the use of the tasklet API is considered deprecated (replaced by threaded IRQs), it’s not always a simple mechanical refactoring, so the old API still needs refactoring (since that CAN be done mechanically in most cases).

x86 FSGSBASE implementation
Sasha Levin, Andy Lutomirski, Chang S. Bae, Andi Kleen, Tony Luck, Thomas Gleixner, and others landed the long-awaited FSGSBASE series. This provides task switching performance improvements while keeping the kernel safe from modules accidentally (or maliciously) trying to use the features directly (which exposed an unprivileged direct kernel access hole).

filter x86 MSR writes
While it’s been long understood that writing to CPU Model-Specific Registers (MSRs) from userspace was a bad idea, it has been left enabled for things like MSR_IA32_ENERGY_PERF_BIAS. Boris Petkov has decided enough is enough and has now enabled logging and kernel tainting (TAINT_CPU_OUT_OF_SPEC) by default and a way to disable MSR writes at runtime. (However, since this is controlled by a normal module parameter and the root user can just turn writes back on, I continue to recommend that people build with CONFIG_X86_MSR=n.) The expectation is that userspace MSR writes will be entirely removed in future kernels.

uninitialized_var() macro removed
I made treewide changes to remove the uninitialized_var() macro, which had been used to silence compiler warnings. The rationale for this macro was weak to begin with (“the compiler is reporting an uninitialized variable that is clearly initialized”) since it was mainly papering over compiler bugs. However, it creates a much more fragile situation in the kernel since now such uses can actually disable automatic stack variable initialization, as well as mask legitimate “unused variable” warnings. The proper solution is to just initialize variables the compiler warns about.

function pointer cast removals
Oscar Carter has started removing function pointer casts from the kernel, in an effort to allow the kernel to build with -Wcast-function-type. The future use of Control Flow Integrity checking (which does validation of function prototypes matching between the caller and the target) tends not to work well with function casts, so it’d be nice to get rid of these before CFI lands.

flexible array conversions
As part of Gustavo A. R. Silva’s on-going work to replace zero-length and one-element arrays with flexible arrays, he has documented the details of the flexible array conversions, and the various helpers to be used in kernel code. Every commit gets the kernel closer to building with -Warray-bounds, which catches a lot of potential buffer overflows at compile time.

That’s it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.10.

© 2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

05 April, 2021 11:24PM

The Fridge: Ubuntu Weekly Newsletter Issue 677

Welcome to the Ubuntu Weekly Newsletter, Issue 677 for the week of March 28 – April 3, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

05 April, 2021 10:14PM

April 04, 2021

ARMBIAN


Jetson Nano

If you have trouble booting, read this topic.

04 April, 2021 03:09PM by Igor Pečovnik

OSMC


Happy Easter from OSMC

We'd like to wish all of our users a Happy Easter. We hope that our users make the most of this time off and stay safe during this difficult period.

We're working hard on getting a stable version of Kodi v19 (Matrix) out and will make test builds available shortly.

Best wishes from everyone at OSMC

04 April, 2021 02:13PM by Sam Nazarko

April 02, 2021


Ubuntu developers

Kubuntu General News: Kubuntu Hirsute Hippo (21.04) Beta released

KDE Plasma desktop 5.21 on Kubuntu 21.04

The beta of Hirsute Hippo (to become 21.04 in April) has now been released, and is available for download.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Hirsute Hippo are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

We STRONGLY advise testers to read the Kubuntu 21.04 Beta release notes before installing, and in particular the section on ‘Known issues‘.

Kubuntu is taking part in #UbuntuTestingWeek from 1st to 7th of April, details of which can be found in our Kubuntu 21.04 Testing Week blog post, and in general for all flavours on the Ubuntu Discourse announcement.

You can also find more information about the entire 21.04 release (base, kernel, graphics etc) in the main Ubuntu Beta release notes and announcement.

02 April, 2021 06:02PM

Pardus


You can report problems to us through the LibreOffice help desk

A new request page where you can report problems with the Open Document Format and with LibreOffice has been set up on the Open Document Format website run by TÜBİTAK. You can open a new ticket either by visiting our Open Document Format site or by clicking here.

The TÜBİTAK Pardus team prefers LibreOffice, the office suite whose development it contributes to. You can use LibreOffice on GNU/Linux distributions such as Pardus and Ubuntu, as well as on Windows and macOS.

Get to know the Open Document Format by starting to use LibreOffice today: a completely free, open-source office suite with full support for the Open Document Format.

For more information about the Open Document Format, and to reach the help desk, visit the Open Document Format Türkiye page. For more detail on LibreOffice you can go to our project page or browse the LibreOffice Türkiye page.

02 April, 2021 04:32PM


Grml developers

Michael Prokop: Bookdump 01/2021

Photo of the bookshelf

Books I have read so far in 2021:

  • Sei Kein Mann, by JJ Bola. Across 149 pages, the author and activist JJ Bola writes about influences from non-Western traditions, from pop culture and the LGBTQ+ community, and shows how diverse masculinity can be. Wise words.
  • Die Katze des Dalai Lama, by David Michie. David Michie has written a multi-volume series, “Die Katze des Dalai Lama”, and this is its first part. Across 268 pages, Buddhist ideas are conveyed from the perspective of a cat. It does so entertainingly and without lecturing. Linguistically very undemanding, with the American original showing through, but a nice introduction to Buddhism.
  • Die Bienen und das Unsichtbare, by Clemens J. Setz. As a Setz fan I went into these 407 pages with the highest expectations, and was not disappointed. A fantastic book, above all for anyone who can get excited about (constructed) languages. A clear recommendation.
  • QualityLand (helle Edition), by Marc-Uwe Kling. An entertainingly written 381 pages that illuminate a serious topic. The idea with the surnames is good, and there are excursions into Asimov's three laws of robotics and thoughts on freedom, but linguistically the book unfortunately did not appeal to me, and much of it felt too obvious and contrived.
  • Tyll, by Daniel Kehlmann. 474 fantastic pages about Tyll Ulenspiegel in the 17th century. After “Die Vermessung der Welt” I had high expectations of Kehlmann, and this historical novel is again entirely to my taste. A clear recommendation.
  • Till Eulenspiegel, by Clemens J. Setz. I deliberately read this book only after Kehlmann's (see above). Its 149 pages treat the Till Eulenspiegel material in a completely different style, but very entertainingly, and it is likewise well worth reading.
  • Einspruch!, by Ingrid Brodnig. Especially among family, some debates tend to get particularly heated, and Brodnig provides important strategies and tips for handling them. The 144 pages are a quick read; it is just a pity that there is no index or overview of the effects and phenomena discussed.
  • Ich bleibe hier, by Marco Balzano. Its 286 pages cover life in South Tyrol from 1925 to 1950, a reservoir that will flood fields and houses, fascism and resistance. An entertaining read.
  • QualityLand 2.0, by Marc-Uwe Kling. I would not have read these 428 pages without my reading group. Admittedly there are a few nice thought experiments, but overall this part too was too often predictable and too heavy-handed. Unfortunately not to my taste.
  • Das Buch vom Süden, by André Heller. Julian Passauer grows up in the attic of Schönbrunn Palace after the Second World War, and as a reader you accompany him in his longing for the south. Heller's diction is unmistakable, and there are some very beautiful and sensual passages. But I missed depth and fully developed characters, so these 336 pages unfortunately could not win me over.
  • Ich, der Roboter, by Isaac Asimov. Published in 1950, this novel comprises nine interwoven short stories across 303 pages. I wanted to read the fundamental rules of robotics (a.k.a. the laws of robotics) in the “original” for once, and it is at least one of the few SF works I saw through to the end. Although the main character is a woman and the book was probably even progressive for its time, the gender roles are at times still hard to bear. Linguistically just about OK, but some of the thought experiments and questions raised were quite nice.
  • Because Internet, by Gretchen McCulloch. A 404-page book about the Internet practically has to be good by definition, and this one delivers. Russ Allbery's particularly favourable review prompted me to read it, and I do not regret it one bit. Although by McCulloch's classification I count as an “Old Internet Person”, some of the history was new to me. The subtitle “Understanding the New Rules of Language” already hints that much of it is about rules in language, but it also covers memes, emojis and how language changes. Practically required reading for critics of emojis, and a very clear recommendation for people who communicate a lot remotely and for language lovers.
  • 1984, George Orwell. Many years ago I started reading 1984 in a German translation which, for linguistic reasons, I did not finish. This year the book appeared in several new translations, including Wolff (DTV), Schönfeld (Insel/Suhrkamp), Fischer (Nikol), Heibert (S. Fischer), Haefs (Manesse) and Strümpel (Anaconda). With Eike Schönfeld's translation I ventured into this classic again, and this time I did not regret it. A wonderful and contemporary translation that delivers, across 382 pages, an atmosphere befitting the panopticon.
  • Mädchen, Frau etc., by Bernardine Evaristo. What a powerhouse of a book. Linguistically, stylistically and in substance absolutely convincing. I am strongly tempted to read the English original, even though there is absolutely nothing to fault in the German translation (507 pages). An unconditional recommendation.
  • Komplett Gänsehaut, by Sophie Passmann. Across 173 pages Passmann reflects on the middle class, the bourgeoisie and things one does not own (“It is always about what is missing.”). There would appear to be a risotto and pizza trauma, and more than once you may well feel caught out by this generational essay.
  • Mein Algorithmus und Ich, by Daniel Kehlmann. A slim 63 pages in which Daniel Kehlmann reflects on his visit to Silicon Valley and his play with the artificial intelligence CTRL. Nice coffee-break reading.
  • Dicht, by Stefanie Sargnagel. A coming-of-age novel of 248 pages involving a great deal of mischief. No book so far has given me such a craving for canned beer. First-rate.
  • Bot – Gespräch ohne Autor, by Clemens Setz. These 160 pages are unordered texts from Setz's electronic diary. An interesting format that I initially struggled with, but which grew on me more and more.
  • Zwischen Ruhm und Ehre liegt die Nacht, by Andrea Petković. Across 267 pages of short stories you learn a great deal about the life of professional tennis player and literature lover Petković. A lovely reflection by a top athlete on her career and on the dark sides of professional and celebrity life on the tennis circuit. Incidentally, Andrea Petković runs a reading circle on Instagram as “racquetbookclub”.
  • Für mich soll es Neurosen regnen, by Peter Wittkamp. Across 316 pages Wittkamp tries to pin down when a quirk becomes a compulsion, recounts his stay in psychiatric care, and shares all sorts of things about his compulsions. The writing style tries to be breezy and amusing, but the inflationary use of “now and then” might hint at yet another of the author's compulsions.
  • Die Vogelstraußtrompete, by Clemens J. Setz. 84 pages of entertaining prose.
  • Faserland, by Christian Kracht. I wanted to read these 165 pages in preparation for Kracht's Eurotrash, published in 2021. Quite fascinating.

02 April, 2021 04:21PM

SparkyLinux

Sparky news 2021/03

The third monthly Sparky project and donation report of 2021:
* Linux kernel updated up to version 5.11.11 & 5.12-rc4
* Sparky 2021.03 of the semi-rolling line released
* added to repos: Sparky System, MultiOS-USB, Polo file manager, Filmulator-GUI, broadcom-bt-firmware (thanks to Darek)
* Sparky 2021.03 Special Editions: GameOver, Multimedia & Rescue released

Many thanks to all of you for supporting our open-source projects, especially in these difficult days. Your donations help keep them, and us, alive.

Don’t forget to send a small tip in April too, please.

Anja G.
€ 30
Olaf T.
€ 10
Krzysztof M.
PLN 50
Andrzej T.
PLN 100
Wojciech H.
PLN 10
Paolo R.
€ 50
Peter W.
€ 20
Krzysztof S.
PLN 60
Wolfgang L.
€ 12
Nicholas F.
€ 10.75
Sebastian K.
PLN 50
Andrzej M.
Paweł S.
PLN 7.77
Rudolf L.
€ 10
Aleksander G.
PLN 45
Karl A.
€ 1.66
Marek B.
PLN 10
Alexander F.
€ 10
Stefan L.
$ 5
$ 50
$ 2.5
$ 5
Mitchel V.
$ 50
Tom C.
$ 15
Gernot P.
$ 10
Frederick M.
€ 20
Spencer F.
€ 10
Jacek G.
PLN 40
Stanisław G.
PLN 20
William D.
€ 25
Aymeric L.
€ 10
Dariusz M.
€ 10
Władysław K.
PLN 20
Andrzej P.
PLN 10
Maciej P.
PLN 22
Krzysztof T.
PLN 100
Ralf A.
€ 10
Jorg S.
€ 5
€ 244.41
PLN 549.77
$ 137.5

* Keep in mind that some amounts coming to us will be reduced by commissions for online payment services. Only direct sending donations to our bank account will be credited in full.

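As a rough illustration of the commission effect described above, here is a small sketch of what a typical online-payment cut does to a donation. The fee values are purely illustrative assumptions, not the rates of any particular service:

```python
def net_donation(amount, percent_fee=0.029, fixed_fee=0.35):
    """Amount the project actually receives after an online payment
    service takes a percentage plus a fixed per-transaction fee.
    Fee values here are illustrative, not any real service's rates."""
    return round(amount - amount * percent_fee - fixed_fee, 2)

print(net_donation(10.0))   # 9.36
print(net_donation(100.0))  # 96.75
```

A direct bank transfer skips both fees, which is why only those donations are credited in full.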

02 April, 2021 03:17PM by pavroo


Ubuntu developers

Stephen Michael Kellat: Catching Up Driving

Usually in the realm of Ubuntu, when we talk about a “daily driver” we are talking about computers. Over the past couple of days, my 2005 Subaru Forester decided to fail on me. A harsh climate, poorly maintained roads in an economically disadvantaged area, and more helped bring about the end of my being able to drive that nice station wagon.

I don’t really ask for much in a car. I don’t really need much in a car. When it comes to the “entertainment system” I end up listening to AM radio for outlets like WJR, WTAM, CKLW, KDKA, and CFZM. On the FM side I have been listening to WERG quite a bit. In the midst of all that I probably forgot to mention the local station WWOW. A simple radio for me goes quite a long way.

In the new vehicle there has been the option for using Android Auto. This is a new thing to me. I’ve only ever had the opportunity to drive a vehicle equipped with such tech this week.

Android Auto is certainly different and something I will need to get used to. Fortunately we live in a time of change. I’m still trying to wrap my head around the idea of having VLC available to me on the car dashboard.

This is definitely not the way I expected to start the fourth month of 2021 but this has been a year of surprises. I’ve got some ISO testing to get back to if I can manage to avoid other non-computer things breaking…

Tags: Automobiles

02 April, 2021 04:16AM

April 01, 2021

Ubuntu

Ubuntu 21.04 (Hirsute Hippo) Final Beta released

The Ubuntu team is pleased to announce the Beta release of the Ubuntu 21.04 Desktop, Server, and Cloud products.

21.04, codenamed “Hirsute Hippo”, continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This Beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu Budgie, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.

The Beta images are known to be reasonably free of showstopper image-build or installer bugs, and provide a very recent snapshot of 21.04 that should be representative of the features intended to ship with the final release, expected on April 22nd, 2021.

Ubuntu, Ubuntu Server, Cloud Images

Hirsute Beta includes updated versions of most of our core set of packages, including a current 5.11 kernel, and much more.

To upgrade to Ubuntu 21.04 Beta from Ubuntu 20.10, follow these instructions:


The Ubuntu 21.04 Beta images can be downloaded at:

http://releases.ubuntu.com/21.04/ (Ubuntu and Ubuntu Server on x86)

This Ubuntu Server image features the next generation Subiquity server installer, bringing the comfortable live session and speedy install of the Ubuntu Desktop to server users.

Additional images can be found at the following links:

http://cloud-images.ubuntu.com/daily/server/hirsute/current/ (Cloud Images)
http://cdimage.ubuntu.com/releases/21.04/beta/ (Non-x86)

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20210401 or higher) should be considered a Beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.
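Since those serials are zero-padded YYYYMMDD build stamps, checking whether an image falls in the Beta window reduces to a simple comparison. A small illustrative helper (the function name is ours, not part of any Ubuntu tooling):

```python
def is_beta_serial(serial: str, cutoff: str = "20210401") -> bool:
    """Treat a daily cloud-image serial as a Beta image if it is on or
    after the cutoff date. Zero-padded YYYYMMDD strings sort
    chronologically, so plain string comparison is enough."""
    return serial >= cutoff

print(is_beta_serial("20210401"))  # True: first Beta-era serial
print(is_beta_serial("20210331"))  # False: pre-Beta daily image
```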

The full release notes for Ubuntu 21.04 Beta can be found at:



Kubuntu

Kubuntu is the KDE-based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Beta images can be downloaded at:


Lubuntu

Lubuntu is a flavor of Ubuntu which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu base.

The Beta images can be downloaded at:

Ubuntu Budgie

Ubuntu Budgie is a community-developed desktop, integrating the Budgie Desktop Environment with Ubuntu at its core.

The Beta images can be downloaded at:


UbuntuKylin

UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Beta images can be downloaded at:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Beta images can be downloaded at:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.

The Beta images can be downloaded at:


Xubuntu

Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.

The Beta images can be downloaded at:

Regular daily images

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit https://ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: https://ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


Originally posted to the ubuntu-announce mailing list on Thu Apr 1 20:26:42 UTC 2021 by Łukasz ‘sil2100’ Zemczak, on behalf of the Ubuntu Release Team

01 April, 2021 10:33PM by guiverc


Ubuntu developers

Ubuntu Studio: Ubuntu Studio 21.04 Beta (Hirsute Hippo) Released

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 21.04, codenamed Hirsute Hippo.

While this beta is reasonably free of any showstopper DVD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 21.04 is released on April 22, 2021.

Please note: Due to the change in desktop environment, directly upgrading to Ubuntu Studio 21.04 from 20.04 LTS is not supported and will not be supported. However, upgrades from Ubuntu Studio 20.10 will be supported. See the Release Notes for more information.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/21.04/beta/

Full updated information is available in the Release Notes.

New Features

Ubuntu Studio 21.04 includes the new KDE Plasma 5.21 desktop environment. This is a beautiful and functional upgrade over previous versions, and we believe you will like it.

Agordejo, a refined GUI frontend to New Session Manager, is now included by default. This uses the standardized session manager calls throughout the Linux Audio community to work with various audio tools.

Studio Controls is upgraded to 2.1.4 and includes a host of improvements and bug fixes.

BSEQuencer, Bshapr, Bslizr, and BChoppr are included as new plugins, among others.

QJackCtl has been upgraded to 0.9.1, which is a huge improvement. However, we still maintain that Jack should be started with Studio Controls for its features; QJackCtl remains a good patchbay and Jack system monitor.

There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.

Known Issues

Official Ubuntu Studio release notes can be found at https://wiki.ubuntu.com/HirsuteHippo/Beta/UbuntuStudio

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/HirsuteHippo/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://wiki.ubuntu.com/HirsuteHippo/ReleaseNotes

Please Test!

If you have some time, we’d love for you to join us in testing. Testing begins…. NOW!

01 April, 2021 09:58PM

Lubuntu Blog: Lubuntu 21.04 (Hirsute Hippo) BETA testing

We are pleased to announce that the beta images for Lubuntu 21.04 have been released! While we have reached the bugfix-only stage of our development cycle, these images are not meant to be used in a production system. We highly recommend joining our development group or our forum to let us know about any issues. Ubuntu Testing Week Ubuntu, […]

01 April, 2021 09:53PM

Podcast Ubuntu Portugal: Ep 136 – Galinha

Giving due prominence to the new items in the Nitrokey catalogue, this episode's topics also covered Constantino's streaming tests and Carrondo's lazy bash scripts.

You know the drill: listen, subscribe and share!

  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/books/learn-you-more-code-no-starch-press-books?partner=PUP
  • https://www.humblebundle.com/books/stuff-that-kids-love-adams-media-books?partner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal


You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, please pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

01 April, 2021 09:45PM

Ubuntu Podcast from the UK LoCo: S14E04 – Shrug Material Bits

This week we’ve been switching to Brave and KDE. We discuss what Alan’s birthday present could be and go over all your wonderful feedback.

It’s Season 14 Episode 04 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

01 April, 2021 02:00PM

Ubuntu Blog: Ceph Pacific 16.2.0 is now available

April 1st 2021 – Today, Ceph upstream released the first stable version of ‘Pacific’, a full year after the last stable release, ‘Octopus’. Pacific focuses on usability and cross-platform integrations, with exciting features such as iSCSI and NFS promoted to stable and major dashboard enhancements. This makes it easier to integrate, operate and monitor Ceph as a unified storage system. Ceph packages are built for Ubuntu 20.04 LTS and Ubuntu 21.04 to ensure a uniform experience across clouds.

You can try the Ceph Pacific beta by following these instructions, and your deployment will automatically upgrade to the final release as soon as it’s made available from Canonical. 

What’s new in Ceph Pacific?

As usual, the Ceph community grouped the latest enhancements into five themes, listed here in descending order of significance: usability, quality, performance, multi-site usage, and ecosystem & integrations.


Usability

The highlight of Pacific is the cross-platform availability of Ceph with a new native Windows RBD driver and the iSCSI and NFS gateways becoming stable. These allow a wide variety of platforms to take advantage of Ceph: from your Linux native workloads to your VMware clusters to your Windows estate, you can leverage scalable software-defined storage to drive infrastructure costs down. 

It is also worth mentioning that the Ceph dashboard now includes all core Ceph services and extensions – i.e. object, block, file, iSCSI, NFS Ganesha – as it evolves into a robust and responsive management GUI in front of the Ceph API. It also provides new observability and management capabilities to manage the Ceph OSDs and multisite deployments, enforce RBAC, define security policies and more.

A new host maintenance mode reduces unexpected outages, as the cluster is informed when a node is about to go under maintenance. Cephadm, the orchestrator module, got a new exporter/agent mode that increases performance when monitoring large clusters. Other notable usability enhancements in Pacific include a simplified status output and progress bar for the cluster recovery processes, MultiFS marked stable, and the MDS-side encrypted file support in CephFS.


Quality

RADOS is, as usual, the focal point when it comes to quality improvements to make Ceph more robust and reliable. Placement groups can be deleted significantly faster, and this has a smaller impact on client workloads. On CephFS, a new feature bit allows turning required file system features on or off, preventing any older clients that do not support required features from being rejected. Lastly, enhanced public dashboards based on Ceph’s telemetry feature are now available, giving users insights about the use of Ceph clusters and storage devices in the wild, helping drive data-based design and business decisions.


Performance

The RADOS Bluestore backend now supports RocksDB sharding to reduce disk space requirements, a hybrid allocator lowers memory use and disk fragmentation, and work was done to bring finer-grained memory tracking. The use of the mclock scheduler and extensive testing on SSDs helped improve QoS and system performance. Ephemeral pinning, improved cache management and asynchronous unlink/create improve performance and scalability and reduce unnecessary round trips to the MDS for CephFS.

Ceph Crimson got a prototype for the new SeaStore backend, alongside a compatibility layer to the legacy BlueStore backend. New recovery, backfill and scrub implementations are also available for Crimson with the Pacific release. Ceph Crimson is the project to rewrite the Ceph OSD module to better support persistent memory and fast NVMe storage.


Multi-site usage

The snapshot-based multi-site mirroring feature in CephFS enables automatic replication of snapshots of any directory from a source cluster to remote clusters. Similarly, the per-bucket multi-site replication feature in RGW, which received significant stability enhancements, allows for asynchronous data replication at a site or zone level while federating multiple sites at once.

Ecosystem & integrations

Enhancing the user experience while onboarding to Ceph is the focus of the ecosystem theme, with ongoing projects to revamp the documentation and the ceph.io website while removing instances of racially charged terms. Support of ARM64 is also in progress with new CI, release builds and testing workflows and Pacific will be the first Ceph release to be available, although initially with limited support, on ARM.

On the integrations front, Rook is now able to operate stretch clusters across two datacenters with a MON in a third location, and can manage CephFS mirroring using CRDs. The container storage interface allows OpenStack Manila to integrate with container and cloud platforms, bringing enhanced management and security capabilities to CephFS and RBD.

Ceph Pacific available on Ubuntu

Try Ceph Pacific now on Ubuntu to combine the benefits of a unified storage system with a secure and reliable operating system. You can install the Ceph Pacific beta from the OpenStack Wallaby Ubuntu Cloud Archive for Ubuntu 20.04 LTS or using the development version of Ubuntu 21.04 (Hirsute Hippo).

Canonical supports all Ceph releases as part of the Ubuntu Advantage for Infrastructure enterprise support offering. Canonical’s Charmed Ceph packages the upstream Ceph images in wrappers called charms, which add lifecycle automation capabilities, significantly simplifying Ceph deployments and day-2 operations thanks to the Juju model-driven framework. Charmed Ceph Pacific will be released in tandem with the Canonical OpenStack Wallaby release in late April 2021.

Learn more about Canonical Ceph storage offerings

Suggested resources for Ceph

01 April, 2021 07:00AM

Ubuntu Blog: Kubernetes across clouds: Ubuntu at NVIDIA GTC 2021


NVIDIA GTC is back again and we’re thrilled to be talking all things Kubernetes with you, on April 12-16! This year too, the conference will be hosted virtually and registration is free, which means even more of us can get together to share knowledge and ideas at the #1 AI conference and workshop!

Ubuntu and Canonical will be hosting two original GTC sessions, centered around Kubernetes. Whether you’re interested in Kubernetes on workstations, in the cloud(s), or at the edge, we’ll show you how Canonical’s work on MicroK8s and micro-clouds can make Kubernetes simpler for you.

Browse sessions

Canonical and Ubuntu’s Kubernetes sessions at GTC 2021

Simplifying Kubernetes across the clouds: MicroK8s on NVIDIA Tech Stack [SS33138]

Although Kubernetes revolutionised the software life cycle, its steep learning curve still discourages many users from adopting it. MicroK8s is a production-grade, low-touch Kubernetes that abstracts the complexity and can address use cases from workstations to clouds to the edge. We’ll highlight the details of MicroK8s’ simplicity and robustness and demonstrate the different usage scenarios, running it on NVIDIA DGX, EGX, DPU and Jetson hardware using real applications from NVIDIA marketplace.

RSVP for the session

The power of micro-clouds: a new class of compute for the edge [SS33238]

Is edge computing reversing years of cloud investment? What constitutes a strong edge strategy?

Iteration of cloud technologies has made automation simpler than ever and created the right primitives to solve all kinds of problems. The reality, however, is more distributed than the cloud: devices can be anywhere, in motion, or even sent somewhere never to come back. These challenges apply to edge data centers (with NVIDIA EGX boxes) as well as smart devices (with NVIDIA Jetsons), with increasing GPU usage for low-latency analytics, inferencing, and visualization, while offloading security and storage to DPUs.

With micro clouds, it becomes possible to bring cloud capabilities to the broadest range of devices at the edge, accelerating innovation and underpinning operations.

This talk will cover Canonical’s micro cloud ingredients for the edge: an opinionated open-source stack, from Ubuntu to MicroK8s, that gives developers the same virtualization and containerization capabilities from cloud to edge.

RSVP for the session

We hope to see you there!

01 April, 2021 06:05AM

March 31, 2021


Purism PureOS

Snitching on Phones That Snitch On You

Our phones are our most personal computers, and the most vulnerable to privacy abuses. They carry personal files and photos, our contact list, and our email and private chat messages. They also are typically always left on and always connected to the Internet either over a WiFi or cellular network. Phones also contain more sensors and cameras than your average computer so they can not only collect and share your location, but the GPS along with the other sensors such as the gyroscope, light sensor, compass and accelerometer can reveal a lot more information about a person than you might suspect (which is why we designed the Librem 5 with a “lockdown mode” so you can turn all of that off).

One of the problems with the security measures implemented in Android and iOS is that they restrict the user as much as, if not more than, they restrict an attacker. Ultimately Google and Apple control what your phone can and can’t do, not you. While these security measures are marketed as making your phone a strong castle you live inside, that’s only true if you hold the keys. As I mentioned in my article Your Phone is Your Castle:

If you live inside a strong, secure fortification where someone else writes the rules, decides who can enter, can force anyone to leave, decides what things you’re allowed to have, and can take things away if they decide it’s contraband, are you living in a castle or a prison? There is a reason that bypassing phone security so you can install your own software is called jailbreaking.

You not only don’t have much say over what Google or Apple do on your phone, but also these security measures mean you can’t see what the phone is doing behind the scenes. While you might suspect your phone is snitching on you to Google or Apple, without breaking out of that jail it’s hard to know for sure.

Your Phone Snitches On You

It turns out that if you did break out of jail and monitor your phone, you’d discover it is snitching on you constantly. A research paper just published by Douglas J. Leith at Trinity College Dublin, Ireland, says it all in the abstract (emphasis mine):

We investigate what data iOS on an iPhone shares with Apple and what data Google Android on a Pixel phone shares with Google. We find that even when minimally configured and the handset is idle both iOS and Google Android share data with Apple/Google on average every 4.5 mins. The phone IMEI, hardware serial number, SIM serial number and IMSI, handset phone number etc. are shared with Apple and Google. Both iOS and Google Android transmit telemetry, despite the user explicitly opting out of this. When a SIM is inserted both iOS and Google Android send details to Apple/Google. iOS sends the MAC addresses of nearby devices, e.g. other handsets and the home gateway, to Apple together with their GPS location. Users have no opt out from this and currently there are few, if any, realistic options for preventing this data sharing.
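To put that cadence in perspective, a quick back-of-the-envelope calculation, taking the paper's reported 4.5-minute average at face value around the clock:

```python
# Back-of-the-envelope scale check: how often does an idle handset phone
# home per day, if we take the paper's reported average of one contact
# every 4.5 minutes at face value?
MINUTES_PER_DAY = 24 * 60        # 1440
interval_minutes = 4.5           # average interval reported by the paper

contacts_per_day = MINUTES_PER_DAY / interval_minutes
print(contacts_per_day)          # 320.0, i.e. roughly 320 contacts a day
```

That is hundreds of opportunities a day for an "idle" phone to report identifiers back to its vendor.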

I should note that both Google and Apple dispute some of the findings and methodology in this paper, which you can read about in reporting by Ars Technica. Yet they don’t seem to dispute that they do this (because they claim it’s essential for the OS to function); they only quibble over how much they do it, how much bandwidth is used, and how much the user can opt out of this telemetry. Even more telling is Google’s defense in the article, which perfectly summarizes how they view the world:

The company [Google] also contended that data collection is a core function of any Internet-connected device.

Just to underscore the point, we aren’t talking about the massive privacy issues with apps on your phone that snitch on you to app vendors, instead this study focused just on what the OS itself does, often in the background while idle, or while doing simple things like inserting a SIM card or looking at settings. Also, the data that is being shared uniquely identifies you (including your IMSI and phone number, IP and location) and your hardware (IMEI, hardware serial number, SIM serial number).

How to Snitch On Your Phone

The Librem 5 runs PureOS and not Android nor iOS, and Purism is a Social Purpose Company that puts protecting customer privacy in our corporate charter. We treat data like uranium, not gold, and don’t collect any telemetry by default on the Librem 5 phone just like we don’t on our other computers. The only connection a Librem 5 makes to Purism servers is to check for software updates and you can change that by pointing to one of our mirrors or you can disable the automatic checks entirely. In that communication all we get is a web log of an IP address and any software you may have downloaded, the same information you share when you visit any other website. We do not capture unique identifying data (like IMEI or other hardware serial numbers) that links that traffic to you and your phone.

In general the Librem 5 only talks to the Internet when you start an application that needs it. All of the applications we install by default respect your privacy and applications within PureOS do as well. Because everything in PureOS is free software, if an application wanted to violate your privacy they’d have to do it out in the open in the source code, and if someone didn’t like it, they could fork the code and publish a version without that telemetry.

That said, there are some applications you can install like Firefox that do collect telemetry by default. While you could audit the source code to look for anything sketchy, it would be even better if you could just monitor all of the outgoing network connections your applications make and block any you don’t like. While we think you should trust us when we say Purism doesn’t spy on you, we also think you should be able to verify our claims and protect yourself. This is where a tool like OpenSnitch comes in.


OpenSnitch is inspired by a similar program on MacOS called Little Snitch and it acts as a firewall for a desktop user. Unlike traditional firewall tools that were designed for servers and are mostly concerned with incoming connections, OpenSnitch works on the principle that the larger threat on desktops isn’t incoming connections (since desktops rarely have open ports anyway) but outgoing connections. On a desktop an attacker trying to connect to a vulnerable network service is a relatively low threat. A much larger threat is an application that gets compromised (or adds sketchy features that haven’t been caught in a code audit) and starts making unauthorized connections out to the attacker’s servers.

While OpenSnitch isn’t yet packaged for PureOS, I’ve been evaluating it on my Librem 5 for a few weeks now. Even though I’m running the regular desktop version of OpenSnitch, it works surprisingly well on the Librem 5 and while the interface is complicated with lots of tabs and tables, it actually fits well on the screen already.

Main OpenSnitch window, displaying outgoing traffic

OpenSnitch monitors all new outgoing network connections and alerts you when something new shows up it doesn’t already have a rule for. The alert shows which application is making the connection, where it is connecting, and on which port. You can then choose to allow or deny the connection, and whether to apply this rule forever, until the next reboot, or for a number of minutes. There is also a 15 second countdown timer that will deny the connection after it times out. The idea here is to protect your computer from unauthorized outbound connections when the computer is unattended.

OpenSnitch warning about Firefox connecting on localhost port 8080

You can also click the + button and fine-tune the rule. This can be handy if you want to allow a program to access DNS regardless of what it’s looking up, so you can just select port 53. You can even restrict a rule so it only applies to a particular user on the system.
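Rules you accept end up persisted as JSON files, typically under /etc/opensnitchd/rules/ (the path and the exact schema below are assumptions based on OpenSnitch’s documented rule format, so verify them against the version you have installed). A hand-written rule of the kind described above, allowing an application to use DNS on port 53, might look like this:

```shell
# Hypothetical example: a persistent OpenSnitch rule allowing DNS (port 53)
# traffic from Firefox. The field names follow OpenSnitch's documented rule
# format, but treat the exact schema as an assumption that may vary by version.
cat > /tmp/allow-firefox-dns.json <<'EOF'
{
  "name": "allow-firefox-dns",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "list",
    "operand": "list",
    "list": [
      { "type": "simple", "operand": "process.path", "data": "/usr/lib/firefox/firefox" },
      { "type": "simple", "operand": "dest.port", "data": "53" }
    ]
  }
}
EOF
# sudo mv /tmp/allow-firefox-dns.json /etc/opensnitchd/rules/  # then restart opensnitchd
```

Writing a few such rules by hand for your known-good applications is one way to cut down on the prompt fatigue described below.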

OpenSnitch is a really powerful tool, but software like this requires a lot of time spent training the firewall, and can sometimes cause odd app errors until you realize the firewall is just doing its job. It would definitely benefit from a set of “known good” baseline rules you could apply so you only get prompted for the real outliers. Because of this I don’t know that it’s something the average user would want to install by default, but it’s definitely something useful for people facing more extreme threats.

This would also be a great tool for an IT organization to deploy throughout a fleet of computers along with custom rules that factor in their known good services. It would add an additional layer of protection that would be relatively seamless for their employees.

A Phone That’s On Your Side

A phone that snitches on you and sends a trove of personally-identifying data back to the vendor every few minutes, even if it’s idle, is not on your side. A phone that’s on your side helps you snitch on them. A phone that’s on your side honors your opt-out requests and ideally requires you to opt-in to anything that risks your privacy. A phone that’s on your side doesn’t collect your data, it protects it.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the-people stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Order now

The post Snitching on Phones That Snitch On You appeared first on Purism.

31 March, 2021 09:01PM by Kyle Rankin

Ubuntu developers

Ubuntu Blog: Announcing Ubuntu on Windows Community Preview – WSL 2

We are thrilled to release the Ubuntu on Windows Community Preview,  a special build of Ubuntu for the Windows Subsystem for Linux (WSL) that serves as a sandbox for experimenting with new features and functionality. Over the past year, we have proudly hosted two WSL conferences known as WSLConf. WSLConf was initially intended to be an event where the early adopters of WSL could share best practices. As interest and engagement spread, the now global conference has turned into a hub for innovation, collaboration, and ideas. The new Ubuntu on Windows Community Preview is our way of thanking the community and providing a space for us to collectively shape the future of Ubuntu on WSL.

How to get the Ubuntu on Windows Community Preview

The Ubuntu on Windows Community Preview will only be available through this link to the Microsoft store. You will not be able to find the Community Preview just by searching in the Microsoft Store.

Before you Start

To ensure you have the latest version of the Ubuntu on Windows Community Preview, you will need to ‘reset’ the app in Windows Settings. Note this will permanently delete any files contained on Ubuntu on Windows Community Preview and unpack the latest cached image over it. If you want to keep a backup of your existing image, use the wsl.exe --export feature.


When you reset Ubuntu on Windows Community Preview, you will reset to the version cached, which could be a different version than the one you had before. Also note this background caching could involve a higher level of bandwidth usage than normal and therefore is not appropriate for metered or limited bandwidth users.

The Ubuntu on Windows Community Preview is for advanced users of WSL who are interested in helping test and enhance new features coming to WSL. It is not recommended as a “daily driver” unless you are ok handling the occasional hiccup, which is inevitable in this kind of environment.

The stable LTS version remains the recommended choice for new users and enterprise users, and will continue to provide a polished experience.

As you play around, please report any feedback or bugs to us on Launchpad:

  1. For Ubuntu WSL Out-Of-Box Experience, report at: Ubuntu WSL OOBE (launchpad.net)
  2. For Ubuntu WSL Integrations, report at: Ubuntu WSL Integration (launchpad.net)
  3. For wslu, report at: wslu package: Ubuntu (launchpad.net)

What’s in the Preview?

This preview build is designed to test our new image creation tooling and out-of-box-experience (OOBE) based on the latest development branch of Ubuntu.  We are testing a new feature designed for smoother onboarding, an enhanced out-of-box experience, better integration with the new ubuntuwsl tool, and testing the new official Ubuntu theme for Windows Terminal. It will also be frequently updated and used to test other features we bring to Ubuntu on WSL in the future.

Dive into the Community Preview now to begin testing these new features:

  1. Ubuntu WSL Out-Of-Box Experience: A user-friendly setup interface for the first run, based on `subiquity` (the same installer used by Ubuntu Server), that allows you to perform additional customization of your environment after your first run.
  2. Ubuntu WSL command-line interface `ubuntuwsl`: A tool that allows you to readily configure your WSL distro.
  3. Windows Terminal Fragment Extension: A work-in-progress extension that provides an entry in Windows Terminal.

UWCP is also built with our new tool, Ubuntu Cooker, which automates the build process with a single command.

Here’s a glimpse into what you’ll see in the Ubuntu on Windows Community Preview:


Windows Terminal Theme/Integration (In Preview Only)


Ubuntu Out-Of-Box Experience


The ubuntuwsl utility tool — Example use case


The ubuntuwsl utility tool — Text-base UI


Why we built this and why it’s so important

Ubuntu has historically only provided Long Term Support (LTS) releases for the WSL in the Microsoft Store.  For enterprise and new users of WSL, this is ideal because these versions are optimised for the best experience out of the box. To make it easy to find, we have created a generic Ubuntu on Windows application that will always contain the most recent LTS.  Enterprises are encouraged to leverage our LTS versions where they get security updates for five years without having to think about release upgrades. For more advanced users, we have had interim releases available for side-loading from our cloud image repository.   These interim releases have been fully tested and are supported for 9 months. Since these are side-loaded images from the cloud image repo, they lack some important assets necessary for us to test new features and ideas with the broader community. 

In order to match the speed of innovation in open-source and test new features with the community at a faster pace and shorter cadence, we have created the Ubuntu on Windows Community Preview. We are thrilled to be expanding the WSL community and providing mechanisms for innovation and ideas to flourish. Please join us there and help us create the future of Ubuntu on the Windows Subsystem for Linux.

31 March, 2021 03:30PM

Timo Jyrinki: MotionPhoto / MicroVideo File Formats on Pixel Phones

Google Pixel phones support what they call ”Motion Photo” which is essentially a photo with a short video clip attached to it. They are quite nice since they bring the moment alive, especially as the capturing of the video starts a small moment before the shutter button is pressed. For most viewing programs they simply show as static JPEG photos, but there is more to the files.

I’d really love proper Shotwell support for these file formats, so I posted a longish explanation with many of the details in this blog post to a ticket there too. Examples of the newer format are linked there too.

Info posted to Shotwell ticket

There are actually two different formats: an old one that is already obsolete, and a newer current format. The older ones are those that your Pixel phone recorded as ”MVIMG_[datetime].jpg”, and they have the following meta-data:

Xmp.GCamera.MicroVideo                         XmpText  1  1
Xmp.GCamera.MicroVideoVersion                  XmpText  1  1
Xmp.GCamera.MicroVideoOffset                   XmpText  7  4022143
Xmp.GCamera.MicroVideoPresentationTimestampUs  XmpText  7  1331607

The offset is actually counted from the end of the file, so one needs to calculate accordingly. But it is exact otherwise, so one can simply extract the video using that meta-data:

# Extracts the microvideo from a MVIMG_*.jpg file

# The offset is from the end of the file, so calculate accordingly
offset=$(exiv2 -p X "$1" | grep MicroVideoOffset | sed 's/.*\"\(.*\)"/\1/')
filesize=$(du --apparent-size --block-size=1 "$1" | sed 's/^\([0-9]*\).*/\1/')
extractposition=$(expr $filesize - $offset)
echo offset: $offset
echo filesize: $filesize
echo extractposition=$extractposition
dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg "$1").mp4"

The newer format is recorded in filenames called ”PXL_[datetime].MP.jpg”, and they have a _lot_ of additional metadata:

Xmp.GCamera.MotionPhoto                                  XmpText  1  1
Xmp.GCamera.MotionPhotoVersion                           XmpText  1  1
Xmp.GCamera.MotionPhotoPresentationTimestampUs           XmpText  6  233320
Xmp.xmpNote.HasExtendedXMP                               XmpText 32  E1F7505D2DD64EA6948D2047449F0FFA
Xmp.Container.Directory                                  XmpText  0  type="Seq"
Xmp.Container.Directory[1]                               XmpText  0  type="Struct"
Xmp.Container.Directory[1]/Container:Item                XmpText  0  type="Struct"
Xmp.Container.Directory[1]/Container:Item/Item:Mime      XmpText 10  image/jpeg
Xmp.Container.Directory[1]/Container:Item/Item:Semantic  XmpText  7  Primary
Xmp.Container.Directory[1]/Container:Item/Item:Length    XmpText  1  0
Xmp.Container.Directory[1]/Container:Item/Item:Padding   XmpText  1  0
Xmp.Container.Directory[2]                               XmpText  0  type="Struct"
Xmp.Container.Directory[2]/Container:Item                XmpText  0  type="Struct"
Xmp.Container.Directory[2]/Container:Item/Item:Mime      XmpText  9  video/mp4
Xmp.Container.Directory[2]/Container:Item/Item:Semantic  XmpText 11  MotionPhoto
Xmp.Container.Directory[2]/Container:Item/Item:Length    XmpText  7  1679555
Xmp.Container.Directory[2]/Container:Item/Item:Padding   XmpText  1  0

Sounds like fun and lots of information. However, I didn’t see why the “Length” in the first item is 0, and I didn’t see how to use the latter Length info. But I can use the mp4 headers to extract it:

# Extracts the motion part of a MotionPhoto file PXL_*.MP.jpg

extractposition=$(grep --binary --byte-offset --only-matching --text \
-P "\x00\x00\x00\x18\x66\x74\x79\x70\x6d\x70\x34\x32" $1 | sed 's/^\([0-9]*\).*/\1/')

dd if="$1" skip=1 bs=$extractposition of="$(basename -s .jpg $1).mp4"

UPDATE: I wrote most of this blog post earlier. When now actually getting to publishing it a week later, I see the obvious, i.e. the ”Length” is again simply the offset from the end of the file, so one could do the same, less brute-force approach as for MVIMG. I’ll leave the above as is however for the ❤️ of binary grepping.
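For completeness, here is a sketch of that less brute-force approach (untested against real PXL files, so treat it as an assumption): since Item:Length, like MicroVideoOffset, counts bytes from the end of the file, the MP4 payload is simply the file’s last Length bytes. One could read the value with exiv2 as shown in the metadata dump and then:

```shell
# Sketch: extract the video track of a PXL_*.MP.jpg given its Item:Length
# value (readable with exiv2, as in the metadata dump above). The MP4 payload
# occupies the last $length bytes of the JPEG container, so tail suffices.
extract_motion() {
  file="$1"
  length="$2"   # value of Xmp.Container.Directory[2]/.../Item:Length
  tail -c "$length" "$file" > "${file%.jpg}.mp4"
}
```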

(cross-posted to my other blog)

31 March, 2021 11:06AM by TJ (noreply@blogger.com)

Ubuntu Blog: Ubuntu for machine learning with NVIDIA RAPIDS in 10 min

TL;DR: If you just want a tutorial to set up your data science environment on Ubuntu using NVIDIA RAPIDS and NGC Containers, just scroll down. I would, however, recommend reading the reasoning behind certain choices to understand why this is the recommended setup.

Cloud or local setup

Public clouds offer a great set of solutions for data professionals. You can set up a VM, container, or use a ready-made environment that presents you with a Jupyter notebook. They are also great in terms of productizing your solution and exposing an inference endpoint. Nevertheless, every data scientist needs a local environment.

If you are starting your career it’s better to understand exactly how all the pieces are working together, experiment with many tools and frameworks, and do it in a cost-effective way.

If you are an experienced professional, you will always meet a customer who cannot put their data on a public cloud, e.g. for compliance or regulatory reasons.

Additionally, I like to be able to take my work with me on a trip, and sometimes I’m not within range of a fast internet connection. Having your own machine makes a lot of sense.

Why Ubuntu for data professionals

In terms of operating systems for your local environment, you have a choice of Linux, Windows, and Mac.

We can drop Mac immediately, because there is no option to include an NVIDIA GPU, and you need one for any serious model training. If you really like the ecosystem you can make it work with a MacBook and an eGPU enclosure, but it’s not portable anymore.

Another option would be Windows; using WSL you can have a decent development environment where all tools and libraries work well. But this option is still a niche in the machine learning community for now, and most of the community runs, tests, and writes tutorials based on Linux.

Ubuntu is the most popular Linux distribution amongst data professionals because it’s easy to use, and lets you focus on your job, instead of tinkering with the OS. Ubuntu is also the most popular operating system on public clouds so whatever you develop locally you can easily move to production without worrying about compatibility issues.

Canonical is working with many partners to make the AI/ML experience the best it can be for developers. You can find out more about it on the Ubuntu website.

Hardware and software

The next thing to look into is hardware. Of course, this depends on your budget, but the reasonable setup I could recommend is:

  • i7/i9 Intel CPU or Ryzen 7/9 from AMD 
  • At least 16GB of RAM, preferred 32GB or more
  • NVIDIA GPU – there are RTX or Quadro devices for professional workstations but a gaming GPU from 20XX or 30XX series would be good as well
  • Nice screen and keyboard – they impact your health a lot so don’t save on this and go for high quality

Thanks to Canonical’s collaboration with NVIDIA, GPU drivers are cryptographically signed, which makes your setup much more secure.

For all the basic software development tools, Ubuntu has you covered: using apt install or snap install, your favorite IDE or editor is one command away. As always I recommend Emacs, especially using the Doom Emacs configuration framework.

There are countless libraries and tools for machine learning. If you want a full suite of them that is well integrated, tested, and available in production environments, you should go with RAPIDS.


With RAPIDS you get:

cuDF – This is a data frame manipulation library based on Apache Arrow that accelerates loading, filtering, and manipulation of data for model training data preparation. The Python bindings of the core-accelerated CUDA DataFrame manipulation primitives mirror the Pandas interface for seamless onboarding of Pandas users.

cuML – This collection of GPU-accelerated machine learning libraries will eventually provide GPU versions of all machine learning algorithms available in Scikit-Learn.

cuGraph – This is a framework and collection of graph analytics libraries.

Anaconda or NGC containers

The next choice is how to manage your environment. When you play with data for a longer time, you will quickly get into a scenario where you have two projects (e.g. a work project and a pet project) that require different versions of Python, CUDA, or TensorFlow. The two most effective ways to tackle this issue are Anaconda and containers.

I would recommend familiarizing yourself with both of them. For learning new things or doing some simple exploratory data analysis with a new plotting library, I prefer to use conda, as it’s quick, has a low footprint, and is convenient.

If I even suspect that a project might go to production, I prefer to use containers, as they are portable between my machine, a customer’s private K8s cluster, and a public cloud.

You can of course build your own container image, and it’s a great skill to have, but you can also find ready-made, well-tested container images in NVIDIA NGC.

Moving to production

When you have a working solution and start thinking about moving to production, you need to familiarize yourself with MLOps. The best way to do that is to join the MLOps community Slack.

Setup instructions

These instructions are valid for the latest Ubuntu LTS release, which is 20.04.

After installing the Ubuntu 20.04 operating system, we need to install the drivers for the NVIDIA GPU.

First check if your GPU is detected correctly


Then install the drivers. You don’t need to worry about figuring out which version is right for you; the Ubuntu installer will take care of this for you.

sudo ubuntu-drivers autoinstall 

sudo reboot

You can check if everything is correctly installed using the nvidia-smi command.


Option 1: Conda based RAPIDS environments

wget -P /tmp https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh

bash /tmp/Anaconda3-2020.02-Linux-x86_64.sh

Answer the installer questions and create the environment:

conda create -n rapids-0.18 -c rapidsai -c nvidia -c conda-forge \
-c defaults rapids-blazing=0.18 python=3.7 cudatoolkit=11.0

conda activate rapids-0.18

You can then run jupyter notebook, which will open Jupyter in your browser.


Option 2: Docker containers with RAPIDS from NVIDIA

Install docker

sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

sudo apt-get update

sudo apt-get install -y docker-ce docker-ce-cli containerd.io

sudo usermod -aG docker $USER

Log out and back in (or run `newgrp docker`) for the group membership to take effect.

Install nvidia-docker2

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)

curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -

curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

curl -s -L https://nvidia.github.io/nvidia-container-runtime/experimental/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list

sudo apt-get update

sudo apt-get install -y nvidia-docker2

sudo systemctl restart docker

Check if the installation works correctly:

docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi


Now you can use the full set of NVIDIA containers from https://ngc.nvidia.com/catalog/. Let’s assume that we have a project using the TensorFlow library and Python 3. Then you need to run:

docker pull nvcr.io/nvidia/tensorflow:20.12-tf2-py3

mkdir ~/shared_dir

docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -it --rm -v ~/shared_dir:/container_dir nvcr.io/nvidia/tensorflow:20.12-tf2-py3

And your environment is ready


As you can see, it’s very easy and straightforward to set up an environment for data projects. Within five minutes of finishing the Ubuntu installation, you can land in a notebook or IDE and start being productive. This is not the final say from us: we will continue to work with our partners, and by the end of the year the experience will be even better.

In case of any issues or suggestions, you can find me and the rest of the Ubuntu team on Discourse.

31 March, 2021 10:00AM

Ubuntu Blog: How to choose the best enterprise Kubernetes solution


While containers are known for their multiple benefits for the enterprise, one should be aware of the complexity they carry, especially in large-scale production environments. Having to deploy, reboot, upgrade, or apply patches to hundreds and hundreds of containers is no easy feat, even for experienced IT teams. Different types of Kubernetes solutions have emerged to address this issue.

However, navigating these solutions to pick the right one is often challenging, as there is no true ‘one size fits all’. Each route you take to adopting Kubernetes comes with its pros and cons; this gets even trickier when you consider that what might be a deal breaker for one organisation might not be an issue for another, depending on each business’s specific profile.

So before we dive into the challenges and solutions of each type of Kubernetes, let’s explore some of the key considerations for businesses that will impact which Kubernetes approach is most suitable to their needs.

Get the full picture in our complete guide to enterprise Kubernetes.

Download whitepaper

What to consider when choosing an enterprise Kubernetes solution

1. Size

Every organization’s IT department and engineering team has different strengths, weaknesses, and priorities. These factors, together with an organisation’s size, can determine whether a Kubernetes solution helps or impedes a cloud native transformation. Kubernetes becomes exponentially more complex with scale; smaller organizations will thus not face some of the challenges that a Fortune 500 company would.

2. Technical sophistication

Some enterprises have a very high level of technical sophistication and consider their ability to not only develop and ship applications but also manage infrastructure as a competitive advantage. Other enterprises are less sophisticated and would rather focus on the application layer.

3. Industry

Different types of organizations are governed by a variety of compliance frameworks and have different requirements around uptime, security, and data loss.

4. Budget

Some organizations are more cost constrained than others, which might define what Kubernetes choice they make. For example, an organisation may not have sufficient know-how or bandwidth to manage its own Kubernetes but be forced to do so because of its inability to afford a managed Kubernetes service – even if that means imperfect results.


5. Current infrastructure

Some organisations run Kubernetes on premises; others run Kubernetes in the cloud; others plan to run on multiple public clouds or across on-premises and public clouds.

Kubernetes makes all of those choices possible, but the challenges differ depending on what type of environment Kubernetes will be running in.

Kubernetes challenges at the enterprise level

1. Unexpected costs

Kubernetes is open source, which usually helps reduce OpEx and CapEx. But in order to get additional features and support, many organisations opt for a commercial distribution tied to a cloud provider, which can lead to vendor lock-in, with licensing costs running high. Secondly, there can be significant costs tied to migrating legacy systems to containers, both in terms of human resources and data.

2. Skills gap

Kubernetes is still a relatively new technology and its ecosystem is ever evolving, which makes it difficult for individuals to keep up with the entire breadth of best practices and functionality.

3. Multi-cloud vs. hybrid cloud

Portability is one of the most important benefits of containers, but moving Kubernetes workloads across clouds can be costly.


4. Tool overwhelm

Kubernetes is surrounded by a huge ecosystem of tools and platforms, and there are new additions popping up constantly. While this brings businesses much needed flexibility, it poses the question ‘which tool out of all these is optimal for our specific needs?’ which can be complex to answer.

5. Security, networking, and storage

Managing stateful applications in Kubernetes is not only possible, but part of K8s’ original design. However, this requires expertise in managing cloud native storage, which behaves differently from storage in a legacy environment; properly setting up load balancing can also be tricky, but is essential to ensure the application stays available and performant.

6. Day 2 operations

Lastly, very few organisations think of day two operations at the proof of concept stage, but applications running on Kubernetes, like all applications, spend most of their life cycle in the production phase. These applications need to be patched, upgraded, and monitored.

What enterprise Kubernetes solutions are there?

1. Vanilla Kubernetes

Starting from the basics: pure open source Kubernetes is always an option, and is likely the first flavour of Kubernetes individuals will have experience with. Vanilla Kubernetes is extremely flexible and extensible, but it also lacks enterprise-grade features around monitoring, managing state, availability, lifecycle operations, and more.

Platform as a Service (PaaS) Kubernetes offerings are products from vendors that have created more opinionated Kubernetes packages. The platform generally includes a branded, pre-configured Kubernetes, as well as associated tools to manage infrastructure and applications. It is usually much easier for organisations to get up and running with a PaaS version of Kubernetes than with vanilla Kubernetes. This is accomplished by reducing the configuration options available to users and pre-selecting tools and services, which means that PaaS solutions are considerably less flexible and more difficult to upgrade.

2. Cloud hosted Kubernetes

Cloud hosted Kubernetes is convenient and easy: as with PaaS solutions, organisations let the cloud provider handle the Kubernetes infrastructure. The cloud provider controls the configuration and which tools can be integrated. The main differences between cloud hosted Kubernetes and PaaS solutions are cost and workload portability. Cloud hosted Kubernetes is less expensive than PaaS solutions; however, running a multi-cloud or hybrid cloud setup is not possible using cloud hosted Kubernetes alone.

3. Managed Kubernetes

Managed Kubernetes service providers handle the management of Kubernetes clusters for an organisation in their data centres, whether on premises or in the public cloud. Managed Kubernetes services offer enterprise support, uptime guarantees, and a hands-off experience for the customer.

4. Enterprise Kubernetes platforms

Enterprise Kubernetes platforms package upstream, conformant Kubernetes with tools that help organisations manage the entire application life cycle, and generally focus on providing one central platform to control multiple clusters and multiple environments. These platforms make it easier for centralised teams to control configurations and access management for the entire organisation. Enterprise Kubernetes platforms offer dramatically more flexibility than any option other than vanilla Kubernetes, and tend to prioritise either ease of use or advanced operational controls.


Which enterprise Kubernetes is right for me?


So how do these different solution types address the challenges of Kubernetes for the enterprise we examined earlier?

Here’s a point by point breakdown, starting with vanilla Kubernetes: its main advantage is that it is freely available and allows organisations to fully customise it and install it on any substrate, in a hybrid or multi-cloud setup. On the other hand, it has a steep learning curve and might require time and effort for customisation to compensate for the lack of out-of-the-box features. It can be the solution for organisations that are highly technical and able to build custom tooling as a competitive advantage.

PaaS Kubernetes is fairly easy to learn and use, as it comes with specific opinions, tools, and solutions. It also usually comes with a high licensing price and can lead to vendor lock-in and a lack of flexibility to move workloads to other clouds. Organisations that are not highly technical, are looking to quickly get up and running with Kubernetes, and have the budget to spend, can find PaaS Kubernetes suitable to their needs.

As far as public cloud Kubernetes goes, it is cheaper than PaaS, can be easy to set up, and comes with tools to address business needs, as well as networking and storage services from the cloud provider. The lack of workload portability and control over cluster configuration are this solution’s main trade-offs. Businesses, especially small ones, that look for a cost-sensitive solution, do not have the necessary expertise, and do not expect to need any unusual functionality from their K8s, might find this solution very appealing.

Managed Kubernetes is an offering that effectively allows organisations to consume Kubernetes as a service. It requires very few Kubernetes-specific skills, with the service provider enabling cloud and workload portability and taking care of all significant day two operations, like upgrades and security patching. This solution does not come cheap, but in specific scenarios it can be cheaper than a public cloud offering. The lack of flexibility and eventual vendor lock-in can be the biggest trade-offs here. Organisations at the beginning of their K8s journey, lacking the necessary skill set, that want to focus on delivering apps rather than managing infrastructure, can benefit from a managed Kubernetes enterprise solution.

Enterprise Kubernetes platforms are the most flexible in terms of supported substrates, enabling hybrid and multi-cloud deployments easily. Depending on the vendor they can be expensive, but they compensate by providing comprehensive pricing and a rich tooling ecosystem. This type of solution is best suited for medium to large companies that require complex deployments and have the necessary technical expertise to reap the full benefits of Kubernetes.

Canonical’s Kubernetes distributions


From our discussion so far, it is clear that enterprises have various choices to cater to their own specific needs. So how does Canonical’s work around Kubernetes contribute and add value to the Kubernetes landscape for businesses and organisations? First, a bit of context. Most people know Canonical from Ubuntu; what they might not know is that Ubuntu is at the core of all major public cloud Kubernetes distributions. Our experience with the public clouds allows us to be a trusted advisor for all businesses that are interested in Kubernetes. Our goal is to make organisations successful in their cloud native solutions, regardless of the Kubernetes they use. EKS, AKS and GKE all run on Ubuntu, so Canonical can help and provide support on any of these distributions. We also have our own distributions, Charmed Kubernetes and MicroK8s, through which we address multi-cloud use cases, from public cloud to the edge.

Charmed Kubernetes is an upstream, conformant, composable Kubernetes with a high degree of configurability and fine-grained service placement; if you want Kubernetes tailored to your business, Charmed Kubernetes is an enterprise Kubernetes platform that offers full lifecycle operations at a very competitive price. MicroK8s is a lightweight, zero-ops, conformant Kubernetes with sensible defaults for workstations, edge, and IoT appliances. Organisations looking for an easy Kubernetes that can be used standalone, clustered, or even embedded in edge or IoT solutions will find a perfect match in MicroK8s.

We also offer managed Kubernetes for businesses looking to quick-start their Kubernetes journey without breaking the bank, with the option of taking back control at a later point.

No matter what its nature and needs are, Canonical enables your business to achieve its purpose faster and better through its Kubernetes offerings and with the support of its team of experts who have helped build the most popular Kubernetes clouds out there.

Want to try it out for yourself? Install Canonical’s Kubernetes today.

Install Charmed Kubernetes

Install MicroK8s

31 March, 2021 08:24AM

David Tomaschik: Making: A Desk Clamp for Light Panels

On a little bit of a tangent from my typical security posting, I thought I’d include some of my “making” efforts.

Due to working from home for an extended period of time, I wanted to improve my video-conferencing setup somewhat. I have my back to windows, so the lighting is pretty bad, and I wanted to get some lights. I didn’t want to spend big money, so I got this set of Neewer USB-powered lights. It came with tripod bases, monopod-style stands, and ball heads to mount the lights.

The lights work well and are a great value for the money, but the stands are not as great. The tripods are sufficiently light that they’re easy to knock over, and they take more desk space than I’d really like. I have a lot of stuff on my desk and appreciate desk real estate, so I go to great lengths to minimize permanent fixtures on the desk. I have my monitors on monitor arms, my desk lamp on a mount, etc. I really wanted to minimize the space used by these lights.

I looked for an option to clamp to the desk and support the existing monopods with the light. I found a couple of options on Amazon, but they either weren’t ideal, or I was going to end up spending as much on the clamps as I did on the lamps. I wanted to see if I could do an alternative.

I have a 3D Printer, so almost every real-world problem looks like a use case for 3D printing, and this was no exception. I wasn’t sure if a 3D-printed clamp would have the strength and capability to support the lights, and didn’t think the printer could make threads small enough to fit into the base of the lamp monopods (which accept a 1/4x20 thread, just like used on cameras and other photography equipment).

I decided to see if I could incorporate a metal thread into a 3D printed part in some way. There are threaded inserts you can implant into a 3D print, but I was concerned about the strength of that connection, and would still need a threaded adapter to connect the two (since both ends would now be a “female” connector). Instead, I realized I could incorporate a 1/4x20 bolt into the print. I settled on 3/8” length so it wouldn’t stick too far through the print and a hex head so it wouldn’t rotate in the print, making screwing/unscrewing the item easier.

I designed a basic clamp shape with a 2” opening for the desk, and then used this excellent thread library to make a large screw in the device to clamp it to the desk from the bottom. I put an inset for the hex head in the top and a hole for the screw to fit through. When I printed my first test, I was pretty concerned that things wouldn’t fit or would break at the slightest torquing.

Clamp Sideview

Much to my own surprise, it just worked! The screw threads on the clamp side were a little bit tight at first, but they work quite well, and certainly don’t come undone over time. I’ve now had my light mounted on one of these clamps for a few months and no problems, but I would definitely not recommend a 3D printed clamp for something heavy or very valuable. (If I’m going to hold up a several thousand dollar camera, I’m going to mount it on proper mounts.)

Clamp On Table

Note on printing: If you want to 3D print this yourself, lay the clamp on its side on the print bed. Not only do you avoid needing support, you ensure that the layer lines run along the “spine” of the clamp, rather than having stress separate the layers.

Clamp Model

31 March, 2021 07:00AM

March 30, 2021

Ubuntu Blog: Ubuntu in the wild – 30th of March 2021

The Ubuntu in the wild blog post ropes in the latest highlights about Ubuntu and Canonical around the world on a bi-weekly basis. It is a summary of all the things that made us feel proud to be part of this journey. What do you think of it?


Industrial IoT solutions for Manufacturing

The recent pandemic forced manufacturers to re-evaluate the importance of IoT solutions in their day-to-day operations. Galem Kayo, Product Manager at Canonical, explores the key benefits of industrial IoT solutions and how they can effectively support real-time monitoring and quality control.

Read more on that here!

Ubuntu 21.04 powered by Linux kernel 5.11

The news is out: Ubuntu 21.04, Hirsute Hippo, is powered by Linux kernel 5.11! This will bring lots of hardware improvements and new features.

Read more on that here!

Install Virtualmin on Ubuntu 20.04

Looking for an option to replace cPanel on CentOS? This tutorial from TechRepublic will help you install Virtualmin on Ubuntu Server 20.04 in a couple of minutes.

Read more on that here!

The best Ubuntu apps: a Techradar selection

What apps should you get now to make the best of your new or existing Ubuntu setup? Techradar came up with a list of their favorite snaps and Ubuntu Software store goodies to help you take your distro to the next level.

Read more on that here!

Easy remote collaboration

Canonical, Collabora, and Nextcloud came together to allow Raspberry Pi users to easily turn their Pi 4 into a self-hosted content collaboration and document editing solution. This will make remote collaboration and working from home much easier.

Read more on that here!

Building an AI stack with Canonical

It can be hard for small companies to jump on the AI bandwagon, which could very well be attributed in part to the lack of common standards and cross-platform interoperability. To solve that, the AI Infrastructure Alliance (AIIA) is aiming to create a “canonical stack” for AI.

Read more on that here!

Bonus: Ubuntu in the wild and Mars dogs

For this week’s bonus, we wanted to share this cool presentation by the TEAM CoSTAR featuring an Ubuntu-powered pack of Mars dogs for autonomous Martian lava tube exploration:

30 March, 2021 07:09PM

Ubuntu Blog: What lies after LTS? Two years of Ubuntu 14.04 in ESM

Two years ago, we launched the Extended Security Maintenance (ESM) phase of Ubuntu 14.04, providing access to CVE patches through a free or paid Ubuntu Advantage for Infrastructure subscription. This phase extended the lifecycle of Ubuntu 14.04 LTS, released in April 2014, from the standard five years of an LTS release to a total of eight years, ending in April 2022. During the ESM phase we release security fixes for high and critical priority vulnerabilities for the most commonly used packages in the Ubuntu main and restricted archives. In this post, I would like to review and share our experience from the past two years of maintaining this release.

To date, in the lifecycle of Ubuntu 14.04 ESM, we have published 238 Ubuntu Security Notices (USNs), covering 574 CVEs ranging from high to low priority. The ensuing security updates protected from vulnerabilities with impacts ranging from remote code execution and privilege escalation to CPU hardware vulnerabilities. Our average time to resolve high-priority CVEs was 14 days.

The software vulnerabilities that stood out

Not all vulnerabilities are the same, and high publicity, or a lavish name, does not always imply high risk. Below I have unpacked the vulnerabilities which had the most impact on the 14.04 ESM lifecycle: SACK Panic, GRUB2 BootHole, Baron Samedit and the Exim remote code execution vulnerability.

SACK Panic – kernel

The Linux kernel, being a core part of any system’s attack surface, has the potential for high impact vulnerabilities. What could be worse than a vulnerability in its TCP stack that can be triggered remotely? SACK Panic is a set of Denial of Service (DoS) vulnerabilities that apply to both server and client systems and can be remotely triggered. It takes advantage of an integer overflow and other flaws in the Linux kernel’s TCP implementation of Selective Acknowledgement (SACK). We fixed it with USN-4017-2 in June 2019.

GRUB2 BootHole – GRUB2

UEFI Secure Boot establishes the integrity of the operating system starting from the UEFI BIOS, authenticating the subsequent software involved during boot. In our case that is GRUB2, which in turn authenticates the Linux kernel. That process prevents malware from becoming persistent by getting outside the operating system’s boundary. GRUB2 contained various vulnerabilities, including an integer overflow with heap-buffer overflow and a use-after-free, that would allow its authentication to be bypassed. A local attacker with administrative privileges or with physical access to the system could circumvent GRUB2 module signature checking, resulting in the ability to load arbitrary GRUB2 modules that have not been signed by a trusted authority, and hence bypass UEFI Secure Boot. It was fixed with USN-4432-1 in July 2020.

Baron Samedit – sudo

Sudo is one of the most common tools used by administrators. It allows performing privileged tasks while running as an unprivileged user, not only providing safety by allowing administrators to operate unprivileged by default, but also creating an audit trail of who performed what in an environment with multiple administrators. This vulnerability can cause an unauthorized privilege escalation by local users, due to incorrect memory handling in sudo (a heap buffer overflow), present in the codebase since 2011. We addressed it with USN-4705-2 in January 2021.

Exim remote code execution vulnerability

Exim is one of the main mail (SMTP) handling servers in Ubuntu. This vulnerability can result in remote command execution due to a mismatch between the encoder and the decoder of a part of the SMTP protocol. It was fixed with USN-4124-1 in September 2019.

The hardware vulnerabilities that stood out

Hardware, unlike software, cannot easily be updated or patched in the field. As such, when vulnerabilities are found within the hardware itself, it is the software on top of the hardware that must work around the issue. Hardware has a very limited degree of adjustment, e.g., via the drivers or with microcode. Here I recap the most prominent hardware vulnerabilities that we addressed during the first two years of the 14.04 ESM lifetime.

Intel Microarchitectural Data Sampling (MDS) vulnerabilities

This vulnerability on Intel CPUs causes a confidentiality breach. Data used by applications running on the same CPU may be exposed to a malicious process that is executing on the same CPU core. This is a complex attack which uses a speculative execution side-channel, and cannot be easily used for targeted attacks (see our wiki for the more detailed review). This vulnerability is mitigated with kernel, qemu, libvirt and microcode updates released with USN-3977-1, USN-3977-3, USN-3983-1, USN-3982-2, USN-3981-2, USN-3985-2, USN-3977-2, USN-3978-1 in May 2019. 

The TSX Asynchronous Abort

The TSX Asynchronous Abort (TAA) vulnerability allows an attacker to access the CPU’s microarchitectural buffers through the Intel® Transactional Synchronization Extensions (Intel® TSX). This is again a data confidentiality vulnerability that can cause memory exposure between userspace processes, between the kernel and userspace, between virtual machines, or between a virtual machine and the host environment. The issue is mitigated by a combination of updated processor microcode and Linux kernel updates, released with USN-4187-1 and USN-4186-2 in November 2019.

Intel graphics (i915) vulnerabilities

These are two vulnerabilities that affect Intel graphics processors. The first (CVE-2019-0154) can allow a local, user-triggered Denial of Service (DoS) attack: a system hang can be caused by an unprivileged user performing a read from GT memory mapped input output (MMIO) when the processor is in certain low power states.

The second vulnerability (CVE-2019-0155) is a privilege escalation and a breach of data confidentiality. The graphics processors allowed unprivileged users to write and read kernel memory, resulting in possible privilege escalation. 

Both vulnerabilities are mitigated by updates to the Intel graphics driver as part of Linux kernel updates, released with USN-4187-1, USN-4186-2 in November 2019.

Taking another look

The most impactful software vulnerabilities solved in 14.04 ESM took advantage of well known techniques such as heap-buffer overflows, use-after-free, integer overflows as well as logic errors. While there are mitigations and defenses in play such as Non-Executable Memory available in modern CPUs, as well as the heap-protector, ASLR and others in Ubuntu 14.04, these raise the bar for the attacker, but do not provide complete protection, and attacks only get better. Hence regular vulnerability patching is a key element in reducing the risk of a breach.

Taking a step back on the hardware vulnerabilities, we see that CPUs are no longer seen as black boxes, and we see a steady stream of vulnerabilities identified and mitigated. In the presence of these vulnerabilities, shared hosting, e.g., on a cloud, cannot always guarantee the confidentiality of data processed by the shared CPUs. It is imperative to note that the necessary privacy on the cloud comes not only from the hardware and operating system integration, but also requires a well maintained system with regular vulnerability patching in place.

The packages with most security vulnerabilities

Although there are more than 2000 packages in the Ubuntu 14.04 ESM main repositories, some packages stood out in our vulnerability fixes. This does not necessarily imply a design weakness; it can also be attributed to their popularity and importance to the open source ecosystem, which attracts researchers’ attention. The graph below displays the number of security fixes in the top packages that we patched.

[Graph: number of security fixes in the most frequently patched Ubuntu 14.04 ESM packages]

To upgrade or not upgrade?

Transitioning to the latest operating system is important for performance, hardware enablement, software fixes and technology enablement benefits. However, upgrading is not a simple, quick or inexpensive process. Enterprise solutions combine software from a variety of teams within an organization, and in most cases there is an extended supply chain, involving software from 3rd party vendors, who in turn may have their own software vendors.

Such complex scenarios result in a dependency on software stacks (e.g., Java, Python) with certain properties that, in the newer system, have been deprecated, replaced, or slightly changed in behaviour. The upgrade process in that case becomes a change management process involving risk analysis, stakeholder communication and possibly the upgrade of existing solutions, in addition to the actual operating system upgrade.

In fact, the main driver of IT budget increases during the last three years has been the need to update outdated infrastructure, and upgrading is part of everyone’s workflow. In the meantime, Ubuntu ESM serves as a vessel to give you the flexibility to schedule infrastructure upgrades while also ensuring the security and continuity of your systems.

Learn more on how other customers leveraged Ubuntu 14.04 ESM in the case studies below.

ESM maintains Interana’s system security while upgrading ›

TIM ensures system security and client confidence with ESM ›

30 March, 2021 03:10PM

Colin King: A C for-loop Gotcha

The C infinite for-loop gotcha is one of the less frequent issues I find with static analysis, but I feel it is worth documenting because it is obscure but easy to do.

Consider the following C example:

Since i is an 8-bit integer, it will wrap around to zero when it reaches the maximum 8-bit value of 255, and so we end up with an infinite loop if the upper limit of the loop, n, is 256 or more.

The fix is simple: always ensure the loop counter is at least as wide as the type of the maximum limit of the loop. In this example, variable i should be a uint32_t type.

I've seen this occur in the Linux kernel a few times. Sometimes it is because the loop counter is being passed into a function call that expects a specific type, such as a u8 or u16. On other occasions I've seen a u16 (or short) integer being used, presumably because it was expected to produce faster code; however, most commonly 32 bit integers are just as fast as (or sometimes faster than) 16 bit integers for this kind of operation.

30 March, 2021 11:37AM by Colin Ian King (noreply@blogger.com)

Alan Pope: Diamond Rio PMP300

My loft is a treasure trove of old crap. For some reason I keep a bunch of aged useless junk up there. That includes the very first MP3 player I owned. Behold, the Diamond Rio PMP 300. Well, the box, in all its ’90s artwork glory. Here’s the player. It’s powered by a single AA battery for somewhere around 8 hours of playback. It’s got 32MB (yes, MegaBytes) of on-board storage.

30 March, 2021 11:00AM

Ubuntu Blog: Windows containers on Kubernetes with MicroK8s

Kubernetes orchestrates clusters of machines to run container-based workloads. Building on the success of the container-based development model, it provides the tools to operate containers reliably at scale. The container-based development methodology is popular outside just the realm of open source and Linux though. Exactly the same benefits of containers – low resource overhead, dependency management, faster development cycles, portability and consistent operation – apply to applications targeting the Windows OS also. In this article, we will present the steps to deploy Windows workloads on Kubernetes, with MicroK8s Windows workers.

Kubernetes on Windows

There are no plans to develop a Windows-only Kubernetes cluster. The control plane components (kube-apiserver, kube-scheduler, kube-controller-manager) will, for the foreseeable future, only run on a Linux OS. However, it is possible to run the services required for a Kubernetes node on Windows.

This means that a worker node can be used to run Windows workloads, while the control plane runs on Linux – a hybrid cluster. Production-level support for running a Windows node was introduced with version 1.14 of Kubernetes back in March 2019, so in terms of deployment of Windows containers, operators can expect exactly the same features for Windows workloads as they do for Linux ones.

The Kubernetes components that are required for a worker node are:

  • kubelet: this is the Kubernetes agent which manages and reports on the running containers in a pod.
  • kube-proxy: a network proxy component which maintains the network rules and allows pods to communicate within or externally to the rest of the cluster
  • a container runtime: the executable responsible for running individual containers. Several different container runtimes are now supported by Kubernetes – Docker, CRI-O, containerd (and, through containerd, many others such as the Windows-specific runhcs) – as well as any other runtime which supports the Kubernetes Container Runtime Interface (CRI)
Kubernetes architecture diagram

Kubernetes cluster components (image CC BY 4.0 The Kubernetes Authors).

It is of course possible to manually install these components on a Windows machine and construct a node, then integrate it into an existing cluster. There are easier ways to achieve this though.

MicroK8s and Calico

For the Linux-based part of a hybrid Kubernetes cluster, MicroK8s is a compelling choice. MicroK8s is a minimal implementation of Kubernetes which can run on the average laptop, yet has production grade features. It’s great for offline development, prototyping and testing, and if required you can also get professional support for it through Ubuntu Advantage.

The most compelling feature of MicroK8s for this use case is that you can run it sort-of-almost-natively on Windows 10! For development work and even production use cases, this is hugely useful – your hybrid cluster can reside on one physical Windows machine. MicroK8s on Windows works by making use of Multipass – a neat way of running a virtual Ubuntu machine on Windows. The MicroK8s installer for Windows doesn’t require any knowledge of Multipass or even virtual machines though; it sets everything up with just a few options from the installer.

To fetch the current installer and see the (simple!) install instructions for MicroK8s on Windows, check out the MicroK8s website.

Whether using MicroK8s on Windows or a separate Linux machine, the next thing to consider is networking. Calico has been the default CNI (Container Network Interface) for MicroK8s since the 1.19 release. The CNI handles a number of networking requirements:

  •  Container-to-container communications
  •  Pod-to-Pod communications
  •  Pod-to-Service communications
  •  External-to-Service communications

The popularity of Calico has been building for some time over simpler CNIs, as it supports more useful features (e.g. Border Gateway Protocol [#ref]) and is also available in a supported enterprise version.

Usefully, Calico already bundles the Windows executables for this CNI in an installer script that also fetches the other components required for a Windows node, making installation and setup that much easier.  The Calico website has complete instructions for this. This contains not only links to the installer script but also a detailed rationale on how the setup works. If you are just interested in getting started with adding a Windows worker with MicroK8s there are more concise and specific instructions in the MicroK8s documentation.

Scheduling Windows containers on hybrid Kubernetes

The final step of this journey is to actually deploy some Windows containers. With a hybrid cluster, you could potentially be running Windows or Linux based containers and Kubernetes will need some way of knowing which nodes to deploy them on.  

A pod specification for a Windows workload should contain a nodeSelector field identifying that it is to run on Windows. The recommended additional workflow is to use the ‘taints and tolerations’ feature of Kubernetes to explicitly refuse deployments on the Windows nodes to any pod which hasn’t specifically requested them. There is a full explanation of this workflow in the upstream Kubernetes documentation.
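As a sketch of how the two mechanisms combine, a pod spec along these lines targets a Windows node via the standard kubernetes.io/os node label, and tolerates an assumed os=windows:NoSchedule taint (the pod name, taint key and container image here are illustrative, not prescribed by MicroK8s):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-example
spec:
  nodeSelector:
    kubernetes.io/os: windows   # schedule only onto Windows nodes
  tolerations:
  - key: "os"                   # assumes the Windows node was tainted os=windows:NoSchedule
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
```

The matching taint could be applied with something like `kubectl taint nodes <node-name> os=windows:NoSchedule`, so that only pods carrying both the toleration and the nodeSelector land on the Windows worker.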

More from MicroK8s

There is plenty more you can do with a MicroK8s cluster – be sure to check out the documentation for MicroK8s add-ons so you can enable the dashboard, ingress controllers, Kubeflow and more.

30 March, 2021 07:00AM