October 28, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Design and Web team summary – 28th October 2020

The web team here at Canonical runs two-week iterations. This iteration was slightly different, as we began a new cycle; a cycle represents six months of work, so we spent the first week planning and scheduling the cycle's goals. The following are highlights of our completed work from the previous week.

Meet the team


Photo credit: Claudio Gomboli

Hello, I am Peter.  I struggle to define myself these days, especially as the past year has brought so much change to our lives.  I will start with the easy stuff. I am an American (Wisconsin and New York City) living outside London and working from home these days. I am married and a recent empty-nester as my two boys are away at university. 

I have been working on the web since 1995, doing everything from design to code, but mostly as a product manager, editor and now running the Canonical web team. I have worked in a few industries: financial information services, IT research, children’s and educational publishing, but the nine years at Canonical have been the most memorable.

Outside work I mostly read, jog, hike and garden.

Web squad

Our Web Squad develops and maintains most of Canonical’s promotional sites like ubuntu.com, canonical.com and more.

Ubuntu 20.10 “Groovy Gorilla” release


Ubuntu is released twice a year, in April and October. There is always a lot of work to get right and a fair bit of pressure to make it perfect, as we get a lot of visitors coming to learn about and download new versions of Ubuntu. The 20.10 ‘Groovy Gorilla’ release was no exception. We updated all the download pages, added a ‘What’s new in 20.10’ strip to the desktop and server sections, and created a new homepage takeover.

Have you downloaded 20.10 yet?

All new Raspberry Pi pages


This release, we announced Ubuntu Desktop on the Raspberry Pi. To support the release, we created three new pages about Ubuntu on Raspberry Pi – an overview page, an Ubuntu Desktop page and a Server page for Raspberry Pi with the Ubuntu CLI.

Check out the new Raspberry Pi pages

Brand

The Brand team develops our design strategy and creates the look and feel for the company across many touch-points, including web, documents, exhibitions, logos and video.

Creating assets to support the 20.10 release


View all the options on our team Instagram account.


MAAS

The MAAS squad develops the UI for the MAAS project.

QA and debugging for the release of 2.9

We’ve been primarily focused on QA for the upcoming release of MAAS 2.9. Many bugs have been fixed, and we made a significant performance improvement to loading the machine details view on large-scale MAAS installations.

Machine details React migration

Work has begun on migrating the machine details views to React, which will allow us to iterate faster on new features, and improve UI performance.

Machine details events and logs consolidation design

In the course of migrating the machine details to React, we’ll be improving the logs experience by consolidating the logs and events tabs and fixing some long-standing confusion.


JAAS

The JAAS squad develops the UI for the JAAS store and Juju Dashboard projects.

The web CLI

Coming in Juju 2.9, the Juju Dashboard will include access to the Juju web CLI: a UI that allows you to run a subset of the Juju model commands right from within your browser.


The beta release of this feature is available to those running the Juju 2.9 beta snap; after bootstrapping, run juju upgrade-dashboard --gui-stream=devel.
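A rough end-to-end sequence looks like the following; note that the snap channel name and the use of the local LXD cloud (“localhost”) are assumptions for illustration, not taken from the post:

sudo snap install juju --classic --channel=2.9/beta
juju bootstrap localhost
juju upgrade-dashboard --gui-stream=devel
juju dashboard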

Defining the data structure of the future dashboard

We are iterating our recently implemented layout for the Juju Dashboard, improving the information architecture and the interactions of the flow of the app.

Users will be able to drill down into the model view even further, from different points of view: apps, integrations, machines, actions and networking. This work will enable the layout and the entry points for other features coming this cycle.

Vanilla

The Vanilla squad designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

In-depth discussions on fixes for hiding table cells

One of the issues that we worked on during our regular maintenance involved updating a utility for hiding elements on the page to fix the issues when hiding table cells.


What may seem to be a small bugfix required quite an in-depth discussion about various aspects of the issue. We wanted to make sure we didn’t introduce any breaking changes for existing uses of the utility in different patterns, and we discussed where within the framework these changes should be implemented, how potential new utilities should be named, and so on.
While such discussions take time and slow down the review process, we know they are important because they always lead us to the best possible solutions and allow us to view the issues from different perspectives (for example taking into account future maintenance, code responsibility and backwards compatibility).


If you are interested in having a little sneak peek into our process, feel free to have a look at the discussion on the pull request.

Wrapping up the accessibility work

We’ve been finalising the accessibility work from the last couple of weeks and preparing a summary blog post on that topic that will be ready for publishing soon.

Snapcraft and Charmhub

The Snapcraft team works closely with the Store team to develop and maintain the Snap Store site and Charmhub site.

CLI Guidelines on discourse


Last cycle we worked with various engineering teams to begin defining guidelines for the design of both the input and output of CLI commands. The first set of guidelines is now available on the Ubuntu discourse, awaiting comments, suggestions and any other feedback. We will be working on expanding the guidelines over the next few months by looking at more complex interactions and issues.

Actions view for Charms


We have built the actions tab in charm details pages on Charmhub, listing available actions and their parameters.

History tab on a Charm details page on Charmhub


The history details page was built using a new “Show more” pattern implemented in the “Modular Table” React component.

Updated the Juju discourse navigation


A new Canonical customised discourse navigation was implemented on https://discourse.juju.is/. It consists of the Canonical global navigation, main navigation and secondary navigation. This pattern will be implemented on all our discourses over the next few weeks.

Graylog dashboards

Graylog is a tool that we use for centralised log management; it’s built on open standards for capturing, storing and enabling real-time analysis of logs. During this iteration, we created dashboards to track the performance and usage of our services.


User testing on the charm details pages

We performed some user testing sessions on the current live pages on charmhub.io and on some of the designs for the upcoming detail pages for charms. From the feedback, we realised that the proposed “Libraries” tab is a point of confusion for many users who are not familiar with this new concept introduced with operators. Because of this, we have now created a new section on the page to introduce the concept and help users become familiar with the use of libraries.


Follow the team on Instagram


Ubuntu designers on Instagram

With ♥ from Canonical web team.

28 October, 2020 09:01AM

October 27, 2020

Ubuntu Blog: Snap speed improvements with new compression algorithm!

Security and performance are often mutually exclusive concepts. A great user experience is one that manages to blend the two in a way that does not compromise on robust, solid foundations of security on one hand, and a fast, responsive software interaction on the other.

Snaps are self-contained applications, with layered security, and as a result, sometimes, they may have reduced perceived performance compared to those same applications offered via traditional Linux packaging mechanisms. We are well aware of this phenomenon, and we have invested significant effort and time in resolving any speed gaps, while keeping security in mind. Last year, we talked about improved snap startup times following fontconfig cache optimization. Now, we want to tell you about another major milestone – the use of a new compression algorithm for snaps offers 2-3x improvement in application startup times!

LZO and XZ algorithms

By default, snaps are packaged as a compressed, read-only squashfs filesystem using the XZ algorithm. This results in a high level of compression but consequently requires more processing power to uncompress and expand the filesystem for use. On the desktop, users may perceive this as “slowness” – the time it takes for the application to launch. This is far more noticeable on first launch, before the application data is cached in memory. Subsequent launches are fast and, typically, there’s little to no difference compared to traditionally packaged applications.

To improve startup times, we decided to test a different algorithm – LZO – which offers lesser compression, but needs less processing power to complete the action.
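As an illustration (not part of the original post), you can check which algorithm an existing snap was compressed with, and experiment with repacking it, using the standard squashfs-tools; the file name below is a placeholder:

unsquashfs -s mysnap.snap                                       # prints the superblock, including the compression algorithm
unsquashfs -d squashfs-root mysnap.snap                         # unpack the filesystem
mksquashfs squashfs-root mysnap-lzo.snap -comp lzo -noappend    # repack it with LZO compression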

As a test case, we chose the Chromium browser (stable build, 85.X). We believe this is a highly representative case, for several reasons. One, the browser is a ubiquitous (and popular) application, with frequent usage, so any potential slowness is likely to be noticeable. Two, Chromium is a relatively large and complex application. Three, it is not part of any specific Linux desktop environment, which makes the testing independent and accurate.

For comparison, the XZ-compressed snap weighs ~150 MB, whereas the one using the LZO compression is ~250 MB in size.

Test systems & methodology

We decided to conduct the testing on a range of systems (2015-2020 laptop models), including HDD, SSD and NVMe storage, Intel and Nvidia graphics, as well as several operating systems, including Kubuntu 18.04, Ubuntu 20.04 LTS, Ubuntu 20.10 (pre-release at the time of writing), and Fedora 32 Workstation (just before Fedora 33 release). We believe this offers a good mix of hardware and software, allowing us a broader understanding of our work.

  • System 1 with 4-core/8-thread Intel(R) i5(TM) processor, 16GB RAM, 500GB SSD, and Intel(R) UHD 620 graphics, running Kubuntu 18.04.
  • System 2 with 4-core Intel(R) i3(TM) processor, 4GB RAM, 1TB 5,400rpm mechanical hard disk, and Intel(R) HD 440 graphics, running Ubuntu 20.04 LTS.
  • System 3 with 4-core Intel(R) i3(TM) processor, 4GB RAM, 1TB 5,400rpm mechanical hard disk, and Intel(R) HD 440 graphics, running Fedora 32 Workstation.
  • System 4 with 4-core/8-thread Intel(R) i7(TM) processor, 64GB RAM, 1TB NVMe hard disk, and Nvidia GM204M (GeForce GTX 980M), running Ubuntu 20.10.
Platform        System 1             System 2            System 3            System 4
Snapd version   2.46.1+18.04         2.47                2.45.3.1-1.fc32     2.47.1+20.10
Kernel          4.15.0-118-generic   5.4.0-48-generic    5.8.13-200.fc32     5.8.0-21-generic
DE              Plasma               GNOME               GNOME               GNOME

On each of the selected systems, we examined the time it takes to launch and display the browser window for:

  • Native package (DEB or RPM) where available (Kubuntu 18.04 and Fedora 32).
  • Snap with XZ compression (all systems).
  • Snap with LZO compression (all systems).

We compared the results in the following way:

  • Cold start – There is no cached data in the memory.
  • Hot start – The browser data is cached in the memory.

Results!

We measured the startup time for the Chromium browser with a new, unused profile. Please note that these results are highly indicative, but there is always a degree of variance in interactive usage measurements, which can result from things like your overall system state, the current system load due to other, background activities, disk usage, your browser profile and add-ons, and other factors.

Chromium startup time, cold/hot (s):

                Native package (DEB/RPM)   Snap with XZ compression   Snap with LZO compression
System 1        1.7 / 0.6                  8.1 / 0.7                  3.1 / 0.6
System 2        NA                         18.4 / 1.2                 11.1 / 1.2
System 3        15.3 / 1.3                 34.9 / 1.1                 10.1 / 1.3
System 4        NA                         10.5 / 1.4                 2.6 / 0.9
  • The results in the table are average values over multiple runs. The standard deviation is ~0.7 seconds for the cold startups, and ~0.1 seconds for the hot startups.
  • The use of the LZO compression offers 40-74% cold startup improvements over the XZ compression.
  • On the Kubuntu 18.04 system, which still has Chromium available as a DEB package, the LZO-compressed snap now offers near-identical startup performance!
  • On Fedora 32 Workstation, the LZO-compressed snap cold startup is faster than the RPM package by a rather respectable 33% (actual ~5.0 seconds difference).
  • Hot startups are largely independent of the packaging format selection.

If you’d like to test for yourself…

You may be interested in profiling the startup time of your browser – or any application for that matter. To that end, we’ve compiled a script, which you can download (link to a GitHub Gist), make the file executable, and run on your system. The script allows you to compare the startup time of any native-packaged software with snaps, and is designed to work with any package manager, so you can use this on Ubuntu, Fedora, openSUSE, Manjaro, etc.

To prevent any potential data loss, the functions are commented out in the main section of the script, so you will need to uncomment them manually before the script does anything.
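If you only want a rough manual measurement rather than the full script, a minimal sketch is shown below. It assumes xdotool is installed and that the browser’s window class is “chromium” – both assumptions on my part – and approximates a cold start by dropping the kernel’s page cache first:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches                          # drop cached file data to approximate a cold start
start=$(date +%s.%N)
snap run chromium &                                                 # or launch the natively packaged browser instead
xdotool search --sync --onlyvisible --class chromium > /dev/null    # block until a browser window appears
end=$(date +%s.%N)
echo "Startup time: $(echo "$end - $start" | bc) seconds"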

Summary

We are happy with the improvements that the LZO compression introduces, as it allows users to have a faster, more streamlined experience with their snaps. We can now examine the optimal way to introduce and roll out similar changes with other snaps.

And this is by no means the end of the journey! Far from it. We are working on a whole range of additional improvements and optimizations. When it comes to size, you can use content snaps and stage snaps to reduce the size of your snaps, as well as utilize snapcraft extensions. We’re also working on a fresh set of font cache fixes, and there’s a rather compelling story on this topic, as well, which we will share soon. In the near future, we intend to publish a guide that helps developers trim down their snaps and reduce their overall size, all of which can help create leaner, faster applications.

If you have any comments or suggestions on this topic, we’d like to hear them. You can tell us about your own findings on snap startup performance, and point us to any glaring issues or problems you believe should be addressed, including any specific snaps you think should be profiled and optimized. We are constantly working on improving the user experience, and we take note of any feedback you may have. Meanwhile, enjoy your snappier browsing!

Photo by Ralph Blvmberg on Unsplash.

27 October, 2020 04:20PM

hackergotchi for Qubes

Qubes

Fedora 31 approaching EOL

Fedora 33 was released today, 2020-10-27. According to the Fedora Release Life Cycle, this means that Fedora 31 is scheduled to reach EOL (end-of-life) in approximately four weeks, around 2020-11-24.

We strongly recommend that all Qubes users upgrade their Fedora 31 TemplateVMs and StandaloneVMs to Fedora 32 or higher before Fedora 31 reaches EOL. We provide step-by-step upgrade instructions for upgrading Fedora TemplateVMs. For a complete list of TemplateVM versions supported for your specific version of Qubes, see Supported TemplateVM Versions.

We also provide a fresh Fedora 32 TemplateVM package through the official Qubes repositories, which you can install in dom0 by following the standard installation instructions.
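For example, a fresh Fedora 32 TemplateVM can typically be installed from a dom0 terminal with the following command (assuming the default template repository):

sudo qubes-dom0-update qubes-template-fedora-32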

After upgrading your TemplateVMs, please remember to switch all qubes that were using the old template to use the new one.

Please note that no user action is required regarding the OS version in dom0. For details, please see our note on dom0 and EOL.

27 October, 2020 12:00AM

October 26, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 654

Welcome to the Ubuntu Weekly Newsletter, Issue 654 for the week of October 18 – 24, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

26 October, 2020 09:47PM

October 25, 2020

hackergotchi for SparkyLinux

SparkyLinux

FFQueue

There is a new application available for Sparkers: FFQueue

What is FFQueue?

FFQueue is (yet another) graphical user interface for FFMpeg, with comprehensive support for both the basic features and the more advanced features like filtergraphs. FFQueue makes it easy to create multiple jobs and process them as a single queue.
FFQueue can sort out the most significant output from FFMpeg, display it in the graphical console and save it to an HTML-based (color-coded) logfile for easy review once the queue has been processed.

Installation (Sparky stable & testing amd64):

sudo apt update
sudo apt install ffqueue

or via APTus-> VideoTool-> FFQueue icon.

FFQueue

Copyright (C) Torben Bruchhaus
License: GNU GPL v3
Web: ffqueue.bruchhaus.dk

 

25 October, 2020 05:01PM by pavroo

October 24, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Announcing the new Ubuntu Community Council

Thanks to all the Ubuntu Members that voted in the election, I am proud to announce our new Ubuntu Community Council!

The full results of the election can be seen here but our winners are:

  • Walter Lapchynski
  • Lina Elizabeth Porras Santana
  • Thomas Ward
  • José Antonio Rey
  • Nathan Haines
  • Torsten Franz
  • Erich Eichmeyer

Congratulations to all of them! They will serve on the Council for the next two years.

Should there be any pressing business that the Council should deal with, especially given the long absence of the Council, please contact the Council mailing list at community-council@lists.ubuntu.com.

Again, thanks to everyone involved for making Ubuntu and its community better!

24 October, 2020 07:37PM

October 23, 2020

Lubuntu Blog: Lubuntu 20.10 (Groovy Gorilla) Released!

Thanks to all the hard work from our contributors, Lubuntu 20.10 has been released! With the codename Groovy Gorilla, Lubuntu 20.10 is the 19th release of Lubuntu, the fifth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 20.10 will be supported until July 2021. Our main focus will be on […]

23 October, 2020 01:55AM

October 22, 2020

hackergotchi for Ubuntu

Ubuntu

Ubuntu 20.10 (Groovy Gorilla) released

Codenamed “Groovy Gorilla”, 20.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

The Ubuntu kernel has been updated to the 5.8 based Linux kernel, and our default toolchain has moved to gcc 10 with glibc 2.32. Additionally, there is now a desktop variant of the Raspberry Pi image for Raspberry Pi 4 4GB and 8GB.

Ubuntu Desktop 20.10 introduces GNOME 3.38, the fastest release yet with significant performance improvements delivering a more responsive experience. Additionally, the desktop installer includes the ability to connect to Active Directory domains.

Ubuntu Server 20.10 integrates recent innovations from key virtualization and infrastructure projects like QEMU 5.0, libvirt 6.6 and OpenStack Victoria. Ubuntu Server now ships Telegraf, the metrics collecting agent that together with Prometheus and Grafana form the basis of a strong and reliable logging, monitoring and alerting solution that can be deployed on Ubuntu systems.

The newest Ubuntu Budgie, Kubuntu, Lubuntu, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu are also being released today. More details can be found for these at their individual release notes under the Official Flavours section:

https://discourse.ubuntu.com/t/groovy-gorilla-release-notes

Maintenance updates will be provided for 9 months for all flavours releasing with 20.10.

To get Ubuntu 20.10

In order to download Ubuntu 20.10, visit:

https://ubuntu.com/download

Users of Ubuntu 20.04 LTS will be offered an automatic upgrade to 20.10 if they have selected to be notified of all releases rather than just LTS upgrades. For further information about upgrading, see:

https://ubuntu.com/download/desktop/upgrade
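For illustration (not part of the announcement), on Ubuntu 20.04 the upgrade offer is controlled by the Prompt setting in /etc/update-manager/release-upgrades; switching it from lts to normal and then running the release upgrader typically looks like this:

sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades
sudo do-release-upgrade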

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the release notes, which document caveats, workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://discourse.ubuntu.com/t/groovy-gorilla-release-notes

Find out what’s new in this release with a graphical overview:

https://ubuntu.com/desktop
https://ubuntu.com/desktop/features

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net
https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
https://ubuntuforums.org
https://askubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

https://discourse.ubuntu.com/contribute

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, IoT, cloud, and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

https://ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

https://ubuntu.com

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Oct 22 17:39:05 UTC 2020 by Brian Murray, on behalf of the Ubuntu Release Team

22 October, 2020 10:10PM by guiverc

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Ep 113 – Cirurgia

Have you voted for Podcast Ubuntu Portugal on podes.pt yet? No? Then read no further: go to https://podes.pt/votar/, type Podcast Ubuntu Portugal and click VOTAR. Don’t fail the arithmetic, and repeat as many times as you can.

You know the drill: listen, subscribe and share!

  • https://forum.pine64.org/showthread.php?tid=11772
  • https://forum.snapcraft.io/t/call-for-suggestions-featured-snaps-friday-9th-october-2020/20384
  • https://github.com/ubports/ubports-installer/releases
  • https://joplinapp.org/
  • https://snapcraft.io/joplin-james-carroll
  • https://snapstats.org/snaps/flameshot
  • https://twitter.com/m_wimpress/status/1314315931468914689
  • https://twitter.com/m_wimpress/status/1314497286425268224
  • https://twitter.com/stgraber/status/1314625640629448705
  • https://twitter.com/thefxtec/status/1314550781509541889
  • https://twitter.com/thepine64/status/1314911896177389570
  • https://ubuntu.com/blog/how-to-make-snaps-and-configuration-management-tools-work-together
  • https://www.meshtastic.org/
  • https://www.npmjs.com/package/android-tools-bin
  • https://www.pine64.org/2020/10/15/update-new-hacktober-gear/
  • https://www.youtube.com/channel/UCuP6xPt0WTeZu32CkQPpbvA/
  • https://podes.pt/
  • Support

    You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
    And you can get all of that for 15 dollars, or different parts depending on whether you pay 1 or 8.
    We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

    If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

    Attribution and licences

    This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

    This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

22 October, 2020 09:45PM

Kubuntu General News: Kubuntu 20.10 Groovy Gorilla released

KDE Plasma-Desktop

The Kubuntu community are delighted to announce the release of Kubuntu 20.10 Groovy Gorilla. For this release Kubuntu ships with Plasma 5.19.5 and Applications 20.08. The desktop carries the fresh new look and gorgeous wallpaper design selected by the KDE Visual Design Group.

 

Cloud Ready

With the rapid growth in cloud-native computing, the Kubuntu community recognises that Kubuntu users need access to cloud and container technologies.
Kubuntu 20.10 also includes LXD 4.6 and MicroK8s 1.19 for resilient micro clouds, small clusters of servers providing VMs and Kubernetes.

Kubuntu 20.10 includes KDE Applications 20.08.

Dolphin, KDE’s file explorer, for example, adds previews for more types of files and improvements to the way long names are summarized, allowing you to better see what each file is or does. Dolphin also improves the way you can reach files and directories on remote machines, making working from home a much smoother experience. It also remembers the location you were viewing the last time you closed it, making it easier to pick up from where you left off.

For those of you into photography, KDE’s professional photo management application, digiKam, has just released its version 7.0.0. The highlight here is the smart face recognition feature that uses deep learning to match faces to names and even recognizes pets.

If it is the night sky you like photographing, you must try the new version of KStars. Apart from letting you explore the Universe and identify stars from your desktop and mobile phone, new features include more ways to calibrate your telescope and get the perfect shot of heavenly bodies.

And there’s much more: KDE’s terminal emulators Konsole and Yakuake; Elisa, the music player that looks great; the text editor Kate; KDE’s image viewer Gwenview; and literally dozens of other applications are all updated with new features, bugfixes and improved interfaces to help you become more productive and make the time you spend with KDE software more pleasurable and fun.

22 October, 2020 09:15PM

Corey Bryant: OpenStack Victoria for Ubuntu 20.10 and Ubuntu 20.04 LTS

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Victoria on Ubuntu 20.10 (Groovy Gorilla) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Victoria release can be found at:  https://www.openstack.org/software/victoria.

To get access to the Ubuntu Victoria packages:

Ubuntu 20.10

OpenStack Victoria is available by default for installation on Ubuntu 20.10.

Ubuntu 20.04 LTS

The Ubuntu Cloud Archive for OpenStack Victoria can be enabled on Ubuntu 20.04 by running the following command:

sudo add-apt-repository cloud-archive:victoria
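After enabling the archive, a typical next step (shown here purely as an example, not taken from the announcement) is to refresh the package index and install or upgrade the OpenStack components you need, for instance:

sudo apt update
sudo apt install nova-compute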

The Ubuntu Cloud Archive for Victoria includes updates for:

aodh, barbican, ceilometer, cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, ovn-octavia-provider, panko, placement, sahara, sahara-dashboard, sahara-plugin-spark, sahara-plugin-vanilla, senlin, swift, vmware-nsx, watcher, watcher-dashboard, and zaqar.

For a full list of packages and versions, please refer to:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/victoria_versions.html

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Victoria. Enjoy and see you in Wallaby!

Corey

(on behalf of the Ubuntu OpenStack Engineering team)

22 October, 2020 08:11PM

Ubuntu Studio: Ubuntu Studio 20.10 Released

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 20.10, code-named “Groovy Gorilla”. This marks Ubuntu Studio’s 28th release. This release is a regular release, and as such it is supported for nine months until July 2021.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 20.10 from our download page.

If you find Ubuntu Studio useful, please consider making a contribution.

Upgrading

Due to the change in desktop environment this release, direct upgrades to Ubuntu Studio 20.10 are not supported. We recommend a clean install for this release:

  1. Backup your home directory (/home/{username})
  2. Install Ubuntu Studio 20.10
  3. Copy the contents of your backed-up home directory to your new home directory.

New This Release

The biggest new feature is the switch of desktop environment to KDE Plasma. We believe this will provide a more cohesive and integrated experience for many of the applications that we include by default. We have previously outlined our reasoning for this switch as part of our 20.04 LTS release announcement.

This release includes Plasma 5.19.5. If you would like a newer version, the Kubuntu Backports PPA may include a newer version of Plasma when ready.
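If you do choose to use it, the backports PPA can be added in the usual way (shown as an example; check the PPA page first to see what it currently provides):

sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt update
sudo apt full-upgrade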

We are excited to be a part of the KDE community with this change, and have embraced the warm welcome we have received.

You will notice that our theming and layout of Plasma looks very much like our Xfce theming. (Spoiler: it’s the same theme and layout!)

Audio

Studio Controls replaces Ubuntu Studio Controls

Ubuntu Studio Controls has been spun off into an independent project called Studio Controls. It contains much of the same functionality but is also available in many more projects than Ubuntu Studio. Studio Controls remains the easiest and most straightforward way to configure the JACK Audio Connection Kit and provides easy access to tools to help you with using it.

Ardour 6.3

We are including the latest version of Ardour, version 6.3. This version has plenty of new features outlined at the Ardour website, but comes with one caveat:

Projects imported from Ardour 5.x are permanently changed to the new format. As such, plugins, if they are not installed, will not be detected and will result in a “stub” plugin. Additionally, Ardour 6 includes a new Digital Signal Processor, meaning projects may not sound the same. If you do not need the new functionality of Ardour 6, do not upgrade to Ubuntu Studio 20.10.

Other Notable Updates

We’ve added several new audio plugins this cycle, most notably:

  • Add64
  • Geonkick
  • Dragonfly Reverb
  • Bsequencer
  • Bslizr
  • Bchoppr

Carla has been upgraded to version 2.2. Full release announcement at kx.studio.

Video

OBS Studio

Our inclusion of OBS Studio has been praised by many. Our goal is to become the #1 choice for live streaming and recording, and we hope that including OBS Studio out of the box helps usher this in. With games available on Steam, which runs natively on Ubuntu Studio and is easily installed, and with Steam’s development of Proton for Windows games, we believe game streamers and other streamers on YouTube, Facebook, and Twitch would benefit from such an all-inclusive operating system that saves them both money and time.

Included this cycle is OBS Studio 26.0.2, which includes several new features and additions, too numerous to list here.

For those that would like to use the advanced audio processing power of JACK with OBS Studio, OBS Studio is JACK-aware!

Kdenlive

We have chosen Kdenlive to be our default video editor for several reasons. The largest of these is that it is the most professional video editor included in the Ubuntu repositories; it also integrates very well with the Plasma desktop.

This release brings version 20.08.1, which includes several new features that have been outlined at their website.

Graphics and Photography

Krita

Artists will be glad to see Krita upgraded to version 4.3. While this may not be the latest release, it does include a number of new features over the version included with Ubuntu Studio 20.04.

For a full list of new features, check out the Krita website.

Darktable

This version of the icon seemed appropriate for an October release. :)

For photographers, you’ll be glad to see Darktable 3.2.1 included by default. Additionally, Darktable has been chosen as our default RAW Image Processing Platform.

With Darktable 3.2 comes some major changes, such as an overhaul to the Lighttable, A new snapshot comparison line, improved tooltips, and more! For a complete list, check out the Darktable website.

Introducing Digikam

For the first time in Ubuntu Studio, we are including the KDE application Digikam by default. Digikam is the most advanced photo editing and cataloging tool in open source and includes a number of major features that integrate well into the Plasma desktop.

The version we have by default is version 6.4.0. For more information about Digikam 6.4.0, read the release announcement.

We realize that the version we include, 6.4.0, is not the most recent version, which is why we include Digikam 7.1.0 in the Ubuntu Studio Backports PPA.

For more information about Digikam 7.1.0, read the release announcement.

More Updates

There are many more updates not covered here but are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and know any known issues that you may encounter.

Introducing the Ubuntu Studio Marketplace

Have you ever wanted to buy some gear to show off your love for Ubuntu Studio? Now you can! We just launched the Ubuntu Studio Marketplace. From now until October 27th, you can get our special launch discount of 15% off.

We have items like backpacks, coffee mugs, buttons, and more! Items for men, women, and children, even babies! Get your gear today!

Proceeds from commissions go toward supporting further Ubuntu Studio development.

Now Accepting Donations!

If you find Ubuntu Studio useful, we highly encourage you to donate toward its prolonged development. We would be grateful for any donations given!

Three ways to donate!

Patreon

Become a Patron!

The official launch date of our Patreon campaign is TODAY! We have many goals, including being able to pay one or more developers at least a part-time wage for their work on Ubuntu Studio. However, we do have some benefits we would like to offer our patrons. We are still hammering out the benefits to patrons, and we would love to hear some feedback about what those benefits might be. Become a patron, and we can have that conversation together!

Liberapay

Liberapay is a great way to donate to Ubuntu Studio. It is built around projects, like ours, that are made of and using free and open source software. Their system is designed to provide stable crowdfunded income to creators.

PayPal

You can also donate directly via PayPal. You can establish either monthly recurring donations or make one-time donations. Whatever you decide is appreciated!

Get Involved!

Another great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, promotion, and documentation
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer, KDE Plasma Transition

22 October, 2020 06:30PM

Ubuntu Blog: Build a Raspberry Pi Desktop with an Ubuntu heart

On the 22nd October 2020, Canonical released an Ubuntu Desktop image optimised for the Raspberry Pi. The Raspberry Pi Foundation’s 4GB and 8GB boards work out of the box with everything users expect from an Ubuntu Desktop. It is our honour to contribute an optimised Ubuntu Desktop image to the Raspberry Pi Foundation’s mission to put the power of computing into people’s hands all over the world.


The right hardware

Since the Raspberry Pi Foundation began its mission, users have been using their boards to run everything in their lives. Whether that’s making DIY devices, learning to code or building products, it was made possible by Raspberry Pis. But running a full-featured, LTS desktop that can handle the expectations of everyday users, without technical knowledge, wasn’t really possible. Until recently.   

The Raspberry Pi 4 debuted with the graphics, RAM and connectivity needed for a Linux workstation. Users finally had the hardware to make a Raspberry Pi into a viable primary PC. But there were still issues. Most importantly, a lot of the desktop options either required a non-zero amount of technical knowledge or weren’t suited for long-term use, usually because of a lack of upstream support or because they ran unmaintained, niche software.

Canonical, the company that publishes Ubuntu, is and continues to be a long-term fan of the Raspberry Pi Foundation. Our missions to make technology more accessible to people all over the world align, and both organisations understand the value of an active and trusting community. So, when the Raspberry Pi 4 launched with the capabilities to run a full-fat Ubuntu Desktop, we didn’t blink.


An engineering collaboration

The Ubuntu Desktop team, the Foundations team, and the Kernel team got to work. While they were cooking, we reached out to the Raspberry Pi Foundation to strengthen our relationship and express our appreciation for their work. One thing led to another and, seeing the value in collaboration, we began to work together on some common projects, one of which is this full Ubuntu Desktop for the Raspberry Pi.

After months of work and plenty of collaboration, the Ubuntu Desktop image for the Raspberry Pi is here! On a Raspberry Pi 4 (with 4GB or 8GBs RAM) you can do everything the average desktop user would expect. Surf the web, watch the latest films, develop new software, read the news, or do your shopping. All from the comfort of a Raspberry Pi. 


This joining of Raspberry Pi, the incredible maker and educational hardware, used in schools, factories and robots alike, and the Ubuntu Desktop, best known for its leading cloud and desktop offerings, delivers not only a low-cost, versatile desktop experience but also a gateway to all of open source software. The Ubuntu Desktop on Raspberry Pi comes with committed long term support and a deepening collaboration upstream which, we hope, will only continue to flourish.   

The open source ARM workstation

Ubuntu on Raspberry Pi is not only a great place to start with Ubuntu, and Linux in general, but is already used and favoured by inventors and entrepreneurs, too. Start learning to code, develop applications or take it production, all from one board, with one operating system (OS).

Not only that, the Raspberry Pi is an ARM computer, like Android or iOS phones. You can build and test apps for ARM on a low-cost board that is still powerful enough to orchestrate workloads, manage virtual machines or run a micro-cloud.

What all this means is that a Raspberry Pi with Ubuntu is a path into the world of ARM computing, ARM development and ARM-based products: at the edge, on workstations and in the cloud. Most IoT devices out there already run ARM. The Raspberry Pi is a tried and tested ARM board that is the brains of countless devices, in people’s homes as a hobby and in production as enterprise-grade products. Ubuntu is there too with its embedded version, Ubuntu Core: Ubuntu optimised to work on the Raspberry Pi to give users an industry-standard, secure, minimal OS for production.


But this has been the case for some time. What’s new is that Ubuntu Desktop on Raspberry Pi delivers a more accessible and more familiar experience to get going with ARM. With Apple announcing their ARM-based Mac intentions, and the likes of Amazon’s Graviton2 making high-performance ARM compute cost-effective, we will soon see companies and app developers across industries move to ARM. Or risk losing out.

Get it

To get the Ubuntu Desktop from the Raspberry Pi Foundation, download their Raspberry Pi Imager application. The app is available on macOS, Windows and Linux, and the new Ubuntu Desktop image is baked right in. To get the image straight from Canonical, head to the website and look atop the Ubuntu Server and Core images.
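On an existing Ubuntu machine, one convenient way to get the Imager is from the Snap Store; the snap name below is an assumption on my part, so check the store listing first:

sudo snap install rpi-imager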


To find out more about the benefits of the image go to the website and have a read. Or, watch this video where Martin Wimpress, Director of Engineering for Ubuntu Desktop, and I, Product Manager of IoT and Makerspace initiatives, talk through the whole process.

Then, once you have the image, know all the context, and know how to get going, there’s always more. On the Desktop itself, start using it and Tweet @ubuntu whatever it is you’re using it for. Or, fill out this form for a chance to win some free stuff. We’d love it just to see that you’ve got it up and running. Then, head over to our community forum to leave any comments or feedback you have too.

Or, if you’re interested in getting more out of Raspberry Pis, there are plenty more options too. For cloud enthusiasts, you can try MicroK8s Raspberry Pi clustering to orchestrate and manage workloads and practice your Kubernetes. Or, for embedded/IoT device developers, take a look at Ubuntu Core. Build a portfolio of appliances that turn your Raspberry Pi into a dedicated device that does one thing, perfectly.


To conclude

The full Ubuntu Desktop is now available for the Raspberry Pi. With it, users have access to a full Linux workstation on the world’s most versatile and popular single-board computer. This development paves the way not only to a more practical Raspberry Pi desktop experience but also to the new world of cloud computing and applications running on ARM. We have a deep admiration for the Raspberry Pi Foundation and look forward to working with them and their technology more in the future.




22 October, 2020 06:29PM

Ubuntu Podcast from the UK LoCo: S13E31 – Cheers with water

This week we’ve been upgrading computers and Ebaying stuff. We discuss the Windows Calculator coming to Linux, Microsoft Edge browser coming to Linux, Ubuntu Community Council elections and LibreOffice office getting Yaru icons. We also round up our picks from the general tech news.

It’s Season 13 Episode 31 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

22 October, 2020 02:00PM

Ubuntu Blog: Ubuntu 20.10 on Raspberry Pi delivers the full Linux desktop and micro clouds


22nd October 2020: Canonical today released Ubuntu 20.10 with optimised Raspberry Pi images for desktop in support of learners, inventors, educators and entrepreneurs, bringing the world’s most open platform to the world’s most accessible hardware.

“In this release, we celebrate the Raspberry Pi Foundation’s commitment to put open computing in the hands of people all over the world,” said Mark Shuttleworth, CEO at Canonical. “We are honoured to support that initiative by optimising Ubuntu on the Raspberry Pi, whether  for personal use, educational purposes or as a foundation for their next business venture.”

The Raspberry Pi 2, 3, and 4 join a very long list of x86 and ARM devices certified with Ubuntu, the operating system (OS) best known for its public cloud and desktop offerings. Dell, HP and Lenovo all certify PCs with Ubuntu Desktop, which is also the most widely used OS on the AWS, Microsoft Azure, Google, IBM and Oracle clouds.

Ubuntu 20.10 also includes LXD 4.6 and MicroK8s 1.19 for resilient micro clouds, small clusters of servers providing VMs and Kubernetes on demand at the edge, for remote office, branch office, warehouse and distribution oriented infrastructure.

Ubuntu Desktop 20.10

On top of Raspberry Pi desktop support, Ubuntu 20.10 includes GNOME 3.38, which tweaks the apps grid, removes the frequents tab and allows apps to be ordered and organised however users prefer. The battery percentage display toggle has been exposed in power settings, private WiFi hotspots can be shared using uniquely generated QR codes and a restart option has been added to the status menu next to logout/power off. 

The 20.10 desktop sees added support for Ubuntu Certified devices. More Ubuntu workstations now receive biometric identification support out of the box. 2-in-1 devices with on screen keyboards are now fully supported enabling an improved Ubuntu experience on devices including the Dell XPS 2-in-1 and Lenovo Yoga.


Raspberry Pi models with 4GB or 8GB RAM gain full support for the Ubuntu Desktop. “From the classic Raspberry Pi board to the industrial grade Compute Module, this first step to an Ubuntu LTS on Raspberry Pi with long term support and security updates matches our commitment to widen access to the very best computing and open source capabilities” said Eben Upton, CEO of Raspberry Pi Trading.


Introducing micro clouds

Micro clouds are a new class of infrastructure for on-demand compute at the edge. Micro clouds are distributed, minimal and come in small to extremely large scale. In Ubuntu 20.10, Canonical introduces its micro cloud stack that combines MAAS, LXD, MicroK8s and Ceph on Ubuntu, to deliver resilient pocket clouds hardened for mission-critical workloads in 5G RANs, industry 4.0 factories, V2X infrastructures, smart cities and health care facilities. 

On a Raspberry Pi, users can start with MicroK8s to orchestrate highly available workloads at the edge, or with LXD to build a home lab appliance using LXD’s clustering and virtual machine management capabilities. The Ubuntu 20.10 release gives users a way to experiment, test, or develop with full cloud capabilities through the Raspberry Pi. With Ubuntu 20.10 on a Raspberry Pi, anything is possible, from robotics to AI/ML.
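As a quick illustration (not from the press release), both components are delivered as snaps, so getting started on an Ubuntu board typically looks like this:

sudo snap install microk8s --classic    # Kubernetes in a single snap
sudo microk8s status --wait-ready       # wait for the node to come up
sudo snap install lxd
sudo lxd init --auto                    # non-interactive default setup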


Ubuntu 20.10 will be available to download here

To learn more about Ubuntu 20.10 on the Raspberry Pi, click here to join the live stream at 5PM (BST) on Friday 23rd October 2020.

For more on what is new in Ubuntu 20.10 in the data centre, including Ubuntu Server, Charmed OpenStack and MAAS, register for the webinar on November 4th 2020.

<Ends>

About Canonical

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

22 October, 2020 10:05AM

Ubuntu MATE: Ubuntu MATE 20.10 Release Notes

The releases following an LTS are always a good time ⌚ to make changes that set the future direction 🗺️ of the distribution with an eye on where we want to be for the next LTS release. Therefore, Ubuntu MATE 20.10 ships with the latest MATE Desktop 1.24.1, keeps pace with other developments within Ubuntu (such as Active Directory authentication) and migrates to the Ayatana Indicators project.

If you want bug fixes 🐛, kernel updates 🌽, a new web camera control 🎥, and a new indicator 👉 experience, then 20.10 is for you 🎉. Ubuntu MATE 20.10 will be supported for 9 months until July 2021. If you need Long Term Support, we recommend you use Ubuntu MATE 20.04 LTS.

Read on to learn more… 👇

Ubuntu MATE 20.10 (Groovy Gorilla)

What’s changed since Ubuntu MATE 20.04?

MATE Desktop

If you follow the Ubuntu MATE twitter account 🐦 you’ll know that MATE Desktop 1.24.1 was recently released. Naturally Ubuntu MATE 20.10 features that maintenance release of MATE Desktop. In addition, we have prepared updated MATE Desktop 1.24.1 packages for Ubuntu MATE 20.04 that are currently in the SRU process. Given the number of MATE packages being updated in 20.04, it might take some time ⏳ for all the updates to land, but we’re hopeful that the fixes and improvements from MATE Desktop 1.24.1 will soon be available for those of you running 20.04 LTS 👍

Active Directory

The Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. We’ve been tracking that work and the same capability is available in Ubuntu MATE too.

Enroll your computer into an Active Directory domain

Ayatana Indicators

There is a significant under the hood change 🔧 in Ubuntu MATE 20.10 that you might not even notice 👀 at a surface level; we’ve replaced Ubuntu Indicators with Ayatana Indicators.

We’ll explain some of the background, why we’ve made this change, the short term impact and the long term benefits.

What are Ayatana Indicators?

In short, Ayatana Indicators is a fork of Ubuntu Indicators that aims to be cross-distro compatible and re-usable for any desktop environment 👌 Indicators were developed by Canonical some years ago, initially for the GNOME2 implementation in Ubuntu and then refined for use in the Unity desktop. Ubuntu MATE has supported the Ubuntu Indicators for some years now and we’ve contributed patches to integrate MATE support into the suite of Ubuntu Indicators. Existing indicators are compatible with Ayatana Indicators.

We have migrated Ubuntu MATE 20.10 to Ayatana Indicators and Arctica Greeter. I live streamed 📡 the development work to switch from Ubuntu Indicators to Ayatana Indicators which you can find below if you’re interested in some of the technical details 🤓

The benefits of Ayatana Indicators

Ubuntu MATE 20.10 is our first release to feature Ayatana Indicators and as such there are a couple of drawbacks; there is no messages indicator and no graphical tool to configure the display manager greeter (login window) 😞

Both will return in a future release and the greeter can be configured using dconf-editor in the meantime.

Configuring Arctica Greeter with dconf-editor
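
As an illustration, the greeter settings can also be inspected from a terminal; the dconf path below is an assumption on my part and may differ between releases:

dconf dump /org/ArcticaProject/arctica-greeter/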

That said, there are significant benefits that result from migrating to Ayatana Indicators:

  • Debian and Ubuntu MATE are now aligned with regards to Indicator support; patches are no longer required in Ubuntu MATE which reduces the maintenance overhead.
  • MATE Tweak is now a cross-distro application, without the need for distro specific patches.
  • We’ve switched from Slick Greeter to Arctica Greeter (both forks of Unity Greeter).
    • Arctica Greeter integrates completely with Ayatana Indicators; so there is now a consistent Indicator experience in the greeter and desktop environment.
  • Multiple projects are now using Ayatana Indicators, including desktop environments, distros and even mobile phone projects such as UBports. With more developers collaborating in one place we are seeing the collection of available indicators grow 📈
  • Through UBports contributions to Ayatana Indicators we will soon have a Bluetooth indicator that can replace Blueman, providing a much simpler way to connect and manage Bluetooth devices. UBports have also been working on a network indicator and we hope to consolidate that to provide improved network management as well.
  • Other indicators that are being worked on include printers, accessibility, keyboard (long absent from Ubuntu MATE), webmail and display.

So, that is the backstory about how developers from different projects come together to collaborate on a shared interest and improve software for their users 💪

Webcamoid

We’ve replaced Cheese 🧀 with Webcamoid 🎥 as the default webcam tool for several reasons.

  • Webcamoid is a full webcam/capture configuration tool with recording, overlays and more, unlike Cheese. While there were initial concerns 😔 (since Webcamoid is a Qt5 app), nearly all the requirements in the image are pulled in via YouTube-DL 🎉.
  • We’ve disabled notifications 🔔 for Webcamoid updates when it is installed from the universe pocket as a deb, since these would cause errors on the user’s system and force them to download a non-deb version. This only affects users who don’t have an existing Webcamoid configuration.

Linux Kernel

Ubuntu MATE 20.10 includes the 5.8 Linux kernel. This includes numerous updates and added support since the 5.4 Linux kernel released in Ubuntu 20.04 LTS. Some notable examples include:

  • Airtime Queue limits for better WiFi connection quality
  • Btrfs RAID1 with 3 and 4 copies and more checksum alternatives
  • USB 4 (Thunderbolt 3 protocol) support added
  • x86: 5-level paging support enabled by default
  • Intel Gen11 (Ice Lake) and Gen12 (Tiger Lake) graphics support
  • Initial support for AMD Family 19h (Zen 3)
  • Thermal pressure tracking for better task placement with respect to CPU cores
  • XFS online repair
  • OverlayFS pairing with VirtIO-FS
  • General Notification Queue for key/keyring notification, mount changes, etc.
  • Active State Power Management (ASPM) for improved power savings of PCIe-to-PCI devices
  • Initial support for POWER10

Raspberry Pi images

We have been preparing Ubuntu MATE 20.04 images for the Raspberry Pi and we will be releasing final images for 20.04 and 20.10 in the coming days 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.8 are Firefox 81, LibreOffice 7.0.2, Evolution 3.38 & Celluloid 0.18.


See the Ubuntu 20.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 20.10

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 20.04 LTS

You can upgrade to Ubuntu MATE 20.10 from Ubuntu MATE 20.04 LTS. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
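
If you prefer a terminal to the GUI steps above, a rough command-line equivalent (assuming Prompt=normal is set in /etc/update-manager/release-upgrades) looks like this:

sudo apt update && sudo apt full-upgrade
sudo do-release-upgrade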

Known Issues

Here are the known issues.

  • Ayatana Indicators: clock missing on panel upon upgrade to 20.10 (no workaround or upstream link listed yet).

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

22 October, 2020 12:00AM

October 21, 2020

hackergotchi for Cumulus Linux

Cumulus Linux

A video walk through of EVPN multihoming

You may have overheard someone talking about EVPN multihoming, but do you know what it is? If you have, are you up to speed on the latest around it? I walk you through it all, beginning to end, in this three-part video series. Watch all three below.

Chapter 1:

EVPN multihoming provides support for all-active server redundancy. In this intro to EVPN multihoming you will hear an overview of the feature and how it compares with EVPN-MLAG.


Chapter 2:

In this episode we dive into the various unicast packet flows in a network with EVPN multihoming. This includes new data plane constructs, such as MAC-ECMP and layer-2 nexthop-groups, that have been introduced for the express purpose of EVPN-MH.


Chapter 3:

PIM-SM is used for optimizing flooded traffic in networks with EVPN-MH. In this episode we walk through the implementation aspects of flooded traffic, including DF election and split-horizon filtering.


Want to know more? You can find more resources about EVPN and all things networking in our resource hub here.

21 October, 2020 09:17PM by Anuradha Karuppiah

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical & Ubuntu Join AfricaCom Virtual 2020

This year, AfricaCom becomes a virtual event as part of the new Virtual Africa Tech Festival – the largest and most influential tech and telecoms event on the continent. Canonical and Ubuntu will be joining as a Lead Stream Sponsor, introducing the  Digital Infrastructure Investment stream of sessions and exhibits with a speaker session by Mark Shuttleworth – Canonical’s founder and CEO. 

Get your free ticket Book a meeting

Mark Shuttleworth’s presentation for Canonical & Ubuntu at AfricaCom 2020

Wednesday, 11 November 2020 12:35 – 12:55

This year’s overarching theme for AfricaCom is ‘connectivity infrastructure and digital inclusion’. Touching upon the topic of Digital Infrastructure Investment within that context, Mark Shuttleworth will deliver a presentation to share his insights on how the foundations of digital connectivity can be built to empower Africa’s connectivity champions.

The presentation is entitled ‘Software-defined everything – managing complexity from core to edge’, and you can access it on the AfricaCom agenda.

So what will Canonical’s session entail? 

Mark will explain how new digital infrastructure is software defined, across many layers from multiple vendors, from central data centers and public cloud right to the cabinet or customer premises. Wrangling software complexity is a primary challenge for communications companies globally. Join Mark to learn how to best tackle this challenge, through a comprehensive exploration of:

  • Public clouds, private clouds and micro clouds
  • Layers of software-defined infrastructure
  • Application management and integration in a multi-vendor, multi-cloud world
  • The new wave of IoT applications
  • Digital Infrastructure Investment

The Canonical & Ubuntu booth at AfricaCom 2020

Come say hi to our team who will be welcoming you at our virtual booth to discuss:

  • NFV infrastructure based on OpenStack and Kubernetes
  • Network functions management and orchestration with OSM
  • Optimising towards edge workloads
  • Solutions around fully managed operations

Access tons of relevant free resources, and hop on a live call with a member of our engineering team to advise you on your organisation’s infrastructure needs.

We hope to see you there! 

Get your free ticket Book a meeting

21 October, 2020 06:56PM

hackergotchi for Purism PureOS

Purism PureOS

A Librem 5 Video Made on a Librem 5

When it comes to making a video, there are a lot of workflows involved. From writing, planning, to local screen capture, all the way to editing raw 4k footage with proxy clips. Even with all that workflow complexity, the following video was made completely on the Librem 5 phone.

Step by Step

While you can use the onboard mic, you can also drive a USB audio interface from a powered USB-C hub. This allowed capturing great audio from a condenser mic.

Cleaning up the audio and editing the video works the same on the phone as it does on any Librem hardware running PureOS.

 

Ultimately the Librem 5 phone lets you take your regular workflow with you while also keeping you in contact with your friends and family.

The Librem 5 is a full-blown quad-core desktop computer that is also a phone.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we, the people, stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post A Librem 5 Video Made on a Librem 5 appeared first on Purism.

21 October, 2020 06:43PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Automating Server Provisioning in phoenixNap’s Bare Metal Cloud with MAAS (Metal-as-a-Service)

As part of the effort to build a flexible, cloud-native ready infrastructure, phoenixNAP collaborated with Canonical on enabling nearly instant OS installation. Canonical’s MAAS (Metal-as-a-Service) solution allows for automated OS installation on phoenixNAP’s Bare Metal Cloud, making it possible to set up a server in less than two minutes.  

Bare Metal Cloud is a cloud-native ready IaaS platform that provides access to dedicated hardware on demand. Its automation features, DevOps integrations, and advanced network options enable organizations to build a cloud-native infrastructure that supports frequent releases, agile development, and CI/CD pipelines. 

Through MAAS integration, Bare Metal Cloud provides a critical capability for organizations looking to streamline their infrastructure management processes.  

What is MAAS? 

Allowing for self-service, remote OS installation, MAAS is a popular cloud-native infrastructure management solution. Its key features include automatic discovery of network devices, zero-touch deployment on major OSs, and integration with various IaC tools. 

Built to enable API-driven server provisioning, MAAS has a robust architecture that allows for easy infrastructure coordination. Its primary components are the Region and Rack controllers, which work together to provide high-bandwidth services to multiple racks and ensure availability. The architecture also contains a central PostgreSQL database, which deals with operator requests.

Through tiered infrastructure, standard protocols such as IPMI and PXE, and integrations with popular IaaS tools, MAAS helps create powerful DevOps environments. Bare Metal Cloud leverages its features to enable nearly instant provisioning of dedicated servers and deliver a cloud-native ready IaaS platform.   

How MAAS on Bare Metal Cloud Works

The integration of MAAS with Bare Metal Cloud allows for under-120-seconds server provisioning and a high level of infrastructure scalability. Rather than building a server automation system from scratch, phoenixNAP relied on MAAS to shorten go-to-market timeframes and ensure an excellent experience for Bare Metal Cloud users.

Designed to bring the cloud experience to bare metal platforms, MAAS enables Bare Metal Cloud users to get full control over their physical servers while having cloud-like flexibility. They can leverage a command line interface (CLI), a web user interface (web UI), and a REST API for querying server properties, deploying operating systems, running custom scripts and rebooting the servers.
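
As a sketch of what that API-driven workflow looks like from the MAAS CLI (the profile name, URL and identifiers below are placeholders):

maas login admin http://maas.example.com:5240/MAAS/ $API_KEY
maas admin machines read                                   # query machine properties
maas admin machines allocate                               # claim a Ready machine
maas admin machine deploy $SYSTEM_ID distro_series=focal   # install an OS on it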

“phoenixNAP’s Bare Metal Cloud demonstrates the full potential of MAAS,” explained Adam Collard, Engineering Manager, Canonical. “We are excited to support phoenixNAP’s growth in the ecosystem and look forward to working with them to accelerate customer deployments.”

Bare Metal Cloud Features and Usage

The capabilities of MAAS enabled phoenixNAP to automate the server provisioning process and accelerate deployment timeframes of its Bare Metal Cloud. The integration also helped ensure advanced application security and control with consistent performance. 

“Incredibly robust and reliable, MAAS is one of the fundamental components of our Bare Metal Cloud,” said Ian McClarty, President of phoenixNAP. “By enabling us to automate OS installation and lifecycle processes for various instance types, MAAS helped us accelerate time to market. We can now offer lightning-fast physical server provisioning to organizations looking to optimize their infrastructure for agile development lifecycles and CI/CD pipelines. Working with the Canonical team was a pleasure at every step of the process, and we look forward to new joint projects in future.”

Bare Metal Cloud is designed with automation in mind and integrates with the most popular IaC tools. It allows for simple server deployment in under 120 seconds, enabled by MAAS OS installation automation capabilities. In addition to this, it includes a range of features designed to support modern IT demands and DevOps approaches to infrastructure creation and management.

Bare Metal Cloud Features

  • Single-tenant, non-virtualized environment
  • Fully automated, API-driven server provisioning
  • Integrations with Terraform, Ansible, and Pulumi
  • SDK available on GitHub
  • Pay-per-use and reserved instances billing models 
  • Dedicated hardware — no “noisy neighbors”
  • Global scalability
  • Cutting edge hardware and network technologies
  • Built with market proven and well-established technology partners
  • Suited for developers and business critical production environments alike

Looking to deploy a Kubernetes cluster on Bare Metal Cloud? 

Download our free white paper titled “Automating the Provisioning of Kubernetes Cluster on Bare Metal Servers in Public Cloud.” 

DOWNLOAD NOW

21 October, 2020 06:12PM

Ubuntu Blog: Canonical & Ubuntu at KubeCon NA Virtual 2020

When: November 17-20, 2020

Where: KubeCon + CloudNativeCon North America 2020 Virtual Platform 

By now it’s no surprise that KubeCon NA is going virtual, like the majority of events worldwide. Is that bad news? Quite the opposite! According to CNCF, this year’s KubeCon EU – the first KubeCon to ever be hosted virtually – made it possible for over 18,700 Kubeheads to sign up, 72% of whom were first-time KubeCon + CloudNativeCon attendees. In other words, as we have all believed for so many years now, tech is helping the community grow and get closer.

So the time is approaching fast for our second virtual KubeCon, this time addressing the US, and we couldn’t be more excited! A little birdie told us the organisers are planning heaps of new things for this KubeCon, and of course so are we. Here’s a taste of what you’ll see:

Canonical showcases MicroK8s HA at KubeCon NA 2020 

Get your ticket Book a meeting

This month, Canonical made a new announcement, introducing autonomous high availability (HA) clustering in MicroK8s. This gives MicroK8s the added benefit of full resilience for production workloads in cloud and server deployments.

Designed as a minimal conformant Kubernetes, MicroK8s installs and clusters with a single command. 

Want to see it in action? We’re excited to show you! Feel free to pre-book a meeting with one of our engineers using the button below, or keep reading for more on our next section. 

Already popular for IoT and developer workstations, MicroK8s is one of Canonical’s two Kubernetes distributions. One of the many reasons Canonical’s lightweight Kubernetes has gained so much community attention? You can install MicroK8s on any device in under a minute.

High availability is enabled automatically once three or more nodes are clustered, and the data store migrates automatically between nodes to maintain quorum in the event of a failure. “The autonomous HA MicroK8s delivers a zero-ops experience that is perfect for distributed micro clouds and busy administrators”, says Alex Chalkias, Product Manager at Canonical.
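
As a minimal sketch of how that clustering works in practice (node names and channels are illustrative):

sudo snap install microk8s --classic
microk8s add-node        # run on the first node; prints a join command with a token
# run the printed "microk8s join <ip>:25000/<token>" command on the second and third nodes
microk8s status          # once three or more nodes have joined, HA is on automatically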


Book a meeting

The Canonical & Ubuntu booth 

Throughout the event, we welcome you to pop by and: 

  1. Have a live call with our senior engineers who will be available to hear your questions, offer advice, and give you customised demos.
  2. Throw your name in the proverbial hat (i.e. our online raffle) for a chance to win a rare Ubuntu 20.04 Release Focal Fossa T-shirt. Get a chance to show your love for Ubuntu and be the envy of every kid on the playground. 
  3. Access demos on all things Kubernetes. We’ll be showing you various use cases of using our two K8s distributions, Charmed Kubernetes and MicroK8s, for multi-cloud Kubernetes. Indicatively: 
  • Universal operators for Charmed Kubernetes installation and app deployment 
  • Effortless Charmed K8s upgrade
  • Charmed Kubeflow as a Kubernetes workload
  • MicroK8s HA: lightweight Kubernetes done right
  • MicroK8s and Charmed Ceph at the Edge
  • MicroK8s Basics 1
  • MicroK8s Basics 2 
  • MicroK8s with Multus add-on 
  • Charmed OSM
Canonical’s demo at the KubeCon NA Main Theater 

We know our KubeCon friends can’t get enough demo time, so on top of our short booth demos, we’re also presenting a 15 minute video at the Main Theater, showcasing how you can run and scale operators on VMs and bare metal by leveraging the Juju operator lifecycle manager (OLM) on K8s. 
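
For a flavour of what that demo covers, a minimal Juju-on-MicroK8s sketch might look like the following (the charm name here is purely illustrative):

juju bootstrap microk8s micro     # create a controller on a local MicroK8s cloud
juju add-model demo
juju deploy postgresql-k8s        # deploy an operator (charm) onto Kubernetes
juju status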

Stay tuned for time and date details!

KubeCon Co-Located Event: Open Operators Training Day hosted by Canonical

As always, we aim to give back to the community in any way we can. That’s why this time round we’re hosting a full-day, co-located training event free of charge for all KubeCon NA attendees. 

Date: Tuesday, November 17
Registration Fees: Complimentary

The Open Operators Day is for devops to learn about the Open Operator Collection, an open-source initiative to provide a large number of interoperable, easily integrated operators for common workloads. We’ll talk about where Open Operators come from and what the community is looking to build. Organized by Canonical, the publisher of Ubuntu, the day will be split into 3 time-zone-friendly sessions:

  • Asia: 14:00-18:00 CST (1:00 AM – 5:00 AM ET)
  • EMEA: 13:00-17:00 BST (8:00 AM – 12:00 PM ET)
  • Americas: 11:00-15:00 PST (2:00 PM – 6:00 PM ET)

Each session will mix keynotes, training and community discussions. Please note that pre-registration is required. For questions regarding this event, please reach out to marketing@canonical.com.

Register for Operators Day

21 October, 2020 11:21AM

Ubuntu Blog: OpenStack at 10 – from peak to plateau of productivity

This week is the latest Open Infrastructure Summit, in the same week that the OpenStack Foundation became the Open Infrastructure Foundation, reflecting the expansion of the organisation’s mission, scope and community to advance open source infrastructure over the next decade. It is also ten years since OpenStack launched, and a lot has changed during that time.

We asked freelance journalist, Sean Michael Kerner, to share his views on the last ten years. Sean is a freelance journalist writing on myriad IT topics for publications around the world. He has spoken at more OpenStack events than he cares to remember. English is his second language (Klingon his first). Follow him on Twitter @TechJournalist.

10 years ago in July 2010, I got an unusual pitch from a PR person. It was the beginning of a long and winding road that defines my experience and viewpoint on OpenStack.

Unlike the usual spate of product and open source pitches from vendors that I got at the time (and still get), the pitch I got on the sunny July afternoon was an offer to speak with the CTO of IT at NASA. It was an offer I couldn’t refuse – and I suspect it’s also the reason why OpenStack got so much attention early on – it was literally ‘rocket science’. In a 2012 video interview I did with Chris Kemp after he left the role of CTO at NASA to start his own OpenStack startup, he told me that in his view OpenStack could well become one of NASA’s great contributions to society.

That was the early hype cycle, and it was amazing to watch. From the early days of just two vendors in 2010, to the heady days of 2012 and the San Diego summit where analysts (and yours truly) were in awe of the rapid embrace of OpenStack by large IT vendors.  A year later in 2013 at the Portland summit, I remember clearly being approached by a venture capitalist after an analyst panel. The VC wanted to know who I thought they should invest in. There was a board of sponsors and vendors mounted against the wall and I told him without hesitation – in five years half the vendors would be gone – I wasn’t wrong.

Technology

While I’ve written more than my fair share about the ‘hype’, my interest in OpenStack has been about the technology, the processes that make the project work and the people that bring it all together.

Much like the early explosion of vendors, OpenStack had a rapid acceleration of projects in the early days. It started with just Nova for compute and Swift for storage. Then with each successive release, more projects came in: Keystone for identity, Glance for images, Quantum/Neutron for networking and so on. The OpenStack Foundation struggled in those early days to define what OpenStack was actually all about – there were efforts like RefStack, an attempt to define a reference implementation, among others. There was also the ‘Big Tent’ – an idea where lots of different ideas and projects could all cohabitate under the OpenStack umbrella.

At one point, I could name every project in the OpenStack family – then one day I couldn’t. Did OpenStack bite off more than it could chew? Take on more projects than anyone could use? Aim to be all things to all people when not all people needed all things? … Maybe.

In 2019, at the OpenStack Summit in Denver, which was the first branded as the Open Infrastructure Summit, the halls were quiet and it was the first where I remember there being fewer people than any prior event. The hype was gone, but the core technology remained.

It Was Never Really OpenStack vs AWS

In the early days of OpenStack there was an idea and perhaps an expectation from some that most enterprises wanted or needed to build their own private clouds. There was also a hope that service providers would embrace OpenStack to build public cloud offerings that would effectively challenge or perhaps even surpass AWS.

That’s not what OpenStack is today – or where it ever really was – even if that’s what Rackspace wanted it to be. OpenStack is about open infrastructure and it’s fitting that now that’s also the rebranded name of the OpenStack Foundation.

10 years after first engaging with OpenStack, I have every expectation that it will still be around 10 years from now. OpenStack, though it has gone through the ‘trough of disillusionment’ is now firmly headed toward the ‘plateau of productivity’ at the end of the Gartner hype cycle.

We’re still talking about OpenStack 10 years later because it’s still useful.  We’re still talking about OpenStack because it hasn’t stood still, it has continued to evolve and it’s a technology that still matters. In the final analysis, technology survives if it can adapt to the actual needs of the market and that’s something that vendors like Canonical have long recognised. Among the many interviews I’ve had the privilege of doing with Mark Shuttleworth at OpenStack events over the years was the Barcelona Summit in 2016.

“There is no shortage of truly terrible ideas in OpenStack; it’s a truly open forum, with very little leadership and a lot of governance,” Shuttleworth said at the time. “OpenStack needs to focus on stuff that matters.”

And so it has.

21 October, 2020 10:50AM

Ubuntu Blog: Telco cloud: what is that?

A telco cloud, or network functions virtualisation infrastructure (NFVI), is a cloud environment optimised for telco workloads. It is usually based on well-known technologies like OpenStack. Thus, in many ways, it resembles ordinary clouds. On the other hand, however, it differs from them, because telco workloads have very specific requirements. Those include performance acceleration, a high level of security and orchestration capabilities. In order to better understand where those demands are coming from, let’s start by reviewing what kinds of workloads telcos are running in the cloud.

Telco workloads in the cloud

Have you ever wondered how the telecommunications infrastructure works? You probably have not, but do not worry, you are not the only one. All we usually care about today is a stable Internet connection. Understanding how it works is of secondary importance. However, behind a network socket or your Wi-Fi router, there is a massive infrastructure which provides this connection. It consists of thousands of interconnected services, including firewalls, base transceiver stations (BTS) for providing mobile connectivity, voice and data aggregation systems, etc.

Historically, all of those services used to be implemented in hardware. Nowadays, however, service providers are moving to software-based network services. The migration is driven by the economic benefits resulting from better utilisation of resources in cloud environments. As software-based network services are implemented on top of virtual machines (VMs) or containers, service providers can simply run them in a cloud, benefitting from lower operational costs and improved agility. Such a telco cloud, however, must meet certain criteria before network services can be deployed on top of it.

Telco cloud under the hood

In order to implement a telco cloud, service providers can use either proprietary or open source technologies. Over the past few years, the industry has converged on OpenStack as the basis for open source telco cloud implementations. What makes the telco cloud different from an ordinary OpenStack cloud, however, are the very specific features required by telco workloads.

Performance

Among various metrics, performance results are what telcos care about the most. This is because telco workloads are network-heavy: they have to process up to 100 Gb of traffic per second. Thus, it is important that telco workloads achieve comparable performance results regardless of whether they are implemented in hardware or in software. This is challenging, however, as VMs usually cause performance degradation. In order to solve this problem, telco clouds implement a set of performance extensions, such as single-root input/output virtualisation (SR-IOV), the data plane development kit (DPDK) or central processing unit (CPU) pinning. All of that allows software-based network services to achieve performance results comparable to those achieved by physical machines.
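
As an illustration of how such extensions are typically exposed to workloads, a Nova flavor can request dedicated (pinned) CPUs and hugepages via extra specs; the flavor name and sizes below are examples, not a recommendation:

openstack flavor create --vcpus 8 --ram 16384 --disk 40 vnf.large
openstack flavor set vnf.large --property hw:cpu_policy=dedicated --property hw:mem_page_size=large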

Security

Another important aspect is security. Telcos are known for being security-oriented. Thus, the telco cloud must provide the desired level of security too. Service providers usually achieve that by applying hardening at the operating system level. Hardening is a process of securing the system by reducing potential vulnerabilities to an absolute minimum. This is achieved by disabling unnecessary services, narrowing down permissions, closing open ports, etc. For obvious reasons, the telco cloud is also deployed on-prem in most cases. The security team can later use standard technologies, such as packet inspection or data encryption, to secure the telco cloud at each layer of the infrastructure stack.

Orchestration

Last but not least, orchestration also characterises the telco cloud. Although orchestration is a broader term in general, it is especially important in the case of telco workloads. This is because software-based network services are usually very complex. They consist of multiple interconnected components (network functions) which are often distributed across multiple substrates. Thus, having a tool which can arrange resources, deploy network services and maintain them post-deployment is important for service providers. Among various proprietary and open source solutions, the Open Source MANO (OSM) project has recently been gaining momentum, enabling telcos with management and orchestration (MANO) capabilities.
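
As a rough sketch, onboarding and instantiating a network service with the OSM client looks roughly like this (package, NS and VIM account names are placeholders, and command names vary slightly between OSM releases):

osm vnfd-create my_vnf.tar.gz
osm nsd-create my_ns.tar.gz
osm ns-create --ns_name demo-ns --nsd_name my_ns --vim_account my-vim
osm ns-list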

Telco cloud on Ubuntu

Canonical is an established leader in the field of implementing cloud environments for telcos. Over the past few years, the company has successfully onboarded leading global and national tier-1 service providers like AT&T, BT and Bell onto its open source NFVI platform based on Ubuntu Server, Charmed OpenStack and Charmed Ceph. With an increasing demand for cloud-native network services, Canonical also stands ready to offer Charmed Kubernetes as an extension of the underlying cloud platform. Finally, as workload orchestration becomes the biggest challenge in the telco world nowadays, the company provides Charmed OSM to enable service providers with these capabilities.

To get in touch with Canonical with regards to solutions for telecommunications, click here.

To learn more, watch the webinar: “NFV, cloud-native networking and OSM: everything you need to know” or visit Canonical’s website.

21 October, 2020 10:18AM

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Problems with receiving our email on Microsoft hosts

Existing users and users trying to register may find themselves unable to receive any subscription notification, password reset or sign-up emails from us. This is because, a few weeks back, Microsoft started to block the IP address ranges of our SMTP-as-a-service provider Sendgrid on practically all of their free, and possibly also paid, email hosting service platforms. This means that emails to you, should you use a @live.com, @hotmail.com, @outlook.com or similar Microsoft email service, will never arrive, as they are simply not being accepted at all.

We moved to Sendgrid 5 years ago in the hope of not having to deal with email deliverability issues again, because it is not getting easier to send email directly to the big hosters from random cloud IP address ranges. However, as we likely won't be able to, or want to, cough up money in order to get a clean, dedicated source IP address from Sendgrid, we need to evaluate other options and eventually migrate to another email sending setup.

Until then, please use an email address not hosted at Microsoft for signing up, and if you put value on receiving notifications for your topic subscriptions, we recommend switching email addresses, too.

If you have problems signing up, please email admin@bunsenlabs.org -- as some new users have already done -- with your username and the email address you used when signing up; we'll be happy to manually set a password for you so you can sign in normally after registering (although it may take a day or two).

Sorry for any inconvenience caused.

21 October, 2020 12:00AM

October 20, 2020

hackergotchi for Tails

Tails

Tails 4.12 is out

This release fixes many security vulnerabilities. You should upgrade as soon as possible.

Changes and updates

  • Update Tor Browser to 10.0.2.

  • Update tor to 0.4.4.5.

  • Update Linux to 5.8 and most firmware packages. This should improve the support for newer hardware (graphics, Wi-Fi, etc.).

  • Add a button to cancel an automated upgrade while downloading. (#17310)

Fixed problems

  • Fix several internationalization issues in Electrum, Tails Installer, and Tails Upgrader. (#17958, #17758, and #17961)

  • Anonymize URLs in the technical details provided by WhisperBack. (#17147)

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 4.12

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 4.2 or later to 4.12.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 4.12 directly:

What's coming up?

Tails 4.13 is scheduled for November 17.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

20 October, 2020 12:34PM

hackergotchi for SparkyLinux

SparkyLinux

XanMod Linux Kernel

There is a new tool available for Sparkers: XanMod Linux Kernel Installer

What is XanMod Linux Kernel?

XanMod is a general-purpose Linux kernel distribution with custom settings and new features. Built to provide a stable, responsive and smooth desktop experience.
The real-time version is recommended for critical runtime applications such as Linux gaming, eSports, streaming, live productions and ultra-low latency enthusiasts.
Supports all recent 64-bit versions of Debian and Ubuntu-based systems.

Main Features:
– Preemptive Full Tickless Kernel at 500Hz w/ Tuned CPU Core Scheduler.
– RCU Boost for better responsiveness and lower overall system latency.
– Block Layer w/ multi-threaded runqueue for high I/O throughput.
– Caching, Virtual Memory Manager and CPUFreq Governor improvements.
– BBR TCP Congestion Control + FQ-PIE Packet Scheduling and AQM Algorithm [5.9][5.9-rt][5.8].
– ORC Unwinder for Kernel Stack Traces (debuginfo) implementation.
– Third-party patchset available: ZSTD kernel, initrd and modules support [5.9][5.9-rt][5.8], Full x86_64 FSGSBASE instructions [5.9][5.8], Clear Linux [partial], CK’s Hrtimer Patchset, Wine / Proton Fsync, PCIe ACS Override, Aufs [5.4] and GCC graysky’s.
– Real-time Linux kernel (PREEMPT_RT) build available [5.9-rt][5.4-rt].
– Generic kernel package for compatibility with most Debian & Ubuntu based distributions. Built on the latest GCC 10.2 and Binutils 2.35.
– GPLv2 license. Can be built for any distribution or purpose.

Installation (Sparky testing amd64 only):

sudo apt update
sudo apt install sparky-aptus

to upgrade APTus to the latest version, then launch APTus-> System-> XanMod Linux Installer icon.
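
Once the XanMod kernel is installed and you have rebooted into it, a quick sanity check might look like this (a rough sketch; exact output depends on the variant you picked):

uname -r                                  # the release string should include "xanmod"
sysctl net.ipv4.tcp_congestion_control    # expected to report bbr, per the feature list above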

XanMod Linux Kernel Installer

Founder: Alexandre Frade
License: GNU GPL v2
Git: github.com/xanmod/linux

 

20 October, 2020 11:38AM by pavroo

hackergotchi for Qubes

Qubes

XSAs 286, 331, 332, and 345 do not affect the security of Qubes OS

The Xen Project has published the following Xen Security Advisories: XSA-286, XSA-331, XSA-332, and XSA-345. These XSAs do not affect the security of Qubes OS, and no user action is necessary.

Special note: Although XSA-345 is included in QSB #060, we do not consider XSA-345 to affect the security of Qubes OS, since the default configuration is safe, and we have already implemented appropriate safeguards to prevent users from changing to a vulnerable configuration by accident. Please see the Impact section of QSB #060 for further details.

These XSAs have been added to the XSA Tracker:

https://www.qubes-os.org/security/xsa/#286
https://www.qubes-os.org/security/xsa/#331
https://www.qubes-os.org/security/xsa/#332
https://www.qubes-os.org/security/xsa/#345

20 October, 2020 12:00AM

QSB #060: Multiple Xen issues (XSA-345, XSA-346, XSA-347)

We have just published Qubes Security Bulletin (QSB) #060: Multiple Xen issues (XSA-345, XSA-346, XSA-347). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

Special note: Although XSA-345 is included in this QSB, we do not consider XSA-345 to affect the security of Qubes OS, since the default configuration is safe, and we have already implemented appropriate safeguards to prevent users from changing to a vulnerable configuration by accident. Please see the Impact section in QSB #060 below for further details.

View QSB #060 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-060-2020.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View the associated XSAs in the XSA Tracker:

https://www.qubes-os.org/security/xsa/#345
https://www.qubes-os.org/security/xsa/#346
https://www.qubes-os.org/security/xsa/#347



             ---===[ Qubes Security Bulletin #60 ]===---

                             2020-10-20


           Multiple Xen issues (XSA-345, XSA-346, XSA-347)


Summary
========

On 2020-10-20, the Xen Security Team published the following Xen
Security Advisories (XSAs):

XSA-345 [1] "x86: Race condition in Xen mapping code":
| The Xen code handling the updating of the hypervisor's own pagetables
| tries to use 2MiB and 1GiB superpages as much as possible to maximize
| TLB efficiency.  Some of the operations for checking and coalescing
| superpages take non-negligible amount of time; to avoid potential lock
| contention, this code also tries to avoid holding locks for the entire
| operation.
| 
| Unfortunately, several potential race conditions were not considered;
| precisely-timed guest actions could potentially lead to the code
| writing to a page which has been freed (and thus potentially already
| reused).
| 
| A malicious guest can cause a host denial-of-service.  Data corruption
| or privilege escalation cannot be ruled out.


XSA-346 [2] "undue deferral of IOMMU TLB flushes":
| To efficiently change the physical to machine address mappings of a
| larger range of addresses for fully virtualized guests, Xen contains
| an optimization to coalesce per-page IOMMU TLB flushes into a single,
| wider flush after all adjustments have been made.  While this is fine
| to do for newly introduced page mappings, the possible removal of
| pages from such guests during this operation should not be "optimized"
| in the same way.  This is because the (typically) final reference of
| such pages is dropped before the coalesced flush, and hence the pages
| may have been put to a different use even though DMA initiated by
| their original owner might still be in progress.
| 
| A malicious guest might be able to cause data corruption and data
| leaks.  Host or guest Denial of Service (DoS), and privilege
| escalation, cannot be ruled out.


XSA-347 [3] "unsafe AMD IOMMU page table updates":
| AMD IOMMU page table entries are updated in a step by step manner,
| without regard to them being potentially in use by the IOMMU.
| Therefore it was possible that the IOMMU would read and then use a
| half-updated entry.  Furthermore, updates to Device Table entries
| lacked suitable ordering enforcement for certain steps involved in
| these updates.
| 
| In both case the specific outcome heavily depends on how exactly the
| compiler translated the affected pieces of code.
| 
| A malicious guest might be able to cause data corruption and data
| leaks.  Host or guest Denial of Service (DoS), and privilege
| escalation, cannot be ruled out.


Impact
=======

XSA-345: The default Qubes configuration is safe. Shadow mode for HVM
and PVH domains is disabled at build time, and domains that have PCI
devices run in HVM mode by default. Therefore, we do not consider this
XSA to affect the security of Qubes OS. However, we are including it in
this QSB anyway since it is technically possible for the user to
manually change a domain that has PCI devices from HVM to PV, which
would result in a configuration that is vulnerable to this issue. Having
anticipated the risk associated with such a manual change, we have
already implemented appropriate safeguards. In the Qubes GUI for
changing VM settings, the user would have to go to the "Advanced" tab in
order to change the setting from HVM to PV. Upon making the change, the
user would immediately be confronted with a warning in bold red text
that reads, "Using PV mode exposes more hypervisor attack surface!"
Therefore, it is nearly impossible users would switch to the vulnerable
configuration by accident.

XSA-346, XSA-347: A malicious domain with a PCI device (e.g., sys-net or
sys-usb in the default configuration) could try to exploit this
vulnerability in order to crash the host. Beyond DoS, it is unlikely
that this vulnerability could be exploited to compromise the system, but
we cannot completely rule out the possibility. Both of these issues
apply only to systems running on AMD processors.


Patching
=========

The specific packages that resolve the problems discussed in this
bulletin are as follows:

  For Qubes 4.0:
  - Xen packages, version 4.8.5-25
  For Qubes 4.1:
  - Xen packages, version 4.14.0-6
  
The packages are to be installed in dom0 via the Qube Manager or via
the qubes-dom0-update command as follows:

  For updates from the stable repository (not immediately available):
  $ sudo qubes-dom0-update

  For updates from the security-testing repository:
  $ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.


Credits
========

See the original Xen Security Advisories.


References
===========

[1] https://xenbits.xen.org/xsa/advisory-345.html
[1] https://xenbits.xen.org/xsa/advisory-346.html
[1] https://xenbits.xen.org/xsa/advisory-347.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

20 October, 2020 12:00AM

October 19, 2020

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 653

Welcome to the Ubuntu Weekly Newsletter, Issue 653 for the week of October 11 – 17, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

19 October, 2020 11:00PM by guiverc

October 18, 2020

hackergotchi for OSMC

OSMC

OSMC's October update is here with Debian Buster and Kodi 18.8

As you may have noticed, we didn't release an OSMC update for a while. After a lot of hard work, OSMC's October update is now here, featuring Debian 10, codename "Buster", and Kodi 18.8. This yields a number of improvements and is one of our most significant OSMC updates yet. It features:

  • Better performance
  • A larger number of software packages to choose from
  • More up to date software packages to choose from

We'd like to thank everyone involved with testing and developing this update.

We continue to work on our improved video stack for Vero 4K and Vero 4K + which brings HDR10+ and 3D MVC support. We also continue to work on Raspberry Pi 4 support and we will shortly make some kernel 5.x test builds available in our forums for currently supported Pi models so we can use a unified kernel code base for all models.

Shortly, we will resume development on Kodi v19 support as the release cycle accelerates further towards a stable release. This version will bring support for the Raspberry Pi 4; however, it should be noted that we expect to discontinue support for Raspberry Pi 0/1 and Vero 2 devices with the release of Kodi v19.

Wrap up

This is a large OSMC update and will take some time to install on your system; depending on the number of packages installed on your system and the quality of your peripherals (i.e. SD card).

We will maintain an upgrade path for OSMC users on June 2020 or earlier until 31st December 2021. After this point, it will not be possible to update from an older version of OSMC without a reinstallation.

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC setup. Of course, if you have updates scheduled automatically you should receive an update notification shortly.

If you enjoy OSMC, please follow us on Twitter, like us on Facebook and consider making a donation if you would like to support further development.

You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the best of OSMC.

18 October, 2020 06:03PM by Sam Nazarko

hackergotchi for SparkyLinux

SparkyLinux

Boostnote

There is a new application available for Sparkers: Boostnote

What is Boostnote?

Boost Note is an intuitive and stylish markdown editor. It’s fully open-source, and used by 1 million developers.

Features:
– Cloud Storage – Notes in a cloud storage will be stored safely and accessible from other devices.
– Multiple Platforms – Boost Note app is available in browsers, desktop app and mobile app.
– Syntax Highlight – Boost Note can highlight more than 100 programming languages.
– Math Equations – Boost Note supports math blocks. In the blocks, you can write math equations with LaTeX syntax.
– Customizable Theme – You can customize style of the app UI, its editor and rendered markdown contents.
– File System Based Storage – You can have full control of your data. Share your notes with your favorite cloud storage service.
– Extensible Markdown (Coming Soon) – You can introduce custom markdown syntax and configure how to render it.

Installation (Sparky testing amd64 only):

sudo apt update
sudo apt install boostnote

or via APTus-> Ofice-> Boostnote icon.

Boostnote

Copyright (C) 2016 – 2020 BoostIO, Inc.
License: GNU GPL v3
Git: github.com/BoostIO/Boostnote

 

18 October, 2020 01:50PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

October 17, 2020

David Tomaschik: Course Review: Reverse Engineering with Ghidra

If you’re a prior reader of the blog, you probably know that when I have the opportunity to take a training class, I like to write a review of the course. It’s often hard to find public feedback on trainings, which feels frustrating when you’re spending thousands of dollars on that course.

Last week, I took the “Reverse Engineering with Ghidra” course taught by Jeremy Blackthorne (0xJeremy) of the Boston Cybernetics Institute. It was ostensibly offered as part of the Infiltrate Conference, but 2020 being what it is, there was no conference and it was just an online training. Unfortunately for me, it was being run on East Coast time and I’m on the West Coast, so I got to enjoy some early mornings.

I won’t bury the lede here – on the whole, the course was a high-quality experience taught by an instructor who is clearly both passionate and experienced with technical instruction. I would highly recommend this course if you have little experience in reverse engineering and want to get bootstrapped on performing reversing with Ghidra. You absolutely do need to have some understanding of how programs work – memory sections, control flow, how data and code is represented in memory, etc., but you don’t need to have any meaningful RE experience. (At least, that’s my takeaway, see the course syllabus for more details.)

I would say that about 75% of the total time was spent executing labs and the other 25% was spent with lecture. The lecture time, however, had very little prepared material to read – most of it was live demonstration of the toolset, which made for a great experience when he would answer questions by showing you exactly how to get something done in Ghidra.

Like many information security courses, they provide a virtual machine image with all of the software installed and configured. Interestingly, they seem to share this image across multiple courses, so the actual exercises are downloaded by the student during the course. They provide both VirtualBox and VMWare VMs, but both are OVAs which should be importable into either virtualization platform. Because I always need to make things harder on myself, I actually used QEMU/KVM virtualization for the course, and it worked just fine as well.

The coverage of Ghidra as a tool for reversing was excellent. The majority of the time was spent on manual analysis tasks with examples in a variety of architectures. I believe we saw X86, AMD64, MIPS, ARM, and PowerPC throughout the course. Most of the reversing tasks were a sort of “crack me” style challenge, which was a fitting way to introduce the Ghidra toolkit.

We also spent some time on two separate aspects of Ghidra programming – extending Ghidra with scripts, plugins, and tools, and headless analysis of programs using the GhidraScript API. Though Ghidra is a Java program, it has both Java APIs and Jython bindings to those APIs, and all of the headless analysis exercises were done in Python (Jython).
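
For reference, a headless analysis run is launched from the shell roughly like this (the paths, project and script names here are placeholders, not the actual course materials):

$GHIDRA_INSTALL_DIR/support/analyzeHeadless ~/ghidra_projects DemoProject \
    -import ./target_binary \
    -postScript ListFunctions.py \
    -scriptPath ~/ghidra_scripts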

Jeremy did a great job of explaining the material and was very clear in his teaching style. He provided support for students who were having issues without disrupting the flow for other students. One interesting approach is encouraging students to just keep going through the labs when they finish one, rather than waiting for that lab to be introduced. This ensures that nobody is sitting idle waiting for the course to move forward, and provides students the opportunity to learn and discover the tools on their own before the in-course coverage.

One key feature of Jeremy’s teaching approach is the extensive use of Jupyter notebooks for the lab exercises. This encourages students to produce a log of their work, as you can directly embed shell commands and python scripts (along with their output) as well as Markdown that can include images or other resources. A sort of a hidden gem of his approach was also an introduction to the Flameshot screenshot tool. This tool lets you add boxes, arrows, highlights, redactions, etc., to your screenshot directly in an on-screen overlay. I hadn’t seen it before, but I think it’ll be my goto screenshot tool in the future.

Other tooling used for making this a remote course included a Zoom meeting for the main lecture and a Discord channel for class discussion. Exercises and materials were shared via a Sharepoint server. Zoom was particularly nice because Jeremy recorded his end of the call and uploaded the recordings to the Sharepoint server, so if you wanted to revisit anything, you had both the lecture notes and video. (This is important since so much of the class was done as live demo instead of slides/text.)

It’s also worth noting that Jeremy clearly adjusted the course contents and pace to match the students’ goals and pace. At the beginning, he asked each student about their background and what they hoped to get out of the course, and he would regularly ask us to privately message him with the exercise we were currently working on (the remote version of the instructor walking around the room) to get a sense of the pace. BCI clearly has more exercises than can fit in the four-day timing of the course, so Jeremy selected the ones most relevant to the students’ goals, but then provided all the materials at the end of the course so we could go forth and learn more on our own time. This was a really nice element to help get the most out of the course.

The combination of the live demo lecture style, lots of lab/hands-on exercises, and customized content and pace really worked well for me. I feel like I got a lot out of the course and am at least somewhat comfortable using Ghidra now. Overall, definitely a recommendation for those newer to reverse engineering or looking to use Ghidra for the first time.

I also recently purchased The Ghidra Book so I thought I’d make a quick comparison. The Ghidra Book looks like good reference material, but not a way to learn from first principles. If you haven’t used Ghidra at all, taking a course will be a much better way to get up to speed.

17 October, 2020 07:00AM

October 16, 2020

hackergotchi for Maemo developers

Maemo developers

Figuring out corrupt stacktraces on ARM

If you’re developing C/C++ on embedded devices, you might already have stumbled upon a corrupt stacktrace like this when trying to debug with gdb:

(gdb) bt 
#0  0xb38e32c4 in pthread_getname_np () from /home/enrique/buildroot/output5/staging/lib/libpthread.so.0
#1  0xb38e103c in __lll_timedlock_wait () from /home/enrique/buildroot/output5/staging/lib/libpthread.so.0 
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

In these cases I usually give up on gdb and try to solve my problems by adding printf()s and resorting to other tools. However, there are times when you really, really need to know what is in that cursed stack.

Subroutine calls on ARM work by setting the return address in the Link Register (LR), so the subroutine knows where to point the Program Counter (PC) register when it returns. While not jumping into subroutines, the value of the LR register is saved on the stack (to be restored later, right before the current subroutine returns to the caller) and the register can be used for other tasks (LR is a “scratch register”). This means that the functions in the backtrace are actually there, in the stack, in the form of older saved LRs, waiting for us to retrieve them.

So, the first step is to dump the memory contents of the stack, starting from the address pointed to by the Stack Pointer (SP). Let’s print the first 256 32-bit words and save them to a file from gdb:

(gdb) set logging overwrite on
(gdb) set logging file /tmp/bt.txt
(gdb) set logging on
Copying output to /tmp/bt.txt.
(gdb) x/256wa $sp
0xbe9772b0:     0x821e  0xb38e103d   0x1aef48   0xb1973df0
0xbe9772c0:      0x73d  0xb38dc51f        0x0          0x1
0xbe9772d0:   0x191d58    0x191da4   0x19f200   0xb31ae5ed
...
0xbe977560: 0xb28c6000  0xbe9776b4        0x5      0x10871 <main(int, char**)>
0xbe977570: 0xb6f93000  0xaaaaaaab 0xaf85fd4a   0xa36dbc17
0xbe977580:      0x130         0x0    0x109b9 <__libc_csu_init> 0x0
...
0xbe977690:        0x0         0x0    0x108cd <_start>  0x0
0xbe9776a0:        0x0     0x108ed <_start+32>  0x10a19 <__libc_csu_fini> 0xb6f76969  
(gdb) set logging off
Done logging to /tmp/bt.txt.

gdb can already name some of the functions (like main()), but not all of them, and certainly not the ones most interesting for our purpose. We’ll have to look those up by hand.

We first get the memory page mapping from the process (WebKit’s WebProcess in my case) looking in /proc/pid/maps. I’m retrieving it from the device (named metro) via ssh and saving it to a local file. I’m only interested in the code pages, those with executable (‘x’) permissions:

$ ssh metro 'cat /proc/$(ps axu | grep WebProcess | grep -v grep | { read _ P _ ; echo $P ; })/maps | grep " r.x. "' > /tmp/maps.txt

The file looks like this:

00010000-00011000 r-xp 00000000 103:04 2617      /usr/bin/WPEWebProcess
...
b54f2000-b6e1e000 r-xp 00000000 103:04 1963      /usr/lib/libWPEWebKit-0.1.so.2.2.1 
b6f6b000-b6f82000 r-xp 00000000 00:02 816        /lib/ld-2.24.so 
be957000-be978000 rwxp 00000000 00:00 0          [stack] 
be979000-be97a000 r-xp 00000000 00:00 0          [sigpage] 
be97b000-be97c000 r-xp 00000000 00:00 0          [vdso] 
ffff0000-ffff1000 r-xp 00000000 00:00 0          [vectors]

Now we process the backtrace to remove address markers and have one word per line:

$ cat /tmp/bt.txt | sed -e 's/^[^:]*://' -e 's/[<][^>]*[>]//g' | while read A B C D; do echo $A; echo $B; echo $C; echo $D; done | sed 's/^0x//' | while read P; do printf '%08x\n' "$((16#"$P"))"; done | sponge /tmp/bt.txt

Then merge and sort both files, so the addresses in the stack appear below their corresponding mappings:

$ cat /tmp/maps.txt /tmp/bt.txt | sort > /tmp/merged.txt

Now we process the resulting file to get each address in the stack with its corresponding mapping:

$ cat /tmp/merged.txt | while read LINE; do if [[ $LINE =~ - ]]; then MAPPING="$LINE"; else echo $LINE '-->' $MAPPING; fi; done | grep '/' | sed -E -e 's/([0-9a-f][0-9a-f]*)-([0-9a-f][0-9a-f]*)/\1 - \2/' > /tmp/mapped.txt

Like this (address in the stack, page start (or base), page end, page permissions, executable file load offset (base offset), etc.):

0001034c --> 00010000 - 00011000 r-xp 00000000 103:04 2617 /usr/bin/WPEWebProcess
...
b550bfa4 --> b54f2000 - b6e1e000 r-xp 00000000 103:04 1963 /usr/lib/libWPEWebKit-0.1.so.2.2.1 
b5937445 --> b54f2000 - b6e1e000 r-xp 00000000 103:04 1963 /usr/lib/libWPEWebKit-0.1.so.2.2.1 
b5fb0319 --> b54f2000 - b6e1e000 r-xp 00000000 103:04 1963 /usr/lib/libWPEWebKit-0.1.so.2.2.1
...

The addr2line tool can give us the exact function an address belongs to, or even the function and source code line if the code has been built with symbols. But the addresses addr2line understands are internal offsets, not absolute memory addresses. We can convert the addresses in the stack to offsets with this expression:

offset = address - page start + base offset
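As a quick sanity check, here is that arithmetic applied to the b5fb0319 entry shown above (page start b54f2000, base offset 00000000), done directly in the shell:

$ printf '%08x\n' $((0xb5fb0319 - 0xb54f2000 + 0x00000000))
00abe319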

I’m using buildroot as my cross-build environment, so I need to pick the library files from the staging directory because those are the unstripped versions. The addr2line tool is the one from the buildroot cross-compiling toolchain. Written as a script:

$ cat /tmp/mapped.txt | while read ADDR _ BASE _ END _ BASEOFFSET _ _ FILE; do OFFSET=$(printf "%08x\n" $((0x$ADDR - 0x$BASE + 0x$BASEOFFSET))); FILE=~/buildroot/output/staging/$FILE; if [[ -f $FILE ]]; then LINE=$(~/buildroot/output/host/usr/bin/arm-buildroot-linux-gnueabihf-addr2line -p -f -C -e $FILE $OFFSET); echo "$ADDR $LINE"; fi; done > /tmp/addr2line.txt

Finally, we filter out the useless [??] entries:

$ cat /tmp/bt.txt | while read DATA; do cat /tmp/addr2line.txt | grep "$DATA"; done | grep -v '[?][?]' > /tmp/fullbt.txt

What remains is something very similar to what the real backtrace should have been if everything had originally worked as it should in gdb:

b31ae5ed gst_pad_send_event_unchecked en /home/enrique/buildroot/output5/build/gstreamer1-1.10.4/gst/gstpad.c:5571 
b31a46c1 gst_debug_log en /home/enrique/buildroot/output5/build/gstreamer1-1.10.4/gst/gstinfo.c:444 
b31b7ead gst_pad_send_event en /home/enrique/buildroot/output5/build/gstreamer1-1.10.4/gst/gstpad.c:5775 
b666250d WebCore::AppendPipeline::injectProtectionEventIfPending() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp:1360 
b657b411 WTF::GRefPtr<_GstEvent>::~GRefPtr() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/DerivedSources/ForwardingHeaders/wtf/glib/GRefPtr.h:76 
b5fb0319 WebCore::HTMLMediaElement::pendingActionTimerFired() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WebCore/html/HTMLMediaElement.cpp:1179 
b61a524d WebCore::ThreadTimers::sharedTimerFiredInternal() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WebCore/platform/ThreadTimers.cpp:120 
b61a5291 WTF::Function<void ()>::CallableWrapper<WebCore::ThreadTimers::setSharedTimer(WebCore::SharedTimer*)::{lambda()#1}>::call() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/DerivedSources/ForwardingHeaders/wtf/Function.h:101 
b6c809a3 operator() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:171 
b6c80991 WTF::RunLoop::TimerBase::TimerBase(WTF::RunLoop&)::{lambda(void*)#1}::_FUN(void*) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:164 
b6c80991 WTF::RunLoop::TimerBase::TimerBase(WTF::RunLoop&)::{lambda(void*)#1}::_FUN(void*) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:164 
b2ad4223 g_main_context_dispatch en :? 
b6c80601 WTF::{lambda(_GSource*, int (*)(void*), void*)#1}::_FUN(_GSource*, int (*)(void*), void*) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:40 
b6c80991 WTF::RunLoop::TimerBase::TimerBase(WTF::RunLoop&)::{lambda(void*)#1}::_FUN(void*) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:164 
b6c80991 WTF::RunLoop::TimerBase::TimerBase(WTF::RunLoop&)::{lambda(void*)#1}::_FUN(void*) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:164 
b2adfc49 g_poll en :? 
b2ad44b7 g_main_context_iterate.isra.29 en :? 
b2ad477d g_main_loop_run en :? 
b6c80de3 WTF::RunLoop::run() en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/glib/RunLoopGLib.cpp:97 
b6c654ed WTF::RunLoop::dispatch(WTF::Function<void ()>&&) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WTF/wtf/RunLoop.cpp:128 
b5937445 int WebKit::ChildProcessMain<WebKit::WebProcess, WebKit::WebProcessMain>(int, char**) en /home/enrique/buildroot/output5/build/wpewebkit-custom/build-Release/../Source/WebKit/Shared/unix/ChildProcessMain.h:64 
b27b2978 __bss_start en :?

I hope you find this trick useful and the scripts handy in case you ever need to resort to examining the raw stack to get a meaningful backtrace.

Happy debugging!


16 October, 2020 07:07PM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for Purism PureOS

Purism PureOS

Specify Form-Factors in Your Librem 5 Apps

While more and more applications are being redesigned to take smartphones like the Librem 5 into account, PureOS still offers lots of desktop applications which are not ready to run on such devices yet.

As a user you want to know which applications are relevant to install, so the PureOS Store will by default only present mobile-ready applications, while still letting you opt in to showing all applications to take full advantage of the Librem 5’s convergent docked mode. As a user you also want to know which applications are relevant to run at a given time, so Phosh will let you run desktop-only applications only when the phone is docked.

This requires applications to provide some information on which form-factors they can handle. If you are an application developer and you want your applications to work as expected on the Librem 5, please provide the relevant information as shown below.

To make your application appear in PureOS Store, add the following lines to your AppStream metainfo:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <custom>
    <value key="Purism::form_factor">workstation</value>
    <value key="Purism::form_factor">mobile</value>
  </custom>
</component>

 

Convergent app in PureOS Store

To make your application appear in Phosh, add the following lines to your desktop entry:

[Desktop Entry]
…
# Translators: Do NOT translate or transliterate this text (these are enum types)!
X-Purism-FormFactor=Workstation;Mobile;
…
Convergent app icons in Phosh

If you don’t add these, your application will be assumed to only be compatible with the desktop mode.
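A quick way to sanity-check the syntax of both files is to run the standard validators over them; the file names below are hypothetical:

appstreamcli validate data/com.example.App.metainfo.xml
desktop-file-validate data/com.example.App.desktop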

You may be intrigued by the Purism namespace in these keys. We came up with these ad-hoc solutions to provide the form-factor compatibility information we need until better fleshed-out solutions are ready. You can read more about this issue here.

The post Specify Form-Factors in Your Librem 5 Apps appeared first on Purism.

16 October, 2020 03:02PM by Adrien Plazas

October 15, 2020

hackergotchi for Grml developers

Grml developers

Frank Terbeck: Whitespace, the Language

One of the esoteric programming languages that a fair number of people have heard of is “Whitespace”. I'm sure this has nothing at all to do with jokes like its source code being so fantastically efficient when printed out. The language was actually meant as a joke (at least that's how one of its creators puts it when he mentions the language in some of his talks). But even though people are aware of the language's existence, they rarely know how it works. Let's change that, because it's really not all that hard.

At its core, the language describes operations on a stack machine that has access to a heap. The only data type of the machine is the integer, with values of arbitrary size. Some of the machine's operations use these values to encode characters as well. And that's about it. The rest is just a peculiar way to encode operations and values using three ASCII whitespace characters (TAB, SPACE and LINEFEED).

Operations are grouped by function. These groups determine an operation's encoding prefix, which the language spec calls the “Instruction Modification Parameter”. Many of the operations have no arguments, as they create and consume their operands on the machine's stack. Some, however, do take an argument: integers and labels. Label arguments are used by flow control operations; integer arguments are used by some of the stack manipulation operations.

Arithmetic Operation: Integer Division

Their encoding is similar: both use strings of spaces and tabs that are terminated by linefeeds. In labels, the spaces and tabs have no special semantics. At least not within the language specification; more on that later. In integers, tabs encode ones and spaces encode zeroes. Something to note about such integer literals is that they do not use two's complement to encode negative numbers. Instead, they use the literal's leftmost bit as a sign bit: tab means negative number, space means positive number. For example, the sequence tab, tab, space, tab followed by a linefeed encodes −5: the leading tab marks the number as negative, and the remaining tab, space, tab is binary 101. That makes encoding arbitrarily wide integers straight-forward.

Integer Literals

When you take a look at actual Whitespace programs, you'll sometimes notice extremely long labels. There seems to be a silent convention to use chunks of eight characters to encode eight bits (with the same semantics as in number literals as to which characters encode ones and zeroes), which are turned into a positive number that is then mapped to the ASCII encoding (seven bits would have sufficed, but that's not what the programs I've seen use).

When you try to implement this language, you'll notice a couple of things your machine implementation needs: a stack, obviously, since Whitespace is a stack-manipulating language; another stack, used as a call stack, since the language has Call and Return operations; a heap, mapping addresses to integers; and finally program memory and a program counter register. You might want a jump table too, to deal with translating labels to addresses. That's not strictly required, though: you could just translate all labels to addresses before loading the program into your machine.

When I dug deep enough into the language spec to figure this out, I was intrigued enough to actually do yet another implementation of the language. It's called SpaceMan and it is available at gitlab.com/ft/spaceman as well as github.com/ft/spaceman.

I've added an org-mode conversion of the original language homepage, because that one is currently only available via archive.org. When trying some of the more complex examples you can find on the net, I was running into problems. My implementation failed to even parse them. I was verifying my code for quite some time, until I concluded that it was implementing the parser correctly. So I looked at other implementations. And it turned out most of them implemented two additional stack-manipulating operations: Copy and Slide. Apparently, they were added to a later specification of the language. I couldn't find such a spec on the net, though (not that I invested a lot of time — see the update at the end of the post for a resolution to this). However, after implementing these two, spaceman could run the most elaborate examples that I could find online, like a sudoku solver. I've added those two additional operations to the included version of the language spec.

I'm using Megaparsec for parsing purposes. And with a couple of utilities put in place, writing the parser becomes rather pleasant:

stackParser :: Parser StackOperation
stackParser = do
  (      try $ imp [ space ])
  (      try $ Push  <$> number [ space ])
    <|> (try $ operation        [ linefeed, space    ] Duplicate)
    <|> (try $ operation        [ linefeed, tabular  ] Swap)
    <|> (try $ operation        [ linefeed, linefeed ] Drop)
    <|> (try $ Copy  <$> number [ tabular,  space    ])
    <|> (try $ Slide <$> number [ tabular,  linefeed ])

When implementing the language's operations, you'll find that you're facing lots of common instructions that manipulate the virtual machine. You put those common tasks into functions, of course, and like any designer of an assembly language worth their salt, you obviously give your instructions slightly cryptic three letter names. With those, implementing the stack-manipulating operations looks like this:

eval :: WhitespaceMachine -> StackOperation -> IO WhitespaceMachine
eval m (Push n)  = return $ pci $ psh [n] m
eval m Duplicate = return $ pci $ psh h m               where h     = peek 1 m
eval m Swap      = return $ pci $ psh [b,a] $ drp 2 m   where [a,b] = peek 2 m
eval m Drop      = return $ pci $ drp 1 m
eval m (Copy i)  = return $ pci $ psh [n] m             where n = ref  i m
eval m (Slide n) = return $ pci $ psh h $ drp (n+1) m   where h = peek 1 m

Implementing the other groups of operations looks similar. I sort of like it. Each of them would basically fit onto an overhead slide.

As it turns out, editing Whitespace programs is tough work. Doing it directly is best done in a hex editor. But spaceman has a feature that makes it dump a program's syntax tree to stdout, and those program dumps are themselves executable programs. So if you'd like to edit a Whitespace program, you can dump it into a file, edit its AST and then run that program to yield the changed Whitespace program.

“Yet another whitespace implemention created” achievement unlocked, I guess.

[Update] Andrew Archibald informs me that if you spend a little more time looking for the language specification that includes Slide and Copy, you will find http://compsoc.dur.ac.uk/whitespace/index.php (via archive.org), which contains v0.3 of the Whitespace specification, the version that adds these operations.

15 October, 2020 11:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Ep 112 – A conversation with Pedro Silva from Collabora

Have you voted for Podcast Ubuntu Portugal on podes.pt yet? No? Then read no further: go to https://podes.pt/votar/, type in Podcast Ubuntu Portugal and click VOTAR. Don’t fail the arithmetic and repeat as many times as you can.

You know the drill: listen, subscribe and share!

  • https://collaboraonline.github.io/
  • https://events.opensuse.org/conferences/oSLO
  • https://www.humblebundle.com/books/learn-to-code-the-fun-way-no-starch-press-books?partner=pup
  • https://www.jonobacon.com/webinars/content/
  • https://www.twitch.tv/videos/763496146
  • https://podes.pt/votar/

Support us

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, a part of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing the content for other kinds of use; contact us for validation and authorisation.

15 October, 2020 09:45PM

Ubuntu Blog: Introducing Ubuntu support for Amazon EKS 1.18

This article originally appeared on the Amazon AWS Blog.

Amazon Elastic Kubernetes Service (EKS) is a fully automated Kubernetes cluster service on Amazon Web Services (AWS). Ubuntu is a popular and proven operating system for both virtual machine and containerized cloud computing. Canonical (the creator and primary maintainer of Ubuntu) is an Amazon partner and works with the EKS team to provide an optimized Ubuntu Amazon Machine Image (AMI) for running Kubernetes on AWS. EKS-optimized Ubuntu AMIs give you the familiarity and consistency of using Ubuntu, optimized for performance and security on EKS clusters.

Ubuntu optimized AMIs for Amazon EKS and Kubernetes versions 1.17 and 1.18 are now available. These images combine the Ubuntu OS with Canonical’s distribution of upstream Kubernetes that automates K8s deployment and operations. In addition to using a slimmed-down, minimal image these images take advantage of a custom kernel that is jointly developed with AWS.

You can find the EKS-optimized Ubuntu AMI IDs for a variety of AWS regions on the Ubuntu Cloud Images EKS site.

Running Ubuntu Managed Node Groups

Amazon EKS recently announced support for launch template and custom AMI support for EKS managed node groups. This feature lets you leverage the simplicity of managed node provisioning and lifecycle management features while allowing for any level of customization, compliance, or security requirements. Previously, using Ubuntu with EKS required provisioning and managing your own EC2 instances. Now you can use managed node groups with a custom Ubuntu AMI to provide compute for your Amazon EKS cluster.

To use Ubuntu with EKS, we will first create an EKS cluster and an Amazon EC2 launch template. EC2 launch templates enable users to create versioned, declarative instance configuration specifications that meet their specific needs. For example, the launch template can specify instance types, custom AMI ID, tags, networking, as well as other configuration options. Next, we will create a managed node group using the launch template and start our nodes.

Let’s get started! The rest of this post will take you through the process of launching a managed node group with a launch template using an Ubuntu EKS AMI.

Prerequisites

We assume that you already have a running EKS cluster. If not, you can start a new cluster following the instructions in the EKS documentation. Since the focus of this post is to start Ubuntu nodes for your cluster, you don’t need to provision any nodes for your cluster yet.

While your cluster is starting, create a node IAM role with the following IAM policies:

Create an EC2 Launch Template

The first step is to create the EC2 launch template. The launch template is very flexible and allows for a number of customizations. You can learn more about customizing a launch template for your managed node group in the EKS documentation. Because launch templates are versioned, you can update these parameters at any time and deploy those updates across your node group.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/4b3b/1-18-launch-template.png" width="720" /> </noscript>

To create a launch template using an EKS-optimized Ubuntu AMI, enter the following parameters:

  1. Amazon machine image (AMI): Enter one of the Ubuntu-optimized EKS AMI IDs. The latest AMI IDs are published at https://cloud-images.ubuntu.com/docs/aws/eks
  2. Instance type: Choose the EC2 instance type for your node group. You must choose the instance type for the node group during template creation.
  3. Key pair (login): The key pair enables you to SSH directly into the instance after it starts. This is optional, but if you want it, it must be specified as part of the launch template.
  4. Security groups: Under Network settings, choose the security group required for the cluster. By default, users should use the security group created by the EKS cluster (e.g. named “eks-cluster-sg-*”)
  5. User data: Under Advanced details, at the bottom, is a section for user data. With EKS nodes, user data is passed to the instance to connect the node to the cluster. Add the following and replace the cluster name with your EKS cluster name:
#!/bin/bash
/etc/eks/bootstrap.sh {cluster name}

Again, these are the minimum items that users need to consider. You can further customize the template based on your needs. Be aware that some settings like IAM instance profile and spot instances are not configurable. For a full list with more details, see the documentation on the launch template support page.

Launch a Node Group with Template

With our launch template defined, we can use EKS to start the EC2 instances for the cluster. Go to the EKS cluster and, under the Compute tab, click “Add Node Group”. On the new page, enable the “Use launch template” option and choose the template name created above.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/ab5e/1-18-node-group.png" width="720" /> </noscript>

Continue through the setup process and create the node group.

In the AWS Console, the status will show up as Active once the nodes are launched and connected to the cluster:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/9740/1-18-nodegroup.png" width="720" /> </noscript>

Using the AWS CLI

We just walked through using the AWS console to create your Ubuntu node group. You can do these same steps using the AWS CLI.

First, capture the launch template data as JSON. This includes the user data that will get passed to the instance, encoded in base64. Below is an example specifying only the minimum required items:

{
  "LaunchTemplateData": {
	"ImageId": "ami-018a7f43b2beb7a00",
	"InstanceType": "m5.large",
	"UserData": "IyEvYmluL2Jhc2hcbi9l....",
	"SecurityGroupIds": [
  		"sg-01b7bd9742f8feec1"
	]
   }
}
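The UserData value is simply the bootstrap snippet shown earlier, base64-encoded. One way to produce it, assuming a placeholder cluster name:

$ cat > userdata.sh <<'EOF'
#!/bin/bash
/etc/eks/bootstrap.sh my-eks-cluster
EOF
$ base64 -w0 userdata.sh

Paste the resulting string into the UserData field of the JSON above.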

Next, create the template:

$ aws ec2 create-launch-template \
       --launch-template-name ubuntu-eks-nodes \
       --version-description "Create Ubuntu EKS Template" \
       --cli-input-json file://./ubuntu_node_template.json

Finally, launch a node group using the template.

$ aws eks create-nodegroup --cluster-name eks-cluster \
       --nodegroup-name ubuntu-nodes-cli \
       --subnets subnet-024699a3e184137fc subnet-06b9aaf79435fe7d8 \
       --node-role 'arn:aws:iam::927445640099:role/eksNodeGroup' \
       --launch-template name=ubuntu-eks-nodes
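If you prefer to check the node group status from the CLI rather than the console, something along these lines should work (reusing the same cluster and node group names); it should report ACTIVE once the nodes have joined:

$ aws eks describe-nodegroup --cluster-name eks-cluster \
       --nodegroup-name ubuntu-nodes-cli \
       --query nodegroup.status --output text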

See your nodes!

Ensure you are connected to your cluster using kubectl. You can watch the nodes come online and transition to the ready state with kubectl get nodes -w

Latest Ubuntu EKS AMIs

Ubuntu supports Amazon EKS clusters with the optimized AMIs for the latest EKS Kubernetes versions. You can find the EKS-optimized Ubuntu AMI IDs for a variety of AWS regions on the Ubuntu Cloud Images EKS site.

— Josh Powers (Senior Engineer at Canonical) and Nate Taber (Principal Product Manager for Amazon EKS at Amazon)

15 October, 2020 07:30PM

Ubuntu Studio: About Website Security

UPDATE 2020-10-16: This is now fixed.

We are aware that, as of this writing, our website is not 100% https. Our website is hosted by Canonical. There is an open ticket to get everything changed-over, but these things take time. There is nothing the Ubuntu Studio Team can do to speed this along or fix it ourselves. If you explicitly type-in https:// to your web browser, you should get the secure SSL version of our site.

Our download links, merchandise stores, and donation links are unaffected by this as they are hosted elsewhere.

We thank you for your understanding.

15 October, 2020 05:21PM

hackergotchi for SparkyLinux

SparkyLinux

UKUI desktop

A new desktop environment has been implemented into APTus & APTus AppCenter: UKUI

What is UKUI?

UKUI is a desktop environment for Linux distributions and other UNIX-like operating systems. It provides a simpler and more enjoyable experience for browsing, searching and managing your computer.

Installation (Sparky testing):

sudo apt update
sudo apt install sparky-desktop-ukui

or via APTus-> Desktop-> UKUI desktop icon.

The UKUI desktop has also been implemented in the Sparky Advanced Installer, so it can be installed using the Sparky MinimalGUI/MinimalCLI iso images of the Sparky rolling edition.

UKUI on Sparky6

License: GNU GPLv2+
Git: github.com/ukui/ukui-desktop-environment

 

15 October, 2020 04:24PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours have been dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.

Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


15 October, 2020 02:07PM

Ubuntu Blog: The Windows Calculator on Linux with Uno Platform

The good folks in the Uno Platform community have ported the open-source Windows Calculator to Linux. And they’ve done it quicker than Microsoft could bring their browser to Linux. The calculator is published in the Snap Store and can be downloaded right away. If you’re on Ubuntu or you have snapd installed, just run:

snap install uno-calculator

Uno Platform announced its Linux support during UnoConf 2020. Uno Platform allows you to build native mobile, desktop, and WebAssembly apps with C# and XAML from a single code base. You can build Linux applications with Uno Platform using Visual Studio and Ubuntu on WSL. You can snap them up in the Snap Store and then run your apps on anything from the Linux desktop to a Raspberry Pi.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/b876/Screenshot-from-2020-10-07-13-09-54.png" width="720" /> </noscript>

Developing with Uno Platform for Linux

Maintaining separate code bases for multiple platforms requires a lot of time and effort. Committing to support and maintain an application on Windows, iOS, Android, macOS and Linux to make it truly cross-platform can be daunting.

With Uno Platform you can build your C# and XAML codebase to make it more portable. Thanks to its unique approach to achieving a pixel-perfect UI, Uno Platform adjusts your application to look and feel the way it should regardless of the operating system. All you need to do is maintain one codebase.

On Linux, Uno Platform projects use the Skia rendering engine to draw graphical elements. Uno Platform applications then integrate into the Ubuntu desktop with a GTK shell. And it’s all open-source, built on the Mono Project.

You can get started by visiting the Uno Platform docs for Linux and WSL. In the coming months, the Uno Platform will also publish detailed documentation on working with snaps and exactly how to build your Uno application as a snap. 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/5634/UnoLogoLargeCut-1.png" width="720" /> </noscript>

Uno Platform snaps

When you finish your application you have to start thinking about support and maintenance. The users of your apps should be able to trust that you’ll keep them up-to-date and fully patched. On Linux, snaps are the best way to do this. Snaps are a way of packaging your software that bundles your application and its dependencies into an easily updated container. 

Uno and snaps both run on x86 and ARM, so developers can target and test IoT applications on a Raspberry Pi. The Uno Calculator is one example of an app that, with a little bit of work, could make a neat little IoT device. Imagine a physical Windows calculator, running Ubuntu, on a Raspberry Pi. And as a strictly confined snap, it can easily be made into a production-ready device with Ubuntu Core: a minimal, containerised version of Ubuntu made up of snaps, giving you security and the same updatability you have in your application.

What now? 

If you’re interested in what else you can do with Ubuntu Core and snaps, or want to build an Uno project for the Raspberry Pi, there are a few places you can go. You can give the Uno Platform a try, read up on building snaps, or get involved with the Ubuntu Appliances initiative. Talk to us about your application and we can help you snap it up, publish it, and find new users.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/931e/Snapstore_UnoCal_banner-1.jpg" width="720" /> </noscript>


15 October, 2020 02:00PM

Ubuntu Podcast from the UK LoCo: S13E30 – Whistling indoors

This week we’ve been upgrading our GPUs. We discuss our experiences using IoT devices, bring you some command line love and go over all your wonderful feedback.

It’s Season 13 Episode 30 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

bpytop

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

15 October, 2020 02:00PM

Ubuntu Blog: Introducing HA MicroK8s, the ultra-reliable, minimal Kubernetes

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/a59d/microk8s.jpg" width="720" /> </noscript>

15th October 2020: Canonical today announced autonomous high availability (HA) clustering in MicroK8s, the lightweight Kubernetes. Already popular for IoT and developer workstations, MicroK8s now gains resilience for production workloads in cloud and server deployments.

High availability is enabled automatically once three or more nodes are clustered, and the data store migrates automatically between nodes to maintain quorum in the event of a failure. “The autonomous HA MicroK8s delivers a zero-ops experience that is perfect for distributed micro clouds and busy administrators”, says Alex Chalkias, Product Manager at Canonical.

Designed as a minimal conformant Kubernetes, MicroK8s installs and clusters with a single command.
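To illustrate just how little is involved, forming a three-node HA cluster looks roughly like this (the node address and join token below are placeholders; the real join command is printed by add-node):

# on the first node
sudo snap install microk8s --classic
sudo microk8s add-node
# on each additional node, run the join command printed above, e.g.
sudo snap install microk8s --classic
sudo microk8s join 10.0.0.10:25000/<token>
# once three or more nodes are clustered, check that HA is enabled
sudo microk8s status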

“A substantial part of our work at MavenCode is focused on operationalising machine learning model pipelines for production deployment at scale with Kubeflow. MicroK8s comes in handy for our Data Scientists and ML Engineers to quickly prototype, build, and deploy these pipelines. MicroK8s is very easy to set up and configure, extremely lightweight and it easily emulates our production environments for seamless migration and deployment of the pipelines. This has really helped improve our team’s overall operational efficiency,” said Charles Adetiloye, Co-founder and MLOps Platform Engineer, MavenCode. 

HA MicroK8s can withstand the loss of any node and still provide reliable services, meeting production requirements with minimal administrative costs and oversight.

Failsafe, autonomous Kubernetes datastore

The datastore which makes this possible is Dqlite, Canonical’s raft-enhanced Sqlite, embedded inside Kubernetes. Dqlite reduces the cluster memory footprint and automates datastore maintenance. MicroK8s can also be configured to use etcd, but Dqlite provides automatic, autonomous high availability.

MicroK8s automatically chooses the best nodes to provide the datastore. In case of a datastore node failure, the next best node is automatically promoted in its place. MicroK8s manages its own control plane, ensuring API services remain up and running.

Automated, reliable operations at the edge

The increased resilience of HA MicroK8s benefits Kubernetes clusters on edge nodes, such as remote branch office racks, retail points of sale, cell towers, or cars.

Distributed micro clouds make human intervention expensive, so the zero-ops nature of HA MicroK8s greatly reduces manual operations costs. Snap packaging of MicroK8s provides compressed over the air updates, transactional rollbacks and automatic security patching to reduce exposure in unmanned environments, especially for mission-critical workloads.

Kubernetes for industrial IoT

HA MicroK8s hardens industrial IoT applications, supporting cloud-native applications in the high stakes environment of operation technology (OT). Industry 4.0 workloads such as AI inference, or connectivity microservices like OPC-UA, MQTT and Kafka are a natural fit with HA MicroK8s on mission-critical control systems.

Enterprise support

Long-term support and maintenance of MicroK8s is provided by Canonical.

Want to learn more about how HA MicroK8s works? Read our docs.

Visit the MicroK8s website.

<ENDS>

About Canonical

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.


15 October, 2020 12:07PM

October 13, 2020

Kubuntu General News: Kubuntu Focus Model 2 Launched

The Kubuntu Focus team, announce the immediate availability of their second generation laptop, the Kubuntu Focus M2.

Customers experience power out of the box acclaimed by both experts and new users alike. The finely-tuned Focus virtually eliminates the need to configure the OS, applications, or updates. Kubuntu combines industry standard Ubuntu 20.04 LTS with the beautiful yet familiar KDE desktop. With dozens of Guided Solutions and unparalleled support, the shortest path to Linux success is the Focus.

The M2 is available now and is smaller, lighter, and faster than the prior generation M1. The 8c/16t i7-10875H CPU is faster by 17% single-core and 58% multi-core.

Full details are available on the Kubuntu Focus website at kfocus.org

13 October, 2020 06:31PM

hackergotchi for Purism PureOS

Purism PureOS

Hand Drawn 2D Animation with PureOS and Librem Laptops

Professional animation is not just possible but ideal with free software. This story shares what is possible running PureOS, Librem laptops, and accessories. I have been using free software for 6 years, and each year these freedom-respecting professional tools seem to improve faster than their commercial, proprietary counterparts.

Krita, as an example, released an animation feature that made it the perfect tool for making rough animations. That same year, Toonz, the software used by the legendary Studio Ghibli for clean-up and coloring purposes, was released as free software under the name OpenToonz. Nice: with just these two releases, I had everything I needed to do traditional animation again with my Librem-based digital studio. Below I will go through the workflow of making a simple hand-made 2D animation.

This particular animation was commissioned from me during the summer by a young French film production company called Baze Production. The goal of this project was to make a cute production-identity intro in the same style as Pixar or Illumination Studios, but with hand-made animation instead of 3D computer graphics. For that, I used 2 Librem laptops and 2 Wacom tablets.

Designing the character

The first step in this project was to design the character. The requirements I had been given were pretty straightforward: the character has to be a goat and it has to be cute.

Based on that, I made a few character designs on Krita and the following one was selected.

Drawing the storyboard

Animating is a lot about observing and understanding how to decompose a movement. Therefore, before diving into the animation, I watched many “cute goats” videos online. I was impressed by how popular those videos are on the internet!

After a few hours of watching cute baby goat videos, I had a rough idea of how they move, but I didn’t really know what our goat would do on those “BAZE” letters. The first requirement was that the goat enters the screen from the left, jumps on the letter “B” and sits on it. Then I put myself in the head of a goat and figured that the “E” was flatter and wider than the “B”, so it would be more comfortable to sit there. I could have made the goat appear from the right side of the screen, but I thought it would be fun to see it jump across the different letters, especially as the “A” is a tricky one to stand on top of.

As this small animation is a single shot, instead of making a proper storyboard, I ended up drawing a few key frames that would give a first impression of what the animation would be.

Doing the rough animation

Based on those few key frames, I made a 12 fps rough animation on Krita. This is a pretty long process but it is the one I prefer doing as it feels like giving life to this cute animal. I always think that there is something magical with animations.

When I do sketches or rough animations, I don’t need to be extremely precise with my lines and I prefer using a classic graphics tablet that is standing on my table and where I do not have my hand over the screen. This way, it lets me keep my eye on the entire canvas while drawing.

For making this rough animation, I used my Librem 13 with a simple Wacom Bamboo tablet.

The technique I use for animating is to draw some key frames, dispatch them across the timeline in order to get an idea of the rhythm of the overall movement, then I draw the in-between frames until I get to a smooth result.

I usually animate at 12 frames per second and if I want to do a full speed 24 fps animation, I do a second pass of in-between drawings. For this particular video, I stayed at 12 fps.

Here is what the rough animation looked like :

Clean up and coloring

I personally love the style of hand made rough animations and I would often end an animation project at this point. However, for this one, I was asked to do a clean and colored animation.

For this kind of work, I need to be a lot more precise with my lines, so I used a Wacom Cintiq tablet connected to my second Librem laptop. Both laptops’ data are constantly synchronized through the use of Unison, and they share a single mouse thanks to Barrier. This way, it is easy for me to move from one computer to another. I can even copy and paste between them. It feels just as if I had 4 screens on a single computer.
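For those curious, the synchronization side needs nothing fancy; a single Unison invocation along these lines (the host name and directory are made up for illustration) keeps a working directory mirrored between the two laptops:

unison -batch ~/animation ssh://librem-cintiq//home/francois/animation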

OpenToonz is a beautiful and powerful software. I am pretty new to it and I still have a lot to learn and to practice in order to use it correctly. For this project, I have made the line art and coloring on the same vector layer while the best practice seems to be doing the line art on a vector layer, in order to have smooth editable lines, and the color on a raster layer for it to be well applied and detailed with a brush. I will experiment more with that on a future project.

Here is a video of the final animation.

The post Hand Drawn 2D Animation with PureOS and Librem Laptops appeared first on Purism.

13 October, 2020 01:35PM by François Téchené

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu wants to code the future of Italy

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/1bf7/Capture.jpg" width="720" /> </noscript>

When is CodeMotion Rome 2020? November 24th-26th
Location: virtual Italian space
Booth: Canonical / Ubuntu

Book a meeting

In a year of challenges and changing times, the community gets closer and starts gathering in a different way. From face-to-face meet-ups, we moved everything online, and CodeMotion is no exception. Made by developers for developers, it is an event where participants meet to code the future of the world. Well, who said that this has changed?

Canonical will give a series of technical talks to learn more about Kubernetes, AI, and big data pipelines on Ubuntu.

Ubuntu and Canonical at CodeMotion Rome 2020

Lorenzo Cavassa, field engineer at Canonical, will demonstrate how to use MicroK8s and Kubeflow in order to build a reliable environment for any artificial intelligence project.

Here is a teaser of his presentation:

  • Kubeflow on Ubuntu: a fully integrated stack for Machine Learning, from data ingestion to training and serving
  • MicroK8s: Canonical’s solution to cover edge cloud use cases. It is a lightweight, opinionated, zero-ops Kubernetes for micro cloud clusters and IoT. 
  • Bringing the best of both worlds together: Kubeflow on MicroK8s, the easiest way to deploy a production-ready Machine Learning pipeline.

Join Lorenzo at CodeMotion Rome 2020.

Get your free ticket Book a meeting

Talk Ubuntu at Canonical’s  virtual booth

While at CodeMotion Rome, come and visit the booth to:

  • Meet the team behind Ubuntu and discover all the technologies that can be deployed on it.
  • Discover how Canonical supports Ubuntu, Kubernetes, open infrastructure and a number of open source applications for enterprises.
  • Ask all the questions you have about Ubuntu 20.10, the latest Ubuntu release.
  • Discover how you can join the Ubuntu and Canonical community in Italy.

We cannot wait to meet you and tell you more about Ubuntu. Want to skip the queue? Book a meeting with us!

Book a meeting


13 October, 2020 09:39AM

October 12, 2020

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 652

Welcome to the Ubuntu Weekly Newsletter, Issue 652 for the week of October 4 – 10, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

12 October, 2020 09:33PM by guiverc

hackergotchi for VyOS

VyOS

VyOS Project September 2020 Update

The biggest announcement of September was the 1.2.6 release; we even skipped the August update while we were busy with it and the automated release procedure, but we worked on other things as well. Now that 1.2.6 and its security hotfix update are done, we are back to working on the strategic goal, the 1.3.0 release, and further improving our processes.

12 October, 2020 07:59PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Grml developers

Grml developers

Frank Terbeck: A client library for XMMS2

“…done in Scheme.” was an idea I had when I started liking Scheme more and more. XMMS2 has been my preferred audio player for a long time now, and I always wanted to write a client for it that fits my frame of mind better than the available ones.

I started writing one in C and that was okay. But when you're used to using properly extensible applications on a regular basis, you kind of want your audio player to have at least some of those abilities as well. I started adding a Lua interpreter (which I didn't like), then a Perl interpreter (which I had done before in another application, and which is also not a lot of fun). So I threw it all away, setting out to write one in Scheme from scratch.

To interface with the XMMS2 server, I first tried to write a library that wrapped the C library that the XMMS2 project ships. But then I was back to writing C, which I didn't want to do. Someone on IRC in #xmms2 on freenode suggested just implementing the whole protocol natively in Scheme. I was a little intimidated by that, because the XMMS2 server supports a ton of IPC calls. But XMMS2 also ships with machine-readable definitions of all of those, which means that you can generate most of the code; once you've implemented the networking part and the serialization and de-serialization for the protocol's data types, you're pretty much set. …well, after you've implemented the code that generates your IPC code from XMMS2's definition file.

Most of XMMS2's protocol data types map very well to Scheme. There are strings, integers, floating point numbers, lists, dictionaries. All very natural in Scheme.

And then there are Collections. Collections are a way to interact with the XMMS2 server's media library. You can query the media library using collections. You can generate play lists using collections and perform a lot of operations on them like sorting them in a way you can specify yourself. For more information about collections see the Collections Concept page in the XMMS2 wiki.

Now internally, Collections are basically tree structures that may be nested arbitrarily. They carry a couple of payload data sets, but they are trees, and implementing a tree in Scheme is not all that hard either. The serialization and de-serialization are also pretty straightforward, since the protocol reuses its own data types to represent the collection data.

What is not quite so cool is the language you end up with to express these collections. Say, you want to create a collection that matches four Thrash Metal groups you can do that with XMMS2's command line client like so:

xmms2 coll create big-four artist:Slayer \
                        OR artist:Metallica \
                        OR artist:Anthrax \
                        OR artist:Megadeth

To create the same collection with my Scheme library, that would look like this:

(make-collection COLLECTION-TYPE-UNION
    '() '()
    (list (make-collection COLLECTION-TYPE-EQUALS
              '((field . "artist")
                (value . "Slayer"))
              '()
              (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
          (make-collection COLLECTION-TYPE-EQUALS
              '((field . "artist")
                (value . "Metallica"))
              '()
              (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
          (make-collection COLLECTION-TYPE-EQUALS
              '((field . "artist")
                (value . "Anthrax"))
              '()
              (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
          (make-collection COLLECTION-TYPE-EQUALS
              '((field . "artist")
                (value . "Megadeth"))
              '()
              (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))))

…and isn't that just awful? Yes, yes it is. It so is.

In order to rein in this craziness, the library ships a macro that implements a little domain specific language to express collections with. Using that, the above boils down to this:

(collection (∪ (artist = Slayer)
               (artist = Metallica)
               (artist = Anthrax)
               (artist = Megadeth)))

So much better, right? …well, unless you really don't like Unicode characters and the ‘∪’ in there gives you a constant headache… But worry not, you can also use ‘or’ in place of the union symbol if you like. Or ‘UNION’ if you really want to be clear about things.

If you know a bit of Scheme, you may wonder how to use arguments to those operators that get evaluated at run time, since evidently the Slayer in there is turned into a "Slayer" string at compile time; the same goes for the artist symbol. These transformations are done to make the language very compact for users to just type expressions. If you want an operand to be evaluated, you have to wrap it in a (| ...) expression:

(let ((key "artist") (value "Slayer"))
  (collection ((| key) = (| value))))

These expressions may be arbitrarily complex.

Finally, to traverse collection tree data structures, the library ships a function called ‘collection-fold’. It implements pre-, post- and level-order tree traversal in both left-to-right and right-to-left directions. So, if you'd like to count the number of nodes in a collection tree, you can do it like this:

(define *big-four* (collection (∪ (artist = Slayer)
                                  (artist = Metallica)
                                  (artist = Anthrax)
                                  (artist = Megadeth))))

(collection-fold + 0 *big-four*)   ;; → 9

This would evaluate to 9.

The library is still at an early stage, but it can control an XMMS2 server, as the ‘cli’ example shipped with the library will show you. There are no high-level primitives to support synchronous and asynchronous server interactions yet, and there is not a whole lot of documentation either. But the library implements all data types the protocol supports as well as all methods, signals and broadcasts the IPC document defines. The collection DSL supports all collection operators and all attributes one might want to supply.

Feel free to take a look, play around, report bugs. The library's home is with my account on github.

[Update] In a previous version of this post I mixed up the terminology for unions and intersections. The library mixed up the set operators associated with ‘and’ and ‘or’, too, and was fixed as well.

12 October, 2020 01:20AM

Frank Terbeck: SYSHH#7

In this episode of songs you should have heard, let's have some metal, shall we? In particular, the Devin Townsend Band. Some people are flooding youtube with reaction videos to Devin's performance of Kingdom, which is a good one, but let's have this one instead:

Devin Townsend Band — Accelerated Evolution — Deadhead

12 October, 2020 12:12AM

October 11, 2020

hackergotchi for SparkyLinux

SparkyLinux

MellowPlayer

There is a new application available for Sparkers: MellowPlayer

What is MellowPlayer?

MellowPlayer is a free, open source and cross-platform desktop application that runs web-based music streaming services in its own window and provides integration with your desktop (hotkeys, multimedia keys, system tray, notifications and more).
MellowPlayer is a Qt-based alternative to NuvolaPlayer, initially crafted for KaOS. It is written in C++ and QML.

Installation (Sparky stable & testing):

sudo apt update
sudo apt install mellowplayer

or via the APTus → Audio → MellowPlayer icon.

MellowPlayer

Author: Colin Duquesnoy
License: GNU GPL
Git: gitlab.com/ColinDuquesnoy/MellowPlayer


11 October, 2020 11:53AM by pavroo

October 10, 2020

hackergotchi for Tails

Tails

Tails report for September, 2020

Releases

The following changes were introduced in Tails 4.11:

  • We added a new Persistent Storage feature to save the settings from the Welcome Screen: language, keyboard, and additional settings.

  • Configure KeePassXC to use the new default location Passwords.kdbx. (#17286)

  • Update python3-trezor to 0.12.2 to add compatibility with the new Trezor Model T.

  • Disable the feature to Turn on Wi-Fi Hotspot in the Wi-Fi settings because it doesn't work in Tails. (#17887)

Code

  • Completed the last steps of an effort started 14 months ago to import our custom software into our main Git repository (#7036), in order to streamline Tails development work.

  • Ported all our Perl code to a translatable strings format supported by GNU gettext (#17928), which will then allow us to benefit from brand new i18nspector checks, avoiding some classes of Tails bugs caused by buggy translations

  • Upgraded to tor 0.4.4 (#17932)

  • Settled on a new policy for kernel updates that balances rapid hardware enablement with a lower risk of regressions (#17911)

Hot topics on our help desk

  1. A lot of people reached the helpdesk complaining about JavaScript not being disabled by Tor Browser's safest mode anymore (NoScript is doing that job now as it should; to be documented in #17963).

  2. People are still affected (and will be until the Linux kernel is upgraded to 5.8) by a regression with some Intel GPUs.

  3. Another regression affecting some Acer and Asus laptops has been reported since the release of Tails 4.11.

  4. People are still regularly complaining about Seahorse failing to import public keys.

Also, the Help Desk team felt it was important to remind Tails users that we are not responsible for cosmetic changes made upstream (e.g. the handling of JavaScript in Tor Browser, or the new Firefox URL bar, which received a lot of comments from users on our help desk as well as on the Tor Project's). Any feedback on those changes should be sent directly to the people working on them.

Outreach

On-going discussions

Translations

All programs

  • ar: 22 translated messages, 316 untranslated messages.
  • ca: 48 translated messages, 290 untranslated messages.
  • cs: 112 translated messages, 226 untranslated messages.
  • da: 338 translated messages.
  • de: 55 translated messages, 283 untranslated messages.
  • el: 58 translated messages, 280 untranslated messages.
  • es: 335 translated messages, 3 untranslated messages.
  • es_AR: 336 translated messages, 2 untranslated messages.
  • fi: 45 translated messages, 293 untranslated messages.
  • fr: 338 translated messages.
  • ga: 335 translated messages, 3 untranslated messages.
  • he: 65 translated messages, 273 untranslated messages.
  • hr: 338 translated messages.
  • hu: 338 translated messages.
  • id: 6 translated messages, 332 untranslated messages.
  • it: 162 translated messages, 176 untranslated messages.
  • km: 38 translated messages, 300 untranslated messages.
  • ko: 1 translated message, 337 untranslated messages.
  • lt: 87 translated messages, 251 untranslated messages.
  • mk: 335 translated messages, 3 untranslated messages.
  • nl: 17 translated messages, 321 untranslated messages.
  • pl: 2 translated messages, 336 untranslated messages.
  • pt_BR: 94 translated messages, 244 untranslated messages.
  • pt_PT: 44 translated messages, 294 untranslated messages.
  • ro: 135 translated messages, 203 untranslated messages.
  • sv: 134 translated messages, 204 untranslated messages.
  • tr: 337 translated messages, 1 untranslated message.
  • zh_CN: 337 translated messages, 1 untranslated message.

All the website

  • de: 28% (1899) strings translated, 14% strings fuzzy
  • es: 49% (3300) strings translated, 6% strings fuzzy
  • fa: 21% (1396) strings translated, 13% strings fuzzy
  • fr: 78% (5159) strings translated, 9% strings fuzzy
  • it: 28% (1863) strings translated, 10% strings fuzzy
  • pt: 19% (1299) strings translated, 9% strings fuzzy

Core pages of the website

  • de: 45% (972) strings translated, 23% strings fuzzy
  • es: 83% (1791) strings translated, 6% strings fuzzy
  • fa: 19% (425) strings translated, 15% strings fuzzy
  • fr: 75% (1603) strings translated, 13% strings fuzzy
  • it: 49% (1059) strings translated, 21% strings fuzzy
  • pt: 38% (821) strings translated, 15% strings fuzzy

Core pages of the website for languages not activated on the website yet

  • ar: 7% (164) strings translated, 8% strings fuzzy
  • ca: 8% (174) strings translated, 7% strings fuzzy
  • id: 6% (139) strings translated, 5% strings fuzzy
  • pl: 7% (165) strings translated, 7% strings fuzzy
  • ru: 8% (181) strings translated, 7% strings fuzzy
  • sr_Latn: 5% (114) strings translated, 4% strings fuzzy
  • tr: 7% (171) strings translated, 7% strings fuzzy
  • zh: 10% (222) strings translated, 8% strings fuzzy
  • zh_TW: 21% (461) strings translated, 13% strings fuzzy

Metrics

  • Tails has been started more than 906 004 times this month. This makes 30 200 boots a day on average.

How do we know this?

10 October, 2020 06:48AM

October 09, 2020

hackergotchi for Ubuntu

Ubuntu

Ubuntu Community Council election 2020 underway!

Voting has begun for the Ubuntu Community Council election. We will be voting to fill all seven seats for a two-year term. All Ubuntu Members are eligible to vote and should receive their ballot by email.

The candidates are as follows:

If you are an Ubuntu Member but have not already received your ballot, first check your spam folder. The email should have the following identifying headers:

From: "Walter Lapchynski (CIVS poll supervisor)" <civs@cs.cornell.edu>
Sender: civs@cs.cornell.edu
Reply-To: community-council@lists.ubuntu.com
Subject: Poll: Ubuntu Community Council election 2020
X-Mailer: CIVS

If you still cannot find your ballot, it is likely because you do not have a public email address on Launchpad. Please send an email to community-council@lists.ubuntu.com with your name and Launchpad username and we’ll get you a ballot immediately.

Please remember to rank all of the candidates in order of preference.

Voting closes Saturday October 24 2020 00:01 UTC.

09 October, 2020 08:49PM by wxl

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Homelab clusters: LXD micro cloud on Raspberry Pi

Set up and run your own homelab with the LXD Ubuntu Appliance. Spin up and manage virtual machines (VMs) and containers, run and test workloads across platforms and architectures, and rest assured of security and updates with Ubuntu Core. Follow the tutorial to get started or read on to learn why you might care.     

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/2fbd/image-4.png" width="720" /> </noscript>

Why should I care?

Running or testing workloads at home, safely, still has issues. There are lots of ways to do it: maybe you have a dedicated homelab in your basement, maybe you run workloads on your main machine, or something in between. But every method has drawbacks. People want something that doesn't take up racks of space, that is quiet enough that it doesn't have to live in a different room, and that won't cost an arm and a leg.

Previously you had to use cloud technology, fork out the cash, or let go of your homelab dreams forever. But this all changes with the LXD Ubuntu Appliance: a smaller, quieter and comparatively inexpensive way to spin up and manage all the VMs and containers you need.

What is LXD?

LXD is a next-generation system container and VM manager. It gives you a way to easily spin up and manage pre-made images for a range of Linux distributions over the network, through a REST API designed to be simple. Read more about it on the website, where you can also get started locally.

What’s so great about it? 

The LXD appliance targets the Raspberry Pi 4 and Intel NUCs and supports mixed-architecture deployments. This not only lowers the price tag of a homelab but also means you can test and build on the Raspberry Pi's ARM architecture and on x86 at the same time. And with the growing interest in ARM, this might just set you up well for future development.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/f580/assembly_4.jpg" width="720" /> </noscript>

Ubuntu Appliances are operating-system agnostic: you can set one up from Windows, macOS or Linux, and you can use the LXD appliance from any of those OSs too. It works the same as running LXD on your desktop, but it can be accessed by multiple users from any system on the network, whether it sits under the desk or behind the TV.

And as an Ubuntu Appliance, its VMs and containers are significantly easier to support and maintain. Ubuntu Appliances run on Ubuntu Core, the minimal, modular and embedded version of Ubuntu. Ubuntu Core is made up of snap packages that allow Canonical and LXD upstream to keep your OS and LXD up to date while you focus on your workloads.

What are these Ubuntu Appliance things?

Ubuntu Appliances are software-defined projects that enable users to download everything they need to turn a Raspberry Pi or an Intel NUC into a dedicated device – in this case, LXD. Following the tutorial gets you up and running and shows you how to grow to a much larger cluster in the future. You could go on to attach remote storage, use virtual networking, and integrate with MAAS or with Canonical RBAC.

This Ubuntu Appliance is part of a growing portfolio of other projects that you can download and set up today. If while reading this you thought of another project or piece of software that you would like to see become a dedicated appliance, let us know.

<noscript> <img alt="" height="246" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_398,h_246/https://ubuntu.com/wp-content/uploads/e7a3/Screenshot-from-2020-01-08-16-12-58.png" width="398" /> </noscript>

09 October, 2020 05:00PM

hackergotchi for Volumio

Volumio

MyVolumio Lifetime Deal is available NOW! Limited time offer!

What’s better than having lifetime access to your favorite audiophile music player?

After receiving requests from many of you to provide the availability to purchase the lifetime license, we decided to make it now available for a limited time!

For 4 days only, and limited to 200 licenses, MyVolumio Superstar Lifetime is on sale NOW for just €199


You can be one of the lucky ones to get lifetime access to all Superstar premium features, for less than the cost of 3 years on the yearly plan! Just pay once and forget about extra payments on MyVolumio!

With MyVolumio Lifetime Deal, you will have exclusive access to premium features such as:

  • Use MyVolumio on 6 devices
  • TIDAL and Qobuz Native Integration
  • CD Playback and Ripping
  • Alexa and Highresaudio.com Integration
  • Automatic Backup and Sync
  • Music and Artist Credits Discovery
  • Digital and Analog Inputs Playback
  • Bluetooth Audio Playback Input
  • Dedicated email support from the Volumio team

Plus, all upcoming premium features and unlimited upgrades!

MYVOLUMIO SUPERSTAR LIFETIME DEAL IS AVAILABLE NOW!

From now until Monday October 19 at 23:59 CEST… or while supplies last.

As we continue to grow as an organization and keep expanding Volumio's capabilities and performance, it will soon become necessary for us to increase the monthly and annual MyVolumio subscription prices. This offer is our response to the requests we have received from many members of our community since we released MyVolumio, giving you a chance to benefit before those changes are made.

There won't be another chance like this again, so mark your calendars and be on the lookout, since the number of licenses is limited… Once they're gone, they're gone.

YOU CAN GET YOUR LIFETIME DEAL HERE


The post MyVolumio Lifetime Deal is available NOW! Limited time offer! appeared first on Volumio.

09 October, 2020 04:36PM by Monica Ferreira

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical & Ubuntu at Open Infrastructure Summit 2020

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/4d34/open-infrastructure-summit-2020-ubuntu.png" width="720" /> </noscript>

When is Open Infrastructure Summit 2020: October 19th-23rd
Where: Everywhere! This year’s OIS is virtual.

Get your free ticket Book a meeting

This year we've probably used the word ‘unprecedented’ almost as often as we've said ‘Linux’, and yet life must go on, and certainly so does tech. That's why we were so thrilled to hear that Open Infrastructure Summit (OIS) will indeed take place this year too (virtually, of course) – especially considering how solid the interest around OpenStack is today, and Canonical's ongoing commitment to this technology and community.

As the OpenStack Summit became the Open Infrastructure Summit, the community's focus began to broaden: in addition to traditional data centre infrastructure, you will be able to learn about micro clouds for edge use cases and open operators to run your infrastructure as code.

Here’s what you can expect:

  • Keynote by Mark Shuttleworth
  • Operator day: 3 training sessions hosted at various times for Asia, EMEA, and Americas
  • Breakout Session: Putting OpenStack at the edge
  • Canonical booth and demos
  • A number of technical sessions run by Canonical experts

Canonical’s Open Infrastructure 2020 Keynote

When: Monday, October 19th

This year, Canonical founder Mark Shuttleworth will be joining the community again to set the tone for the event with a few words on where the future of open infrastructure is headed. In his presentation, Mark will address the open infrastructure use cases that characterised the past, and what demands he forecasts for the upcoming decade. Join him to hear what use cases the tech sector should be preparing for, how Canonical is building its roadmap around them, and the essential relevance of MicroStack – Canonical’s solution for OpenStack at the edge.

Get your free ticket Book a meeting

Breakout Session: MicroStack – putting OpenStack at the edge

When: Monday, October 19th, 10:45–11:15am PST


Tytus Kurek, product manager at Canonical, will introduce the community to MicroStack – the first OpenStack “on rails” solution for micro clouds. MicroStack can be installed on 40+ Linux distributions and configured by running just 2 commands. It uses OVN as its SDN and supports clustering, which makes it ideal for edge deployments. Tytus will be available for live Q&A at the presentation, so make sure to be there promptly and watch it live!

Canonical’s demos for OIS 2020: see them on the Canonical booth

As always, we'll make as many resources as possible available for attendees at our booth, including whitepapers, product collateral, videos and, of course, demos. We encourage you to pre-book a live, personalised demo and conversation with one of our engineers, or just come visit us through the booth's video-call function. If you're not feeling chatty, we've still got you covered! Here are a few on-demand demos you can look forward to:

  • OpenStack at the edge: MicroStack based micro-cloud running Kubernetes workloads
  • Ceph at the edge: MicroK8s based micro cloud with Ceph persistent storage
  • Telco Edge – Bare metal provisioning for micro cloud with LXD & MAAS & Ceph
  • Kata at the edge: MicroK8s based micro cloud with Kata container workloads
  • Charmed OpenStack and remote Nova compute

Operator day, hosted by Canonical

When: Thursday, October 22nd

Operators simplify everyday application management on Kubernetes. That’s why, with the growing interest surrounding K8s both in developer communities and the enterprise, operators are more relevant than ever.

At Open Infrastructure Summit 2020, Canonical will be hosting a live, interactive Operator Training Day, to help you learn how to use operators and how to create them in Python. We’re all in on the Open Operator Manifesto, working to create a community-driven collection of operators.

The training will repeat in three time-zone-friendly slots, for Asia, EMEA and the Americas. Each block will start with a keynote introduction by Mark Shuttleworth, followed by hands-on training by the Canonical engineering team, and then a community discussion.

N.B. We’re reserving a generous number of rare 20.04 Focal Fossa release T-shirts for our trainee friends so make sure to RSVP and book your spot on time!

Canonical’s Open Infrastructure Summit 2020 Technical Sessions

When: Please check individual sessions’ times

In addition to our formal sponsor participation throughout the event, our team members are presenting a number of sessions to share with the community everything they know about the topics they're personally most passionate about. Take a reinvigorating shot of open infra how-tos by joining their 30-minute sessions.

Get your free ticket Book a meeting

We look forward to seeing you there and can’t wait to hear what you’d like to learn from us at OIS this time!

09 October, 2020 02:32PM

hackergotchi for Qubes

Qubes

Calling all humans!

Greetings, Qubes community! We are running our first-ever survey of current, former, and future Qubes OS users. We invite you all to lend us 10–15 minutes of your time to participate. Of note: we've had reports that the survey does not work in Firefox on Android devices.

https://survey.qubes-os.org/index.php?r=survey/index&sid=791682&lang=en

The Qubes OS team loves the conversations we have with our community across forums, email lists, support tickets, and conferences. As most of us understand, though, structured data is very different, and we feel that clear information to help us make product and development decisions in the weeks and months to come is necessary to best serve our users.

This survey is also just the beginning of several weeks of user research that will consist of interviews, user testing, co-creation workshop(s) with users guided by a UX specialist, and possibly more surveys. At the end of the survey, we'll collect contact information from anyone interested in participating in that work. We also look forward to keeping our user communities updated on how all of this work is progressing.

09 October, 2020 12:00AM

October 08, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Ep 111 – Duplicar Ciclomotor

Have you voted for Podcast Ubuntu Portugal on podes.pt yet? No? Then read no further: go to https://podes.pt/votar/, type Podcast Ubuntu Portugal and click VOTAR. Don't fail the arithmetic and repeat as many times as you can.

You know the drill: listen, subscribe and share!

  • https://www.humblebundle.com/books/learn-to-code-the-fun-way-no-starch-press-books?partner=pup
  • https://www.youtube.com/watch?v=8J00rfZE17Y
  • https://www.youtube.com/watch?v=66EZetk-HQ4
  • https://nextcloud.com/blog/nextcloud-hub-20-debuts-dashboard-unifies-search-and-notifications-integrates-with-other-technologies/
  • https://nextcloud.com/blog/bridging-chat-services-in-talk/
  • https://www.open-cuts.org/
  • https://gitlab.com/open-cuts
  • https://twitter.com/NeoTheThird/status/1312684717351874561
  • https://twitter.com/NeoTheThird/status/1309816408839258112
  • https://twitter.com/NeoTheThird/status/1312684729385320448
  • https://www.jonobacon.com/webinars/content/
  • https://podes.pt/votar/

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing it to permit other kinds of use; contact us for validation and authorisation.

08 October, 2020 09:45PM