July 30, 2021

hackergotchi for Qubes

Qubes

Qubes virtual mini-summit 2021!

We are pleased to announce the third annual Qubes mini-summit, co-hosted by 3mdeb and the Qubes OS Project. (For summaries, agendas, and slides from prior years, see 2019 and 2020.) This year's event will take place across two virtual sessions on August 3 and 10. Each day, there will be four talks, interspersed with Q&A time. An abstract for each talk is provided below. The discussion section will be a live meeting on Jitsi, with details to follow. The whole event will also be streamed on 3mdeb's YouTube channel, where we will accept questions as well. We invite everyone interested to join!

Agenda for August 3

Time (UTC) Event description
18:00 – 18:15 Welcome and introduction by Piotr Król
18:15 – 19:00 “Qubes OS 4.1 highlights” by Marek Marczykowski-Górecki
19:00 – 19:45 “First Impressions Count: Onboarding Qubes Users Through an Integrated Tutorial” by deeplow
19:45 – 20:15 Break
20:15 – 21:00 “Wyng-backups: revertible local and remote known safe Qubes OS states (including dom0)” by Thierry Laurion
21:00 – 21:45 “SRTM and Secure Boot for VMs” by Piotr Król
21:45 vPub, informal afterparty

Agenda for August 10

Time (UTC) Event description
18:00 – 18:15 Welcome and introduction by Piotr Król
18:15 – 19:00 “Usability Within A Reasonably Secure, Multi-Environment System” by Nina Alter
19:00 – 19:45 “Qubes OS Native App Menu: UX Design and Implementation” by Marta Marczykowska-Górecka and Nina Alter
19:45 – 20:15 Break
20:15 – 21:00 “A brief history of USB camera support in Qubes OS” by Piotr Król
21:00 – 21:45 “How to set up BTC and XMR cold storage in Qubes OS” by Piotr Król
21:45 vPub, informal afterparty

Abstracts of the talks

“Qubes OS 4.1 highlights” by Marek Marczykowski-Górecki

The upcoming Qubes OS 4.1 release is full of exciting new features, ranging from a technology preview of the GUI domain to subtle, yet important, Qrexec improvements. In this talk, I will give a brief overview of them and demo a select few.

“First Impressions Count: Onboarding Qubes Users Through an Integrated Tutorial” by deeplow

We may all relate to having a rough time when starting to use Qubes — whether because we're coming from Windows and everything is different, or because we come from Linux and many things don't work the way we expect them to. Apart from the usual challenges of moving to a different system, Qubes has the additional one of requiring a fundamentally different way of thinking about your computer (a hypervisor mental model). Smoothing out this transition is particularly important as Qubes aims to target vulnerable populations that are less technically inclined and have less time to explore and read the documentation.

The solution proposed by deeplow is to implement an integrated onboarding tutorial. The idea is that a short tutorial (with optional extra parts) that guides the user through the essential mechanics of Qubes will make the transition simpler. That’s what deeplow’s been working on for his master’s dissertation. In this talk he’ll introduce the idea and give an update on the current progress and challenges.

“Wyng-backups: revertible local and remote known safe Qubes OS states (including dom0)” by Thierry Laurion

Wyng-backups is an incremental backup/restore tool for LVM volumes. For Qubes OS, this means even dom0 can be reverted to a known safe state, locally or remotely, applying only the changes. This talk will be a deep dive into the possibilities of wyng-backups for deploying and maintaining up-to-date, revertible states of Qubes OS base systems.

“SRTM and Secure Boot for VMs” by Piotr Król

This talk is the continuation of the Qubes OS mini-summit presentation “SRTM for Qubes OS VMs”, where the theoretical background of the Static Root of Trust was presented and discussed. In this presentation, we will practically approach SRTM and Secure Boot for VMs. We will also explore potential use cases for self-decrypting storage and signed kernels using safeboot. Finally, we will discuss how to introduce this and other security mechanisms in Qubes OS.

“Usability Within A Reasonably Secure, Multi-Environment System” by Nina Alter

Nina Alter first became aware of the exciting possibilities of Qubes OS when asked to lead UX research and design for the SecureDrop Workstation project. The SecureDrop Workstation is built atop Qubes OS, yet exists for high-risk, non-technical journalist users. Nina will share her early discovery work: the joys, the pain points, and the many opportunities she has since uncovered to extend Qubes OS' reach to some of our world's most vulnerable digital citizens.

“Qubes OS Native App Menu: UX Design and Implementation” by Marta Marczykowska-Górecka and Nina Alter

A brief overview of the new Application Menu that's being introduced in Qubes 4.2 at the latest, the process of creating it, and the design and implementation challenges. Based on design work by Nina Alter and implementation by Marta Marczykowska-Górecka.

“A brief history of USB camera support in Qubes OS” by Piotr Król

The use of complex multi-endpoint isochronous USB devices in the presence of sys-usb was not always possible in Qubes OS. Luckily, Elliot Killick created the Qubes Video Companion project, which enables users to use USB cameras on Qubes OS. The project is still in development and testing, but it is very promising and gives many USB camera users hope. This presentation will tell the story of using Qubes Video Companion with the Logitech C922.

“How to set up BTC and XMR cold storage in Qubes OS” by Piotr Król

Cold storage, also called an offline wallet, provides the highest level of security for cryptocurrency. The critical characteristic of a computing environment that can be used as cold storage is the lack of network connectivity. Good examples of cold storage are spare computers or microcontroller-based devices like hardware wallets. By leveraging the same architecture, Qubes OS domains can be used for cold storage. In such a case, one of the domains is disconnected from the network and runs a cryptocurrency wallet inside. Other domains may generate transaction files, which are sent to the cold storage VM for signing. Signed transaction files are sent back to the online environment. All operations are performed using well-specified and secure Qubes RPCs. This presentation will show how to set up and use BTC and XMR cold storage with the most popular wallets for those cryptocurrencies. We will also discuss what other measures can be taken to secure offline wallet VMs.

30 July, 2021 12:00AM

July 29, 2021

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Ep 153 – Quim Barreiros

The last episode of July is always a very special one, with a whiff of holidays and of plans far away from computers, the Internet and microphones. Find out what August will look like for Constantino and Carrondo in yet another episode of the Podcast Ubuntu Portugal.

You know the drill: listen, subscribe and share!

  • https://addons.mozilla.org/pt-PT/firefox/addon/privacy-possum/
  • https://github.com/cowlicks/privacypossum
  • https://web.archive.org/web/20210718182008/
  • https://web.archive.org/web/20210718182242/
  • https://www.youtube.com/watch?v=F4iHEbQKcik
  • https://twitter.com/castrojo/status/1410746172596125697
  • https://github.com/realgrm/snapd-in-Silverblue
  • https://github.com/realgrm
  • https://www.humblebundle.com/books/definitive-programming-cookbooks-oreilly-books?partner=PUP
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different portions depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

29 July, 2021 04:00PM

Ubuntu Podcast from the UK LoCo: S14E21 – Gladiator Suits

This week we’ve been selling more things on eBay and return to the office. We round up news from the Ubuntu community and discuss our picks from the wider tech news.

It’s Season 14 Episode 21 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

29 July, 2021 02:00PM

Sean Davis: Xubuntu 21.10 Dev Update

Xubuntu 21.10 Dev Update

I'm finally getting back to regular FOSS development time, this time focusing again on Xubuntu. Resuming team votes and getting community feedback has kicked off development on Xubuntu 21.10 "Impish Indri". Recent team votes have expanded Xubuntu's collection of apps. Read on to learn more!

New Additions

Disk Usage Analyzer (baobab)

Baobab is a utility that scans your storage devices and remote locations and reports on the disk space used by each file or directory. It's useful for tracking down what's eating your disk space so you can keep your storage from filling up.

Baobab makes it easy to visualize your disk usage and drill down to find large files.

Disks (gnome-disk-utility)

Disks is an all-in-one utility for managing storage devices. Its feature list is expansive, but some key features include:

  • Create and restore disk images (including flashing ISOs to thumb drives)
  • Partition and format disks, with encryption support
  • Inspect drive speed and health
  • Edit mounting rules (basically a user-friendly fstab editor)
GNOME Disks makes managing your storage a lot easier. Never manually edit fstab again.

Rhythmbox

Rhythmbox is a music playing and library application. It supports local files and audio streams, and includes a radio browser to find online radio stations. In Xubuntu, we're currently using the default layout (see left screenshot), but members of the community have proposed using the Alternative Toolbar plugin. Which do you prefer? Vote in the Twitter poll!

Clipman (xfce4-clipman-plugin)

Clipman is a clipboard management application and plugin. It keeps a history of text and images copied to the clipboard and allows you to paste it later. Clipboard history can also be searched with the included xfce4-clipman-history command.

Clipman remembers your clipboard history and makes it easy to paste later.

Removals

Pidgin

Pidgin is a multi-client chat program that has been included in Xubuntu since the beginning, when it was known as Gaim. In recent years, as chat services have moved to proprietary and locked-down protocols, Pidgin has become less and less useful, leading to its removal from Xubuntu. If you want to install Pidgin on your system, it can be found in GNOME Software.

If you're still using Pidgin, you can easily find and install it from GNOME Software.

Active Team Votes

Themes

We've got an active team vote to add two new themes to our seed. The themes in question are Arc and Numix Blue.

The Arc theme is a series of flat themes with transparent elements. It includes both light and dark themes with support for numerous desktop environments and applications.

Numix Blue is a blue variation of the Numix theme already included in Xubuntu. It's an unofficial fork that does have some graphical differences besides the switch to a blue accent color.

Clipman by Default

Since we added Clipman to Xubuntu, we now have a second vote on including it in the panel by default. This would automatically enable Clipman's clipboard management, which I'm personally opposed to. For my use case, I frequently copy sensitive strings to the clipboard, and I don't want them to be saved or displayed anywhere. New users would have no idea the clipboard monitor is even running.

Process Updates

Xubuntu Seed

Because our seed is also updated by Ubuntu maintainers, it is important that the code continue to be hosted on Launchpad. The @Xubuntu/xubuntu-seed code is mirrored from Launchpad every few hours. To help reduce the friction between the two systems, I made some small improvements.

Issues are now synced from Launchpad for the xubuntu-meta source package. I found a solution by the Yaru team for syncing the issues using GitHub Actions. Our syncing script runs daily, syncing both newly created issues and issues that have been closed out.

I've also added a GitHub Action to prevent pull requests on our mirror repository. Since the repository is a mirror, pull requests are not actionable on GitHub. This action automatically closes those pull requests with a comment pointing the contributor in the right direction.

What's Next?

Votes are ongoing, and there's a lot of activity in the Ubuntu space. GNOME 40 and GTK 4 are starting to land, so there's a strong likelihood that GTK 4 components will make their way into Xubuntu. This means we'll now have three versions of the toolkit, thanks to GIMP (GTK 2), Xfce (GTK 3), and GNOME 40 (GTK 4). Hopefully we'll see a stable GIMP 3.0 release soon so we can free up some space.

There are some important dates coming up in the Impish Indri Release Schedule. August 19 marks Feature Freeze, so the outstanding team votes and new feature requests should be settled soon. A couple of weeks later, we have the User Interface Freeze on September 9. Let's keep Xubuntu development rolling forward!

Post photo by Adriel Kloppenburg on Unsplash

29 July, 2021 11:06AM

July 28, 2021

Ubuntu Blog: From notebooks to pipelines with Kubeflow KALE

What is Kubeflow?

Kubeflow is the open-source machine learning toolkit on top of Kubernetes. Kubeflow translates steps in your data science workflow into Kubernetes jobs, providing the cloud-native interface for your ML libraries, frameworks, pipelines and notebooks.

Read more about Kubeflow

Notebooks in Kubeflow

Within the Kubeflow dashboard, data scientists can spin up notebook servers for their data preparation and model development.


Upon creating the server, users can select among default images, or use a custom image, including one with pre-installed KALE.

What is Kubeflow KALE?

KALE (Kubeflow Automated pipeLines Engine) extends notebooks within Kubeflow in order to allow for automated pipeline creation. 


All you have to do is annotate cells (pieces of your code) within your Jupyter notebook, and these tags will be automatically translated into your Kubeflow pipeline.

You can tag a cell as a component or block, indicating that the code within represents a step in your pipeline, and indicate the dependencies of that step (pre: <dependency-name>).
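As a purely hypothetical illustration (the step names are made up; the tag forms are the ones named above), a cell that trains a model after a data-loading step would carry tags along the lines of:

block:train_model
pre:load_data

KALE then emits one pipeline step per tagged block and wires the steps together according to the declared dependencies.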


DEMO: Using KALE with Charmed Kubeflow, ElasticSearch and Ceph

In this demo, an Elasticsearch cluster is used as the data store, a Ceph cluster as object storage for the resulting models, and a Kubeflow environment as the data science platform to develop the AI algorithm and run the pipeline.

The following diagram shows an overview of the environment:

[Diagram: overview of the demo environment with Kubeflow, Elasticsearch and Ceph]

To set up the demo environment, you can follow the steps mentioned in this repository.

After setting up the environment and making sure that everything is running, we can open an sshuttle tunnel (or an SSH forward) from our machine to the AWS instance using this command:

$ sshuttle -r ubuntu@<EC2 public ip> 10.64.140.43/24

Then we access the Kubeflow dashboard and create a new Jupyter notebook with the KALE container image. We will use the following parameters for the demo notebook:

Name: kale-demo
Custom image: localhost:32000/kale-demo
CPU: 1
MEM: 2 Gi


In a new terminal within this notebook we need to download the financial time series demo notebook:

$ wget https://raw.githubusercontent.com/aym-frikha/kale-demo/master/financial-time-series.ipynb 

We need to make some changes to the following notebook parameters so that they reflect the current environment:

  • The endpoint_url (IP of the ceph-radosgw unit)
  • The access_key (access key for the ceph user: from ./create-rados-user.sh output)
  • The secret_key (secret key for the ceph user: from ./create-rados-user.sh output)
  • The elastic_url  (IP of the elastic unit)

Now we can enable KALE for this notebook and, in the advanced settings, change the Docker image to “localhost:32000/kale-demo”.


Then we can run the Kubeflow pipeline by hitting the Compile and Run button.


This will compile and run your Kubeflow pipeline.

Finally, if you click on View, you will be redirected to the specific pipeline run just created.


Serving models on Kubernetes

Enterprise computing is moving to Kubernetes, and Kubeflow has long been talked about as the platform to solve MLOps at scale.

KFServing, the model serving project under Kubeflow, has proven to be the most mature tool when it comes to open-source model deployment tooling on K8s, with features like canary rollouts, multi-framework serverless inferencing and model explainability.

Learn more about KFServing in What is KFServing?

Learn more about MLOps

Canonical provides MLOps & Kubeflow training for enterprises alongside professional services such as security and support, custom deployments, consulting, and fully managed Kubeflow – read Ubuntu’s AI services page for details.

Simplify your Kubeflow operations

Get the latest Kubeflow packaged in Charmed Operators, providing composability, day-0 and day-2 operations for all Kubeflow applications including KFServing.


Get up and running in 5 minutes

28 July, 2021 04:23PM

July 27, 2021

hackergotchi for SparkyLinux

SparkyLinux

Systemback

There is a new application available for Sparkers: Systemback

What is Systemback?

Systemback is a simple system backup and restore application with extra features. It makes it easy to create backups of the system and the users' configuration files. In case of problems, you can easily restore a previous state of the system. There are extra features like system copying, system installation and live system creation.

Installation (Sparky 5 & 6 / amd64 & i386):
sudo apt update
sudo apt install systemback

Systemback

License: GNU GPL-3.0
Web: github.com/fconidi/Systemback_source-1.9.4

 

27 July, 2021 07:11PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu 20.10 (Groovy Gorilla) End of Life reached on July 22 2021

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of July 22, 2021, Ubuntu 20.10 is no longer supported. No more package updates will be accepted to 20.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 20.10 (Groovy Gorilla) release almost 9 months ago, on October 22, 2020, and its support period is now nearing its end. Ubuntu 20.10 will reach end of life on July 22, 2021.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 20.10.

The supported upgrade path from Ubuntu 20.10 is via Ubuntu 21.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/HirsuteUpgrades

Ubuntu 21.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Sat Jul 24 14:48:04 UTC 2021 by Brian Murray on behalf of the Ubuntu Release Team

27 July, 2021 09:32AM

July 26, 2021

The Fridge: Ubuntu Weekly Newsletter Issue 693

Welcome to the Ubuntu Weekly Newsletter, Issue 693 for the week of July 18 – 24, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

26 July, 2021 11:12PM

Ubuntu Blog: Running FIPS 140 workloads on Ubuntu

This is the first article in a two-article series on FIPS 140 and Ubuntu. This article covers running FIPS 140 applications on Ubuntu, while the second part will cover developing FIPS applications on Ubuntu.

What is FIPS and why do I need it?

Even though cryptography is used by almost every application today, its implementation is usually delegated to specialized cryptographic libraries. There are multiple reasons for that, including that implementing cryptography is not easy, and in fact it is easy to get wrong. Small mistakes, such as reusing a nonce, may render the data encrypted by an application recoverable. At the same time, the security landscape changes so fast that secure software of 10 years ago can no longer withstand attacks that exploit newly discovered vulnerabilities. For instance, algorithms like RC4 that were dominant in the early days of Internet commerce are today considered broken.

How can we be assured that these cryptographic applications and libraries implement cryptography correctly and follow best practices such as not using legacy cryptography? As cryptography is sensitive to governments around the world, there is no universally accepted answer yet. To address this problem in the U.S., NIST developed FIPS 140, a data protection standard that is our focus in this article.

FIPS 140 defines security requirements related to the design and implementation of a cryptographic module, or in software terms, an application or library implementing cryptography. The standard has multiple security levels, from 1 to 4, with level 1 applying to software implementations and levels 2 and above applying to specialized hardware alongside its software. At level 1, the standard requires the use of known, secure cryptographic algorithms and modes for data protection, and requires their logical separation from the application. It further includes a certification process that ensures the claims are tested and attested by a NIST-accredited lab.

In essence the FIPS 140 standard ensures that cryptography is implemented using well known secure designs, follows certain best practices, does not involve obscure algorithms, and that there is a due process in attestation.

What about FIPS 140-3?

The FIPS 140 standard is now transitioning from the existing FIPS 140-2 version to the new FIPS 140-3 revision. FIPS 140-3 aligns the general security requirements with ISO/IEC 19790, an international standard, and after September 2021 it is expected to be the only active cryptographic certification mechanism by NIST. For the purposes of this article, we will refer to FIPS 140-2, as it is presently the most widely used version, and we will use the shorthand FIPS to refer to the standard.

What does it mean to comply with FIPS?

There is a lot of terminology around compliance with FIPS, and it can sometimes be confusing. To keep things simple, when we talk about an application complying with FIPS, we mean that the application uses a FIPS-validated cryptographic module (e.g., a library) and uses it in accordance with the module's security policy. The security policy is a document that accompanies every FIPS-validated module and includes guidance on certain aspects of cryptography. For example, the Ubuntu 18.04 OpenSSL module guidance states that the cryptographic algorithm AES-XTS can only be used for encrypting storage devices.

Who can benefit from FIPS?

In the process of procuring software for an average organization, it is not reasonable to expect a detailed cryptographic analysis of the software. To a non-expert, it is intimidating to read about a cryptographic algorithm's features, modes of operation, and key sizes. While open source applications make the source code auditable, and in principle anyone can verify the choices within, in practice only a small set of individuals is qualified to perform such an audit. A validation program such as FIPS 140 ensures that packages that include cryptography do not make questionable choices, such as using unapproved cryptography or implementing their own.

Hence, the procuring organization is assured that the validated applications and libraries are following certain good design principles and do not include custom or unapproved cryptography.

To whom do FIPS 140-2 requirements apply?

U.S. Federal agencies and anyone deploying systems and cloud services for Federal government agency use, whether directly or through contractors and vendors, are required to use FIPS 140-2 compliant systems. FIPS 140-2 has also been adopted outside of the public sector in industries where data security is heavily regulated, such as financial services, healthcare (HIPAA), and in international certifications such as Common Criteria.

FIPS on Ubuntu

The approach Ubuntu takes in FIPS certifications

By default, Ubuntu comes with cryptographic packages based on the upstream sources and is not configured to adhere to any national standard. The Ubuntu Advantage (ua) tool makes it possible to set up the system to adhere to the FIPS standard, by a process that we describe as “enabling FIPS” (see below for more details).

Although there is a global system "switch" for FIPS, the FIPS 140 certification covers specific binary packages. In Ubuntu, we select a set of cryptographic packages from the main repository that form our cryptographic core. This set of packages is tested and validated against the FIPS 140-2 requirements for each Ubuntu LTS release. The FIPS-validated packages are installed during FIPS enablement.

fips-updates vs fips

Each FIPS 140 certificate for a package can take several months to complete and is valid for 5 years. However, as vulnerabilities happen, security-critical fixes may need to be included faster than a certification cycle allows. For that, we provide two ways to consume validated packages: a stream called 'fips', where the exact packages validated by NIST are present, and another stream called 'fips-updates', where the validated packages are present but are updated with security fixes. The 'fips-updates' stream also allows access to the packages during the validation phase, enabling early application development and testing. Both streams are revalidated periodically during the Ubuntu standard support phase.

The FIPS validated cryptographic packages

The cryptographic core of Ubuntu 20.04 consists of the following packages:

Package           Description                                                                            FIPS 140-2 certificate
linux-image-fips  The Linux Kernel Crypto API.                                                           #3928
libssl1.1         The OpenSSL cryptographic backend (also provides the cryptography needed by OpenSSH).  #3966
libgcrypt20       The libgcrypt cryptographic library.                                                   #3902
strongswan        StrongSwan, the IPSec VPN implementation.                                              Under validation

This set of packages is validated on x86-64 and IBM z15. In the past, the table included both the OpenSSH server and client, but since 20.04 they no longer include cryptography for the purposes of FIPS 140-2 certification and instead use it from the OpenSSL package.

Note that the Linux kernel is itself a validated cryptographic module in the FIPS sense, because it not only contains cryptography used by software (e.g., strongswan uses the kernel's cryptography for IPSec), but also the random number generator that feeds all user-space applications and cryptographic libraries. Because of that, the validated Linux kernel is a dependency of the rest of the validated packages.

How do I enable FIPS on Ubuntu?

You can enable FIPS on an Ubuntu LTS release, such as 18.04 or 20.04, with a subscription. As Ubuntu's mission is to bring free software to the widest audience, developers and individuals can access FIPS 140 through a free personal subscription. For developing and running workloads with FIPS in the enterprise, the validated packages are available with an Ubuntu Pro or Ubuntu Advantage subscription.

The following instructions will enable FIPS on Ubuntu LTS.

Step 1: attach your subscription

Obtain your subscription token from ubuntu.com/advantage and attach it to your system. This step is not necessary in Ubuntu Pro.

$ sudo apt update
$ sudo apt install ubuntu-advantage-tools
$ sudo ua attach <TOKEN>

Step 2: enable FIPS

The following step enables FIPS using the ‘fips-updates’ stream on Ubuntu LTS.

$ sudo ua enable fips-updates

The previous command hides a lot of complexity relating to FIPS enablement. It installs the packages from the FIPS repository, and adds a kernel command line option to enable FIPS system-wide. A reboot is necessary to complete the FIPS enablement. You can verify its status using the command below.

$ sudo ua status
SERVICE       ENTITLED  STATUS
cc-eal        yes       n/a
cis-audit     no        —
esm-infra     yes       enabled
fips          yes       n/a
fips-updates  yes       enabled
livepatch     yes       disabled
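As an extra sanity check after the reboot, you can confirm that the kernel command line option added by the enablement is present (a sketch: the option is typically fips=1, but consult the official documentation for your release):

$ grep -o 'fips=1' /proc/cmdline
fips=1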

How do I run my application with FIPS enabled?

Once FIPS is enabled, the FIPS 140-2 requirements are enforced in the core cryptographic packages. From that point, you can start applications as you would on a non-FIPS-enabled system. However, one thing to keep in mind is that applications that were not designed to comply with FIPS may issue an error, for example when using an unapproved algorithm. Additionally, we recommend that you install and set up applications after FIPS is enabled, to prevent errors caused by old configuration files that may contain non-FIPS-compliant options.
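To see the enforcement in action, a quick illustrative experiment (not official guidance) is to ask OpenSSL for a digest that is not FIPS-approved, such as MD5, which is expected to fail with FIPS enabled, while an approved algorithm such as SHA-256 succeeds:

$ openssl md5 /etc/hostname      # unapproved algorithm, expected to be rejected
$ openssl sha256 /etc/hostname   # approved algorithm, expected to work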

How can I run my FIPS-enabled application on an Ubuntu container?

To run FIPS-enabled applications in a container, you need to generate a container that has the necessary FIPS-validated dependencies for the application, in addition to running your container on a FIPS-enabled Ubuntu host. The FIPS-enabled host is necessary because there are dependencies between cryptographic packages like OpenSSL and the kernel (for example, the random number generator), and FIPS enablement in Ubuntu is signaled by the kernel (via /proc/sys/crypto/fips_enabled).
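A minimal way to check that signaling from inside a container (a sketch assuming a Docker-based setup and a stock Ubuntu image; the same idea applies to other runtimes) is to read the kernel flag, which reflects the host state:

$ docker run --rm ubuntu:20.04 cat /proc/sys/crypto/fips_enabled
1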

To generate a container with the FIPS cryptographic packages check the instructions in this article.

Summing up

Ubuntu enables running and developing applications compliant with the FIPS 140-2 data protection standard. The approach we follow provides a system-wide switch that is transparent to applications. In enterprise environments, Ubuntu Pro and Ubuntu Advantage subscriptions are available so you can develop and run applications using the Ubuntu FIPS-validated packages!

26 July, 2021 05:47PM

hackergotchi for Grml

Grml

Grml - new stable release 2021.07 available

This Grml release provides fresh software packages from Debian bullseye. As usual it also incorporates current hardware support and fixes known bugs from previous Grml releases.

More information is available in the release notes of Grml 2021.07.

Grab the latest Grml ISO(s) and spread the word!

Thanks to everyone contributing to Grml and this release, stay healthy and happy Grml-ing!

26 July, 2021 07:53AM by Michael Prokop (nospam@example.com)

July 25, 2021

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Kubuntu 20.10 Groovy Gorilla reaches end of life

Kubuntu Groovy Gorilla was released on October 22nd, 2020 with 9 months support.

As of July 22nd, 2021, 20.10 reached ‘end of life’.

No more package updates will be accepted to 20.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 20.04 Focal Fossa and 21.04 Hirsute Hippo continue to be supported.

Users of 20.10 can follow the Kubuntu 20.10 to 21.04 Upgrade [2] instructions.

Should your upgrade be delayed for some reason, and you find that the 20.10 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 20.10 Groovy Gorilla.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2021-July/000270.html
[2] – https://help.ubuntu.com/community/HirsuteUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

25 July, 2021 01:47PM

July 24, 2021

Colin King: Intel Hardware P-State (HWP) / Intel Speed Shift

Intel Hardware P-State (HWP), also known as Hardware Controlled Performance or "Speed Shift", is a feature found in more modern x86 Intel CPUs (Skylake onwards). It attempts to select the best CPU frequency and voltage to match the optimal power efficiency for the desired CPU performance. HWP is more responsive than the older operating-system-controlled methods and should therefore be more effective.

To test this theory, I exercised my Lenovo T480 laptop (i5-8350U, 8 CPU threads) with the stress-ng cpu stressor using the "double" precision math stress method, exercising 1 to 8 of the CPU threads over a 60 second test run. The average CPU temperature and average CPU frequency were measured using powerstat, and the CPU compute throughput was measured using the stress-ng bogo-ops count.

The HWP mode was set using the x86_energy_perf_policy tool (as found in the Linux source in tools/power/x86/x86_energy_perf_policy). This allows one to select one of 5 policies: "normal", "performance", "balance-performance", "balance-power" and "power", as well as enabling or disabling turbo frequencies. For the tests, turbo mode was enabled to allow the CPU to run at the higher CPU turbo frequencies.
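For readers who want to reproduce a similar run, the measurements can be approximated along these lines (a sketch: the policy-setting syntax varies between x86_energy_perf_policy versions, so check your version's usage output):

$ sudo x86_energy_perf_policy --turbo-enable 1   # allow turbo frequencies
$ sudo x86_energy_perf_policy balance-power      # select the HWP policy to test
$ stress-ng --cpu 4 --cpu-method double --timeout 60s --metrics-brief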

The "performance" policy is the least efficient option as the CPU is clocked at a high frequency even when the system is idle and is not idea for a laptop. The "power" policy will optimize for low power; on my system it set the CPU to a maximum of 400MHz which is not ideal for typical uses.

The more useful "balance-performance" option optimizes for good throughput at the cost of power consumption, whereas the "balance-power" option optimizes for good power consumption in preference to performance, so I tested these two options.

Comparison #1,  CPU throughput (bogo-ops) vs CPU frequency.

The two HWP policies are almost identical in CPU bogo-ops throughput vs CPU frequency. This is hardly surprising - the compute throughput for math-intensive operations should scale with CPU clock frequency. Note that using 5 or more CPU threads sees a reduction in compute throughput because the CPU hyper-threads are being used.

Comparison #2, CPU package temperature vs CPU threads used.

Not a big surprise: the more CPU threads being exercised, the hotter the CPU package gets. The balance-power policy shows a cooler-running CPU than the balance-performance policy. The balance-performance policy runs hot even when only one or a few threads are being used.

Comparison #3, Power consumed vs CPU threads used.

Clearly the balance-performance option consumes more power than balance-power, and this matches the CPU temperature measurements too: more power, higher temperature.

Comparison #4, Maximum CPU frequency vs CPU threads used.

With the balance-performance option, the average maximum CPU frequency drops as more CPU threads are used. Intel turbo boost allows a few CPUs to be clocked at higher frequencies; exercising more CPUs draws more power and hence generates more heat. To keep the CPU package from hitting thermal overrun, CPU frequency and voltage have to be scaled down when using more CPUs.

This is also true (but far less pronounced) for the balance-power option. As one can see, balance-performance runs the CPU at a much higher frequency, which is great for compute at the expense of power consumption and heat.

Comparison #5, Compute throughput vs power consumed.

So running with balance-performance clocks the CPU at a higher speed, and hence one gets more compute throughput per unit of time compared to the balance-power mode. That's great if your laptop is plugged into the mains and you want to get some compute-intensive tasks performed quickly. However, is this more efficient?

Comparing the amount of compute performance with the power consumed shows that the balance-power option is more efficient than balance-performance.  Basically with balance-power more compute is possible with the same amount of energy compared to balance-performance, but it will take longer to complete.

CPU frequency scaling over time

The 60 second test runs were long enough for the CPU to warm up enough to reach thermal limits, causing HWP to throttle back the voltage and CPU frequencies. The following graphs illustrate how running with the balance-performance option allows the CPU to run for several seconds at a high turbo frequency before it hits a thermal limit, after which the CPU frequency and power are adjusted to avoid thermal overrun:

 

After 8 seconds the CPU package reaches 92 degrees C and then CPU frequency scaling kicks in:

..and power consumption drops too:

..it is interesting to note that we can only run for ~9 seconds before the CPU is scaled back to around the same CPU frequency that the balance-power option allows.

Conclusion

Running with HWP balance-power option is a good default choice for maximizing compute while minimizing power consumption for a modern Intel based laptop.  If one wants to crank up the performance at the expense of battery life, then the balance-performance option is most useful.

Using the balance-performance option when a laptop is plugged into the mains (e.g. via a base-station) may seem like a good idea to get peak compute performance. Note that this may not be useful in the long term, as the CPU frequency may drop back to reduce thermal overrun. However, for bursty, infrequent, demanding CPU uses this may be a good choice. I personally refrain from using it, as it makes my CPU run rather hot and it's less efficient, so it's not ideal for reducing my carbon footprint.

Laptop manufacturers normally set the default HWP option as "balance-power", but this may be changed in the BIOS settings (look for power balance, HWP or Speed Shift options) or changed with x86_energy_perf_policy tool (found in the linux-tools-generic package in Ubuntu).
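On Ubuntu, installing and inspecting the tool looks roughly like this (recent versions print the current settings when run with no arguments; older versions use different flags):

$ sudo apt install linux-tools-generic
$ sudo x86_energy_perf_policy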

24 July, 2021 10:59PM by Colin Ian King (noreply@blogger.com)

July 23, 2021

hackergotchi for Grml developers

Grml developers

Evgeni Golov: It's not *always* DNS

Two weeks ago, I had the pleasure of playing with Foreman's Kerberos integration and ironing out a few long-standing kinks.

It all started with a user reminding us that Kerberos authentication is broken when Foreman is deployed on CentOS 8, as mod_auth_kerb is no longer available. Given mod_auth_kerb hasn't seen a release since 2013, this is quite understandable. Thankfully, there is a replacement available, mod_auth_gssapi. Even better, it's available in CentOS 7 and 8, and in Debian and Ubuntu too!

So I quickly whipped up a PR to completely replace mod_auth_kerb with mod_auth_gssapi in our installer and successfully tested that it still works in CentOS 7 (even if upgrading from a mod_auth_kerb installation) and CentOS 8.

Yay, the issue at hand seemed fixed. But just writing a post about that would've been boring, huh?

Well, and then I dared to test the same on Debian…

Turns out, our installer was using the wrong path to the Apache configuration and the wrong username Apache runs under while trying to set up Kerberos, so it could never have worked. Luckily, Ewoud and I were able to fix that too. And yet the installer was still unable to fetch the keytab from my FreeIPA server 😿

Let's dig deeper! To fetch the keytab, the installer does roughly this:

# kinit -k
# ipa-getkeytab -k http.keytab -p HTTP/foreman.example.com

And if you execute that by hand to see the actual error, you see:

# kinit -k
kinit: Cannot determine realm for host (principal host/foreman@)

Well, yeah, the principal looks kinda weird (no realm) and the interwebs say for "kinit: Cannot determine realm for host":

  • Kerberos cannot determine the realm name for the host. (Well, duh, that's what it said?!)
  • Make sure that there is a default realm name, or that the domain name mappings are set up in the Kerberos configuration file (krb5.conf)

And guess what, all of these are perfectly set by ipa-client-install when joining the realm…

But there must be something, right? Looking at the principal in the error, it's missing both the domain of the host and the realm. I was pretty sure that my DNS and config were right, but what about gethostname(2)?

# hostname
foreman

Bingo! Let's see what happens if we force that to be an FQDN?

# hostname foreman.example.com
# kinit -k

NO ERRORS! NICE!

We're doing science here, right? And I still have the CentOS 8 box I had for the previous round of tests. What happens if we set that to have a shortname? Nothing. It keeps working fine. And what about CentOS 7? VMs are cheap. Well, that breaks like on Debian, if we force the hostname to be short. Interesting.

Is it a version difference between the systems?

  • Debian 10 has krb5 1.17-3+deb10u1
  • CentOS 7 has krb5 1.15.1-50.el7
  • CentOS 8 has krb5 1.18.2-8.el8

So, something changed in 1.18?

Looking at the krb5 1.18 changelog the following entry jumps at one: Expand single-component hostnames in host-based principal names when DNS canonicalization is not used, adding the system's first DNS search path as a suffix.

Given Debian 11 has krb5 1.18.3-5 (well, testing has, so let's pretend bullseye will too), we can retry the experiment there, and it shows that it works with both short and full hostnames. So yeah, it seems krb5 "does the right thing" since 1.18, and before that gethostname(2) must return an FQDN.
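In other words, on krb5 older than 1.18 it is worth checking what the system will hand to kinit before blaming the Kerberos configuration; hostname(1) makes that easy:

$ hostname      # what gethostname(2) returns
$ hostname -f   # the FQDN after resolution; if the first command prints a short name, pre-1.18 kinit -k will fail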

I've documented that for our users and can now sleep a bit better. At least, it wasn't DNS, right?!

Btw, freeipa won't be in bullseye, which makes me a bit sad, as that means that Foreman won't be able to automatically join FreeIPA realms if deployed on Debian 11.

23 July, 2021 06:36PM

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: Late July Update

In no particular order:

  • The podcast feeds repository has seen some updates. Yes, I am a bit odd in primarily consuming podcasts via the podcasts app on my Apple TV device. Life changes and right now my circumstances are quite different than they used to be. Getting AntennaPod back on my mobile device is not a high priority considering how flaky the phone continues to be.
  • A test printing was done using the work-in-progress code from the auto-newspaper repository. Moving to just printing a single leaf of “Legal” sized paper would not be all that much. Considering the growing niche that needs filling that may be enough to start with. There is a running discussion in the repo as to how this has all been developing over time.
  • In the alternative to the “underground newspaper” notion there would be the thought of going back to podcasting. I’m not too keen on that thought especially as this would be focusing on video production these days. At the least I would need to launder segments of Profile America via some prestidigitation with ffmpeg into being video files as initial building blocks. At the very least I did start scripting out how to grab real fast from the relevant FTP site the useful data to read out a weather report. Pulling some material from the Defense Visual Information Distribution Service as to the Ohio National Guard would provide public domain content as to state-level news through VNRs. It sounds like I almost have this planned out except for hosting, actually.
  • My literal stack of Raspberry Pi units is back up and running. That puts me at five operational in the house at the moment. They aren’t clustered at this time. I probably should do that though I would need to decide on a mission profile.
  • A review of the Internal Revenue Service characteristics list for what makes a “church” as seen on their website is being done on my part. Why? It is a “facts and circumstances” test that a certain group in my country’s civil society is meeting. That that is happening has disturbing implications that I am trying to better understand.
  • People are baffled by the change in name of the Cleveland baseball team to being the “Guardians”. Ideastream posted a story about the statues the team is being named in honor of. The Encyclopedia of Cleveland History also discusses the art deco designs. Frankly they are awesome bits of art from the Great Depression that remain with us today and feature in many neat sunrise and sunset photographs.

Tags: Life

23 July, 2021 04:09PM

hackergotchi for SparkyLinux

SparkyLinux

Viper Browser

There is a new application available for Sparkers: Viper Browser

What is Viper Browser?

A powerful yet lightweight web browser built with the Qt framework

Features:
– All development is done with a focus on privacy, minimalism, and customization ability
– Bookmark management
– Built-in ad blocker, compatible with AdBlock Plus and uBlock Origin filters
– Cookie viewer, editor, and support for cookie filters (QtWebEngine 5.11+ only)
– Compatible with Pepper Plugin API
– Custom user agent support
– Fast and lightweight
– Fullscreen support
– Granular control over browser settings and web permissions
– Gives the user control over their data, with no invasions of privacy like other browsers are known to do
– GreaseMonkey-style UserScript support
– Multiple options for home page- any URL, blank page, or a card layout page with favorite and most visited websites
– PDF.js embedded into the browser
– Save and restore browsing sessions, local tab history, pinned tabs
– Secure AutoFill manager (disabled by default)
– Tab drag-and-drop support for HTML links, local files, other browser window tabs, etc
– Tab hibernation / wake up support
– Traditional browser UI design instead of WebUI and chromium-based interfaces

Installation (Sparky 6 amd64 & i386):
sudo apt update
sudo apt install viper-browser

Viper Browser

License: GNU GPL-3.0
Web: github.com/LeFroid/Viper-Browser

 

23 July, 2021 12:59PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The State of Robotics – June 2021


June was packed with interesting news. So this monthly blog won’t disappoint our readers.

If you haven’t seen it already, we are running a content survey. It will take you 7 minutes to complete and it will help us create the content that you want to read. So if you haven’t done it yet, here is the link: https://forms.gle/DqPg1zd7gCiad3GF8

Thanks again for your amazing support, and let’s start. 

ROS support for enterprise 

In 2017, the personal credit-checking firm Equifax was breached. Information regarding the credit accounts opened, financial history and credit scores of around 150 million people was exposed. Why? A single customer complaint portal wasn't properly patched. Learn more about the importance of security maintenance for robotics in our new whitepaper. Just published!

No more Pepper 

This is huge. According to Reuters, SoftBank stopped manufacturing Pepper robots at some point last year due to low demand. By September this year, it will cut about half of the 330 positions at SoftBank Robotics Europe in France. This follows poor long-term sales over the last 3 years, during which, according to JDN, SoftBank Robotics Europe has lost over 100 million euros.

Whether you like it or not, Pepper left its mark on the robotics world. The first time you take that robot out of its box is, well, just an experience. Seeing that robot move and interact with people for the first time showed us what could be. Have you seen another humanoid robot on the market with the same adoption as Pepper? Or even the same autonomy?

Despite poor functionality, low reliability and high unpredictability, Pepper was still capable of working in crowded sites. Stores, banks, offices, conferences: it was there. You cannot say the same of others. And with that exposure, Pepper helped people understand the opportunities of service robots. It also played a prominent role in today's human-robot interaction research, where several trials used these robots in pursuit of developing better robots. It was also used in AI research, optimising navigation, task completion and learning. So despite all its limitations, and all the critiques of this robot, Pepper has done more for the robotics community than many other robots.

So yes, this is huge. From Aldebaran to SoftBank, it has been a long journey for Pepper. So if you are building the next social robot, take a look at this one. Learn from its mistakes and its achievements.

Open-source robotics; AI and bias

Ethics in AI is a topic we cannot overlook. You might think this is just a trend, something that the community likes reading since it is controversial. But it is not. 

A researcher at Stanford University accessed GPT-3, a natural language model developed by the California-based lab OpenAI. GPT-3 lets one write text as a prompt and then see how the model expands on or finishes the thought. The researcher tried a variation of the joke "two [people] walk into…" by prompting "two Muslims walk into". Unfortunately, as you might imagine, the results exposed a cold reality. Sixty-six out of 100 times, the AI responded with words suggesting violence or terrorism.

The results showed disgraceful, stereotypical and violent completions, from “Two Muslims walked into a…gay bar in Seattle and started shooting at will, killing five people.” to “…a synagogue with axes and a bomb.” and even “…a Texas cartoon contest and opened fire.”

The same violent answers were only present around 15 percent of the time with other religious groups—Christians, Sikhs, Buddhists and so forth. Atheists averaged 3 percent.


The graph shows how often the GPT-3 AI language model completed a prompt with words suggesting violence (Source)  

Obviously, this is not the model's fault. GPT-3 only acts according to the data it is given. It's the fault of those behind the training. No, they are not racist; they just forgot about the dangers of data scraping. The only way a system like GPT-3 can give human-like answers is if we give it data about ourselves. OpenAI supplied GPT-3 with 570GB of text scraped from the internet, including random insults posted on Reddit and much more.

Ed Felten, a computer scientist at Princeton who coordinated AI policy in the Obama administration, said: "The development and use of AI reflect the best and worst of our society in a lot of ways". We need to verify the origin of data and test for bias in our models. It is not easy, but it is not impossible either. It is our responsibility to take action and guarantee that our work follows a process to reduce these biases.

Expanding the field of miniature robots 

A team of scientists at Nanyang Technological University, Singapore (NTU Singapore) has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations. 

The researchers created the miniature robots by embedding magnetic microparticles into biocompatible polymers. These are non-toxic materials that are harmless to humans. The robots can execute desired functionalities when magnetic fields are applied, moving with six degrees of freedom (DoF). 

While there are other examples of miniature robots, this one can rotate 43 times faster than them around the critical sixth DoF when its orientation is precisely controlled. These robots can also be made with ‘soft’ materials and thus replicate important mechanical qualities. For instance, one type can replicate the movement of a jellyfish; others have a gripping ability to precisely pick and place miniature objects.

This could pave the way to future applications in biomedicine and manufacturing. Measuring about the size of a grain of rice, the miniature robots could reach confined and enclosed spaces currently inaccessible to existing robots, making them particularly useful in both fields.


Outro

We want to keep improving our content! So please help us by completing this short survey:
https://forms.gle/DqPg1zd7gCiad3GF8

This is a blog for the community, and we would like to keep featuring your work. Send a summary to robotics.community@canonical.com, and we’ll be in touch. Thanks for reading.

23 July, 2021 10:54AM

July 22, 2021

Podcast Ubuntu Portugal: Ep 152 – 5 Violinos

An unexpected session, with 3 unexpected guests, in which we returned to the Audacity topic and then gave due attention to the Steam Deck, the gadget of the moment. That is how another episode of the Podcast Ubuntu Portugal came to be.

You know the drill: listen, subscribe and share!

  • https://www.steamdeck.com/en/
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

22 July, 2021 09:45PM

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 File Transfer

Need to access your phone files? Use a USB flash drive, a USB cable, or transfer over WiFi.

The post Librem 5 File Transfer appeared first on Purism.

22 July, 2021 03:59PM by David Hamner

hackergotchi for VyOS

VyOS

VyOS Project May/June 2021 Update

Hello everyone,

it's time for the progress update for the months of May and June.

We've been so busy working on the code that we forgot to post a progress update in June. Time to fix that! In short: the 1.3 release is on track to become the new LTS in the late summer or early autumn, 1.2.x maintenance continues, and the 1.4 release is gaining cool new features that we can later backport to 1.3.

22 July, 2021 03:48PM by Erkin Batu Altunbas (e.altunbas@vyos.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Design and Web team summary – 16 July 2021

The web team at Canonical run two-week iterations building and maintaining all of Canonical’s websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

Web

The Web team develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

mir-server.io rebuild with a brand new tutorial section

We rebuilt the mir-server.io homepage and added a new tutorials page. This update includes new sections for Ubuntu Frame and egmde, explains the main features of Mir, and adds a driver compatibility table.


A tutorials page, modelled on the MicroK8s one, has been added, and we improved the styling based on the MicroK8s website.


Private cloud pricing calculator rebuild

The web team has been working on rebuilding the private cloud pricing calculator. The new calculator makes it easier for you to estimate the hourly cost per instance and compare your savings against public clouds depending on different factors. It allows you to choose between fully managed and supported options, and to select your required number of instances, vCPUs, ephemeral storage and more. A new contact form has been added so you can fill out your requirements and receive an email with a breakdown of the costs specific to your case.


Brand

The Brand team develop our design strategy and create the look and feel for the company across many touch-points, from web, documents, exhibitions, logos and video.

Livepatch diagrams

We worked with the Project Management team to design a couple of diagrams that explain Livepatch at a glance and give an overview of how it works.


ROS ESM diagram

A flow diagram showing the steps of creating a security patch using ROS for ESM.


Marketing documents

A number of documents were created in collaboration with the Marketing and Project Management teams for their use.

Microstack video


We finished a short video to go on the microstack.run page to explain the benefits of MicroStack and how it can be used to leverage the core services of OpenStack to build a general-purpose cloud.

Apps

The Apps team develops the UI for the MAAS project and the JAAS dashboard for the Juju project.

MAAS – Machine storage and network cloning UI

Summing up our UX work on the MAAS cloning UI, we have simplified a few areas of the design. Once a user selects multiple machines, the dropdown will activate an action called `clone from`. This allows the user to clone these machines from a source machine.


To select a source machine, a list of the 50 most recently modified machines will show up. You may select any machine in any state as the source, but destination machines need to be in failed testing, ready, or allocated state.


The list above allows you to search for the source machine by its hostname, system id, or tags, as well as select whether you want to clone the network configuration or the storage configuration. Cloning works with a homogeneous hardware environment.


Once cloning is complete, this panel shows the machines that were successfully cloned from the source and a list of machines that failed. You may collapse the panel without closing it to double-check the results.

At the current stage, our goal is to expose this functionality in the UI so people can clone machines. We hope to build more robust functionality on top of this in the near future.

Vanilla

The Vanilla team designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

Guest developers

This iteration was the first in which we invited developers from other squads to join us and work on Vanilla together. Huw and Caleb did a great job on updates to the Notification component and helped us clean up the React components backlog by migrating the remaining code to TypeScript.

Notification component updates

The main part of this iteration’s work was updating our Notification component with a new style and additional options.


Notifications got refreshed styling with new borderless and inline variants. We also added elements for notification timestamps and actions.

TypeScript migration

Thanks to the efforts of our guest devs this iteration, all of our React components have been migrated to TypeScript.


Marketplace

The Marketplace team works closely with the Store team to develop and maintain the Snap Store site and the Charmhub site.

Vanilla – Key/value pair pattern

It is a design pattern for listing a number of values with their corresponding keys.

It’s a component required when listing properties for a complex element in a collection of similarly grouped elements.

Available in “rack”


And “rack column”


All the specs are available on Discourse.

Visual – form labels, mandatory field and error messages

This work aimed to improve accessibility and resolve inconsistencies around field validation. The way we indicate mandatory fields and the way we deal with erroneous submissions can be made more usable for users with or without assistive technology (AT).


Charmhub resources

Added a tab to the charm details page which shows the resources available to that charm.

Upload metadata

Completed the work around being able to upload metadata when you release a snap.

Reviewer experience discovery


While migrating the user experience flows from the Dashboard to snapcraft.io, we focus here on the reviewer experience.

  • We want to keep the same functionalities,
  • while taking the opportunity to improve the reviewer’s journey.

We have done 6 interviews, and stored and summarized all the findings using the tool Dovetail.

Team posts:

With ♥ from Canonical web team.

22 July, 2021 02:33PM

Ubuntu Podcast from the UK LoCo: S14E20 – Claps Select Slate

This week we’ve been playing DOOM Eternal and buying aptX Low Latency earbuds. We discuss the Steam Deck from Valve, bring you a command line love and go over all your feedback.

It’s Season 14 Episode 20 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo apt install duc
duc index ~
duc ls ~     # plain text output
duc ui ~     # curses interface (like ncdu)
duc graph ~  # Creates a png sunburst graph
duc gui ~    # Interactive X sunburst graph

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

22 July, 2021 02:00PM

Ubuntu Blog: Ubuntu in the wild – 22nd of July

The Ubuntu in the wild blog post rounds up the latest highlights about Ubuntu and Canonical from around the world on a bi-weekly basis. It is a summary of all the things that make us proud to be part of this journey. What do you think of it?

Ubuntu in the wild

Improving data center efficiency with digital twins

Being able to create a digital copy of a system easily allows engineers and designers to improve and optimise said system. The advances in AI/ML tooling and capacity are now making it possible to use digital twins to model data centers and optimize facilities’ design, as well as to detect potential operational issues.

Read more on that here! 

Kubernetes: Portability vs Standardization

What if the ultimate goal of Kubernetes adoption was not to enable portability between clouds, but standardization? This article explores what people are really expecting from Kubernetes and cloud native technologies in general.

Read more on that here! 

The future of Fintech

Curious to know what’s next for Fintech? The Fintech Five by Five report is exploring the tech trends that are shaping the future of the fintech sector. Read on for insights on edge computing, artificial intelligence, low-code, and more.

Read more on that here! 

Surfacing Kubernetes challenges

The folks at the Channel Happy Hour analyse and report on the latest IT news. Following Canonical’s Kubernetes and cloud native operations report, the latest episode covers some of the challenges that companies are still facing today with Kubernetes.

Listen to the episode here!

Bonus: Wallpaper competition for Impish Indri

There is still one month left to participate in the wallpaper competition for Impish Indri – Ubuntu 21.10. We are still waiting to see some Macaroni art submissions, so don’t hesitate to submit yours!

Read more on that here!

22 July, 2021 10:09AM

July 21, 2021

Ubuntu Blog: How to test the latest Kubernetes 1.22 release candidate with MicroK8s

Today, the Kubernetes community made the 1.22 release candidate available, a few weeks ahead of general availability, planned for August 4th. We invite developers, platform engineers and cloud tech enthusiasts to experiment with the new features and report back findings and bugs. MicroK8s is the easiest way to get up and running with the latest version of K8s for testing and experimentation.

MicroK8s is a lightweight, pure-upstream Kubernetes distribution with enterprise features such as self-healing high availability, Prometheus, Grafana and Istio baked in as add-ons. As MicroK8s tracks all upstream releases, you can use the snap channels to select, for example, between the latest/stable release for production and a beta or release candidate for testing.

Test the latest Kubernetes on your machine

Getting the latest Kubernetes with MicroK8s requires a single command:

sudo snap install microk8s --channel=1.22/candidate --classic

Alternatively, go to https://snapcraft.io/microk8s  and select 1.22/candidate.
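
Once installed, a quick sanity check confirms the cluster is up and running the expected version. This is just a suggestion using standard MicroK8s commands, not part of the release announcement:

sudo microk8s status --wait-ready   # blocks until all core services report ready
sudo microk8s kubectl get nodes     # the node should be listed as Ready, running a v1.22 kubelet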


MicroK8s runs on Ubuntu and all major Linux distributions, Windows and macOS. It supports x86 and ARM, and it is resource optimised to run on devices such as the Raspberry Pi or the NVIDIA Jetson.

If you have feedback, good or bad, please let us know on Discourse or Slack (#microk8s). And if you have any bugs or technical issues to report, you can file them over on GitHub.

For more on MicroK8s, you can read the docs, follow the tutorials or dive into the use cases.

21 July, 2021 05:47PM

hackergotchi for Purism PureOS

Purism PureOS

Defending Against Spyware Like Pegasus

This has been a busy week for security news, but perhaps the most significant security and privacy story to break this week (if not this year), is about how NSO Group’s Pegasus spyware has been used by a number of governments to infect and spy on journalists and activists and even heads of state by […]

The post Defending Against Spyware Like Pegasus appeared first on Purism.

21 July, 2021 01:57PM by Kyle Rankin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Why Ubuntu Certification Matters for AIoT


DFI is the world’s first industrial computer manufacturer to join the Ubuntu IoT Hardware Certification Partner Program. Three DFI products have been certified recently, which means you get an out-of-the-box, secure experience and a faster time to market with DFI products and Ubuntu.

Ubuntu is preloaded on the certified devices, and Canonical provides 10 years of support. Furthermore, with the SMART START offering, enterprises can go to market faster and reduce costs.

In this webinar, you’ll hear from Taiten Peng (IoT Solution Architect at Canonical) and David Chen (FAE Leader at DFI) on:
1. Why developers need to use certified hardware
2. Which three DFI products have joined the Ubuntu certification family recently
3. How to leverage preloaded Ubuntu for IoT commercial project development

Register for the webinar

For more information, please visit Ubuntu Certified Hardware.

21 July, 2021 01:10PM

July 20, 2021

hackergotchi for Purism PureOS

Purism PureOS

How Calls became a part of GNOME

Since Purism's philosophy and GNOME's principles are closely aligned, it is not far-fetched to call them a match made in heaven.

The post How Calls became a part of GNOME appeared first on Purism.

20 July, 2021 05:30PM by Evangelos Ribeiro Tzaras

July 19, 2021

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 692

Welcome to the Ubuntu Weekly Newsletter, Issue 692 for the week of July 11 – 17, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

19 July, 2021 11:06PM by guiverc

July 16, 2021

hackergotchi for SparkyLinux

SparkyLinux

Enve

There is a new application available for Sparkers: Enve

What is Enve?

Enve is flexible, user-expandable 2D animation software for Linux and Windows. You can use it to create vector animations, raster animations, and even work with sound and video files. Enve was created with flexibility and expandability in mind.

Installation (Sparky 5 & 6 amd64):
sudo apt update
sudo apt install enve

Enve

License: GNU GPL-3.0
Web: github.com/MaurycyLiebner/enve

 

16 July, 2021 12:52PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Alan Pope: Team Building via Chess

One of the things I really love about working at Influx Data is the strong social focus for employees. We’re all remote workers, and the teams have strategies to enable us to connect better. One of those ways is via Chess Tournaments! I haven’t played chess for 20 years or more, and was never really any good at it. I know the basic moves, and can sustain a game, but I’m not a chess strategy guru, and don’t know all the best plays.

16 July, 2021 11:00AM

July 15, 2021

Podcast Ubuntu Portugal: Ep 151 – Ah! A audácia!

Constantino has been scouting picnic spots, Carrondo has finally finished his move, and meanwhile Nextcloud celebrated its fifth anniversary by releasing version 22. Audacity gave, and keeps giving, people plenty to talk about…

You know the drill: listen, subscribe and share!

  • https://addons.mozilla.org/pt-PT/firefox/addon/cookie-quick-manager/
  • https://addons.mozilla.org/pt-PT/firefox/addon/profile-switcher/
  • https://addons.mozilla.org/pt-PT/firefox/addon/darkreader/
  • https://www.youtube.com/watch?v=Y0VZ7t8JGZE
  • https://github.com/audacity/audacity/discussions/889
  • https://github.com/audacity/audacity/discussions/889
  • https://web.archive.org/web/20210706125644/https://github.com/audacity/audacity/discussions/889
  • https://web.archive.org/web/20210706130028/https://github.com/audacity/audacity/discussions/1225
  • https://web.archive.org/web/20210706150802/https://www.audacityteam.org/about/desktop-privacy-notice/
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of that for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

15 July, 2021 09:45PM

Ubuntu Podcast from the UK LoCo: S14E19 – Twin Pages Slug

This week we’ve been playing chess and trying to play DOOM Eternal. We round up the news and goings on from the Ubuntu community and our favourite picks from the wider tech news.

It’s Season 14 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

15 July, 2021 07:00PM

July 14, 2021

hackergotchi for Purism PureOS

Purism PureOS

Proud to Be Top Contributor to GTK4

With the immense support from our customers, backers, and users, Purism was able to commit tremendous development effort upstream. According to gtk.org, out of all employers who have committed to advancing GTK 4.0, Purism is ranked #5 by commits and #5 by changes, behind only RedHat, GNOME Foundation, GNOME, and all unattributed commits; while being […]

The post Proud to Be Top Contributor to GTK4 appeared first on Purism.

14 July, 2021 03:44PM by Todd Weaver

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Weekly News #218


The second edition of patches for the Linux kernel with support for Rust:
https://lkml.org/lkml/2021/7/4/171

Release of Virtuozzo Linux 8.4:
https://www.virtuozzo.com/blog-review/details/blog/view/virtuozzo-vzlinux-84-now-available.html

OpenVMS operating system for x86-64 architecture:
https://vmssoftware.com/about/openvmsv9-1/

Nextcloud Hub 22 Collaboration Platform Available:
https://nextcloud.com/blog/nextcloud-hub-22-introduces-approval-workflows-integrated-knowledge-management-and-decentralized-group-administration/

Tor Browser 10.5 released:
https://blog.torproject.org/new-release-tor-browser-105

Ubuntu 21.10 switches to using zstd algorithm for compressing deb packages:
https://balintreczey.hu/blog/hello-zstd-compressed-debs-in-ubuntu/

Mozilla stops development of Firefox Lite browser:
https://support.mozilla.org/en-US/kb/end-support-firefox-lite 

Nginx 1.21.1 released:
https://mailman.nginx.org/pipermail/nginx-announce/2021/000304.html

Release of Proxmox VE 7.0:
https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/

Systemd 249 system manager released:
https://lists.freedesktop.org/archives/systemd-devel/2021-July/046672.html

Release of Linux Mint 20.2:
http://blog.linuxmint.com/

Stable release of MariaDB 10.6 DBMS:
https://mariadb.com/kb/en/mariadb-1063-release-notes/

Snoop 1.3.0:
https://github.com/snooppr/snoop/releases/tag/V1.3.0_10_July_2021

Release of EasyNAS 1.0 network storage:
https://easynas.org/2021/07/10/easynas-1-0/



Credits:
Full Circle Magazine
@fullcirclemag
Host: @bardictriad, @zaivala@hostux.social
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

14 July, 2021 01:56PM

Ubuntu Blog: How to cache snap downloads and save bandwidth

For many people, fast broadband connection and unlimited data are a reality. For others, they are not. If you have several Linux hosts in your (home) environment, and you’re using snaps, each of these systems will separately communicate with the Snap Store and periodically download necessary updates. This can be costly in terms of inbound data.

A solution to this problem is to cache snap downloads – grab the snaps once and then reuse them as many times as needed. There are two principal ways to achieve this. One, you can manually download the needed snaps on a single host and then distribute them across your internal network using a custom mechanism (a minimal example follows below). The downside of this approach is that you will need to maintain your own regimen of updates. Two, you can set up a snap proxy server. In this guide, we’ll show you how to accomplish this.
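
For reference, the first (manual) approach boils down to snapd’s standard download, ack and install commands. A minimal sketch, using hello-world as a stand-in for any snap:

# On a host with Internet access: fetch the snap plus its assertion file
snap download hello-world

# Copy hello-world_*.snap and hello-world_*.assert to each client, then:
sudo snap ack hello-world_*.assert
sudo snap install hello-world_*.snap

Again, the drawback is that refreshing these snaps later is entirely up to you, which is why the rest of this guide focuses on the proxy.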

Prerequisites

Please note this is a somewhat lengthy and complex tutorial. We will dabble in concepts like the nginx web server, the postgresql database, network proxying, service management, certificates, and similar. Each of these requires some familiarity with the subject matter, especially if you need to troubleshoot potential errors.

Moreover, you may need to take into account network security, as running a proxy server brings its own challenges. On internal networks, this is less of an issue, but if you expose your machine to the Internet, you will need to exercise a range of healthy practices and rigorous security hardening.

Step 1: Configure postgresql database

Before you can configure the snap proxy, you need a database configuration. Install the postgresql package in your distribution. Then, to make it work with the snap proxy, the simplest script template you can use is as follows:

CREATE ROLE "snapproxy-user" LOGIN CREATEROLE PASSWORD 'snapproxy-password';
CREATE DATABASE "snapproxy-db" OWNER "snapproxy-user";
\connect "snapproxy-db"
CREATE EXTENSION "btree_gist";

Then, save the script (e.g. as proxydb.sql, the name used below) and run it:

sudo -u postgres psql < proxydb.sql
CREATE ROLE
CREATE DATABASE
You are now connected to database "snapproxy-db" as user "postgres".
CREATE EXTENSION

Next, set the database connection string – you will be asked to provide the password you configured in the template script above.

sudo snap-proxy config proxy.db.connection="postgresql://snapproxy-user@localhost:5432/snapproxy-db"
Authentication error with user snapproxy-user.
Check the user name and password and that the user has the LOGIN privilege.
Please enter password for database user snapproxy-user (attempt 1 of 3):
Configured database for snaprevs role.
Configured database for snapauth role.
Configured database for snapident role.
Configured database for snapassert role.

Step 2: Install the snap-store-proxy snap

The next step is to install the snap proxy, and configure it.

sudo snap install snap-store-proxy

Once the snap is in place, you will need to configure your domain. This should be a routable network address for your clients – IP address or FQDN. How you manage the namespace is entirely up to you.

sudo snap-proxy config proxy.domain="DOMAIN"

For instance:

sudo snap-proxy config proxy.domain="10.0.2.15"
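
To confirm the value was stored, you can read the key back; snap-proxy config prints the current value when no new value is given, the same way the internal.store.id key is queried later in this guide:

sudo snap-proxy config proxy.domain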

Step 3: Import certificates and register your snap proxy

Once the above step is complete, you will need to make your snap proxy instance recognizable and verified by the upstream Snap Store.

sudo snap-proxy import-certificate --selfsigned
sudo snap-proxy register

The registration process is interactive. You will be asked for your Ubuntu SSO email and password, and optionally a second factor one-time key. After that, the script will ask you a number of questions on what and how you intend to use your proxy:

Please enter the Ubuntu SSO account you wish to use to register this proxy.
Email: igor@canonical
Password:
Second-factor auth: 123456
Please let us know some details of your use-case:
Primary reason for using the Proxy:
  1: Update control of snap revisions
  2: Edge proxy for constrained network access
  3: Fun!
  4: Other
>

If everything goes well, you should see a message like the one below:

Thank you, you have registered proxy ID kEpuqguXTNRB4UK9LC6Nl6Pn7ibYJtr8.
Your proxy has been automatically approved, and is ready to use.

Step 4: Add proxy configuration

Now, we need to tell the proxy to cache downloads. The configuration file will have to be stored under:

/var/snap/snap-store-proxy/current/nginx/snap-proxy.conf

In this file, add the following text – please replace the generic placeholder PROXY_HOSTNAME with your domain, e.g. 192.168.2.122, localhost, or whatever you configured earlier.

upstream thestore {
        server api.snapcraft.io:443;
}

server {
    listen       80  default_server;
    server_name  _;
    return       444;
}

server {
        listen 80;
        listen [::]:80;
        server_name PROXY_HOSTNAME; # hostname of your cloud proxy
        return 301 https://PROXY_HOSTNAME$request_uri;
}

server {
        server_name PROXY_HOSTNAME; # hostname of your cloud proxy
        listen 443 ssl;
        ssl_certificate /etc/ssl/certs/cert.crt;
        ssl_certificate_key /etc/ssl/private/key.key;
        location / {
                proxy_pass      https://thestore;
                # Substitute all store downloads url returned in JSON responses
                # from the store and repoint them at this cloud proxy.
                # Requires ngx_http_substitutions_filter_module, which comes with nginx-extras in Debian/Ubuntu.
                subs_filter_types application/json;
                subs_filter     https://api.snapcraft.io/api/v1/snaps/download/ https://PROXY_HOSTNAME/api/v1/snaps/download/;
        }

        location /api/v1/snaps/download/ {
                proxy_pass      https://thestore;
                # Trap redirects from the download endpoint and stream the
                # response from the cdn via an internal handler @handle_cdn.
                proxy_intercept_errors on;
                error_page 302 = @handle_cdn;
        }

        location @handle_cdn {
                internal;
                # Resolver for the store cdn hosts. Test and set appropriately.
                resolver 127.0.0.53;
                set $cdn_url $upstream_http_location;
                proxy_pass $cdn_url;
        }
}

For example, using the IP address 10.0.2.15, this line:

return 301 https://PROXY_HOSTNAME$request_uri;

Becomes:

return 301 https://10.0.2.15$request_uri;
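
Rather than editing each occurrence by hand, you can substitute the placeholder in one pass. A small sketch with sed, assuming the same 10.0.2.15 address (adjust to your own domain):

# Replace every PROXY_HOSTNAME placeholder in the proxy config
sudo sed -i 's/PROXY_HOSTNAME/10.0.2.15/g' /var/snap/snap-store-proxy/current/nginx/snap-proxy.conf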

Step 5: Verify that the proxy works correctly

To function as required, the proxy will need to be able to contact the upstream Snap Store, but also identify itself correctly, with the right assertion key that matches your registered proxy ID.

First, check that the snap proxy is running, and all its services are in the active state:

sudo snap services snap-store-proxy
Service                        Startup  Current  Notes
snap-store-proxy.memcached     enabled  active   -
snap-store-proxy.nginx         enabled  active   -
snap-store-proxy.snapassert    enabled  active   -
snap-store-proxy.snapauth      enabled  active   -
snap-store-proxy.snapdevicegw  enabled  active   -
snap-store-proxy.snapident     enabled  active   -
snap-store-proxy.snapproxy     enabled  active   -
snap-store-proxy.snaprevs      enabled  active   -

Second, run the snap-proxy status command:

snap-proxy status
Store ID: kEpcqguXTNRM4UK9LC6Nl5Mn5ibYLtr7
Status: approved
Connected Devices (updated daily): 0
Device Limit: 5
Internal Service Status:
  memcached: running
  nginx: running
  snapauth: running
  snapdevicegw: running
  snapdevicegw-local: running
  snapproxy: running
  snaprevs: running

And finally, check the connection:

snap-proxy check-connections
http: https://dashboard.snapcraft.io: OK
http: https://login.ubuntu.com: OK
http: https://api.snapcraft.io: OK
postgres: localhost: OK
All connections appear to be accessible

Step 6: Configure your clients

We now need to tell our client systems that they should use the proxy. To that end, we will need to grab the assertion key and the proxy ID on the server.

curl -sk https://DOMAIN/v2/auth/store/assertions

Replace the DOMAIN placeholder with the domain you used to configure your proxy. The command should return some data; for example:

curl -sk https://localhost/v2/auth/store/assertions

Or perhaps:

curl -sk https://10.0.2.15/v2/auth/store/assertions
type: account-key
authority-id: canonical
revision: 2
public-key-sha3-384:
BWDEoaqyr25nF5SNCvEv2v7QnM9QsfCc0PBMYD_i2NGSQ32EF2d4D0hqUel3m8ul
account-id: canonical
name: store
since: 2016-04-01T00:00:00.0Z
body-length: 717
sign-key-sha3-384: -CvQKAwRQ5h3Ffn10FILJoEZUXOv6km9FwA80-Rcj-f-6jadQ89VRswHNiEB9Lxk
AcbBTQRWhcGAARAA0KKYYQWuHOrsFVi4p4l7ZzSvX7kLgJFFeFgOkzdWKBTHEnsMKjl5mefFe9ji
qe8NlmJdfY7BenP7XeBtwKp700H/t9lLrZbpTNAPHXYxEWFJp5bPqIcJYBZ+29oLVLN1Tc5X482R
vCiDqL8+pPYqBrK2fNlyPlNNSum9wI70rDDL4r6FVvr+osTnGejibdV8JphWX+lrSQDnRSdM8KJi
UM43vTgLGTi9W54oRhsA2OFexRfRksTrnqGoonCjqX5wO3OFSaMDzMsO2MJ/hPfLgDqw53qjzuKL
...

Now, repeat the command and save the assertion key into a file, e.g.:

curl -sk https://192.168.1.109/v2/auth/store/assertions > proxy.assert

Similarly, grab the proxy ID:

snap-proxy config internal.store.id

Copy the relevant information (and the assertion file) to your clients. And then, on each client, run:

sudo snap ack proxy.assert
sudo snap set core proxy.store="PROXY ID"

For example, you may have three systems in your environment, 192.168.2.100, 192.168.2.101, and 192.168.2.102. You can configure the first to be the server (use 192.168.2.100 for the snap-proxy configuration), and then your two clients will be .101 and .102 systems. These hosts need to be able to communicate with one another.

Now, you can install snaps on your clients. For each requested snap, the first download will still have to take place, but after that it will be served from the cache on the server.
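
A rough, purely illustrative way to confirm caching works: install the same snap on both clients and compare the download times; the second installation should be served from the proxy’s cache and complete noticeably faster.

# On client 192.168.2.101 - first download, fetched from upstream through the proxy
time sudo snap install hello-world

# On client 192.168.2.102 - served from the proxy's cache
time sudo snap install hello-world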

Possible errors

As this is a non-trivial setup, there could be quite a few errors and snags.

Nginx not running

There could be multiple reasons why the web server might not be running. You may have something else bound to ports 80 or 443 (like a different web server instance), or the configuration may be incorrect.

snap-proxy status
Store ID: kEpuqguXTNRM4UK5LC7Nl5Pn5ibZLtr2
Status: approved
Connected Devices (updated daily): 0
Device Limit: 5
Internal Service Status:
  memcached: running
  nginx: not running: (104, 'ECONNRESET')
  snapauth: running
  snapdevicegw: running
  snapdevicegw-local: running
  snapproxy: running
  snaprevs: running

Other errors you may encounter could be:

nginx: not running: [Errno 111] Connection refused
nginx: not running: hostname '10.0.2.15' doesn't match either of '10.0.2.15', '127.0.0.1', '127.0.0.1'

The first error would indicate that the necessary ports are not available (perhaps already in use). The second error would indicate a mismatch between the Web server configuration and the snap-proxy configuration.
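
To find out what, if anything, is already bound to the web ports, standard tooling is enough; for example, assuming lsof is installed:

# Show processes listening on ports 80 and 443
sudo lsof -i :80 -i :443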

You can check and change the nginx configuration (inside the snap) under:

/var/snap/snap-store-proxy/current/nginx/nginx.conf

Of course, you will need some level of familiarity with Web servers in general, and nginx in particular, to be able to make the relevant adjustment. Then, restart (and/or reload) the server after any change:

sudo snapctl restart snap-store-proxy.nginx

Invalid certificate

On your client, when you try to install a snap, you may get a warning that you’re using a wrong certificate:

snap install vlc
error: cannot install "vlc": Post https://10.0.2.15/v2/snaps/refresh: x509: certificate is valid for 127.0.0.1, not 10.0.2.15

This could stem from a misconfiguration in the Web server or the proxy, not unlike having a certificate for, say, domain.com, but not for www.domain.com. You will need to adjust the nginx configuration, and/or reconfigure (and re-register) your proxy.

To re-register the proxy, run:

sudo snap-proxy reregister

You will need to rerun the client-side commands. Similarly, you may also see:

error: cannot install "vlc": Post https://127.0.0.1/v2/snaps/refresh: x509: certificate signed by unknown authority

If this happens, it might be because you’re using custom certificates, and/or your certificates have not been added during the proxy configuration step. You can tell the proxy to use any certificate that exists under your nginx directory, e.g.:

sudo snap set system store-certs.cert1="$(cat /var/snap/snap-store-proxy/current/nginx/127.0.0.1.cert)"

Similarly, you can reset the manually added certificates:

sudo snap-proxy remove-ca-certs

For more information, please take a look at the snap proxy troubleshooting tutorial.

Summary

Caching snaps can be a rather useful functionality, especially in environments with lots of clients and limited network bandwidth. At home, the snap proxy is limited to five devices, but that should still give you some leverage toward managing snap installations and updates in an efficient way. Hopefully, this tutorial clarifies some of the less obvious elements of the snap proxy setup. If you have any questions or ideas, please join our forum, and let us know.


14 July, 2021 12:14PM

July 13, 2021

hackergotchi for Grml

Grml

First Release Candidate of Grml version 2021.07 available

We are proud to announce the first release candidate of the upcoming version 2021.07, code-named 'JauKerl'!

This Grml release provides fresh software packages from Debian bullseye. As usual it also incorporates current hardware support and fixes known bugs from the previous Grml release.

For detailed information about the changes between 2020.06 and 2021.07(-rc1) have a look at the official release announcement.

Please test the ISOs and everything you usually use and rely on, and report back, so we can complete the stable release soon. If no major problems come up, the next iteration will be the stable release, which is scheduled for end of July 2021.

13 July, 2021 01:32PM by Michael Prokop (nospam@example.com)

hackergotchi for Tails

Tails

Tails 4.20 is out

Tor Connection assistant

Tails 4.20 completely changes how to connect to the Tor network from Tails.

After connecting to a local network, a Tor Connection assistant helps you connect to the Tor network.

This new assistant is most useful for users who are at high risk of physical surveillance, under heavy network censorship, or on a poor Internet connection:

  • It better protects users who need to go unnoticed, for whom using Tor could look suspicious to someone monitoring their Internet connection (parental control, abusive partner, school or work network, etc.).

  • It allows people who need to connect to Tor using bridges to configure them without having to change the default configuration in the Welcome Screen.

  • It helps first-time users understand how to connect to a local Wi-Fi network.

  • It provides feedback while connecting to Tor and helps troubleshoot network problems.

We know that this assistant is still far from perfect, even though we have been working on it since February. If anything is unclear, confusing, or not working as you would expect, please send your feedback to tails-dev@boum.org (public mailing list).

This first release of the Tor Connection assistant is only a first step. We will add more improvements to it in the coming months to:

  • Save Tor bridges to the Persistent Storage (#5461)

  • Help detect when Wi-Fi is not working (#14534)

  • Detect if you have to sign in to the local network using a captive portal (#5785)

  • Synchronize the clock to make it easier to use Tor bridges in Asia (#15548)

  • Make it easier to learn about new Tor bridges (#18219, #15331)

Changes and updates

  • Update OnionShare from 1.3.2 to 2.2.

    This major update adds a feature to host a website accessible from a Tor onion service.

  • Update KeePassXC from 2.5.4 to 2.6.2.

    This major update comes with a redesign of the interface.

  • Update Tor Browser to 10.5.2.

  • Update Thunderbird to 78.11.0.

  • Update Tor to 0.4.5.9.

  • Update the Linux kernel to 5.10.46. This should improve the support for newer hardware (graphics, Wi-Fi, and so on).

  • Rename MAC address spoofing as MAC address anonymization in the Welcome Screen.

Fixed problems

Automatic upgrades

  • Made the download of upgrades and the handling of errors more robust. (#18162)

  • Display an error message when failing to check for available upgrades. (#18238)

Tails Installer

  • Made the display of the Reinstall button more robust. (#18300)

  • Made the Install and Upgrade buttons unavailable after a USB stick is removed. (#18346)

For more details, read our changelog.

Known issues

  • Automatic upgrades are broken from Tails 4.14 and earlier.

    To upgrade from Tails 4.14 or earlier, you can either:

    • Do a manual upgrade.

    • Fix the automatic upgrade from a terminal. To do so:

      1. Start Tails and set up an administration password.

      2. In a terminal, execute the following command:

        torsocks curl --silent https://tails.boum.org/isrg-root-x1-cross-signed.pem \
        | sudo tee --append /usr/local/etc/ssl/certs/tails.boum.org-CA.pem \
        && systemctl --user restart tails-upgrade-frontend
        

        This is a single command that wraps across several lines. Copy and paste the entire block at once and make sure that it executes as a single command.

      3. Approximately 30 seconds later, you should be prompted to upgrade to the latest version of Tails. If no prompt appears, you might already be running the latest version of Tails.

See the list of long-standing issues.

Get Tails 4.20

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are broken from Tails 4.14 and earlier. See the known issue above.

  • Automatic upgrades are available from Tails 4.14 or later to 4.20.

    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 4.20 directly:

What's coming up?

Tails 4.21 is scheduled for August 10.

Have a look at our roadmap to see where we are heading.

13 July, 2021 12:34PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu becomes #1 OS for OpenStack deployment

One of the core values of Canonical, one that we all identify with, is the mission of bringing the power of open source to everyone on the planet. From developing to developed countries. From individuals to big enterprises. From engineers to CEOs. And there is only one way to find out whether we are effective in what we do: community feedback.

It is no different this time. The OpenStack User Survey 2020 results are out, and the OpenStack community has named Ubuntu the most popular platform for OpenStack deployment. This is great news for Canonical and the entire Ubuntu community. It was a long journey, sometimes bumpy, but we made it. And we are not going to stop there!

#1 OS for OpenStack deployment

The OpenStack User Survey is an event organised by the Open Infrastructure Foundation on an annual basis. Participation is open and voluntary. All participants have to answer a few questions about their OpenStack deployment, including some demographic information as well as cloud size and deployment decisions. One of those questions is about the main operating system (OS) running this OpenStack cloud deployment.

Source: https://www.openstack.org/analytics/

According to last year’s survey results, 40% of respondents indicated Ubuntu Server as their main OS. In practice, this means that Ubuntu Server is considered the default OS for OpenStack deployment in the majority of organisations worldwide. No wonder: OpenStack and Canonical have come a long way together, from the very beginning of OpenStack.

10 years together

Although OpenStack was founded as an open source project by NASA and Rackspace, Canonical has been involved in OpenStack development from the very early stages. For years we have been contributing to the OpenStack source code (Canonical is one of the biggest contributors of all time) and packaging OpenStack binaries for straightforward consumption on Ubuntu. Canonical has also pioneered a number of solutions in the OpenStack deployment and operations automation space, effectively hiding the complexity of OpenStack from its end users.

As a board member of the Open Infrastructure Foundation, we have been driving the evolution of the project towards new challenges, including network function virtualization (NFV), containers and the edge. We have been representing the entire community during various events and promoting the idea of open infrastructure. Even OpenStack and Ubuntu release cadences are synchronised, enabling users to benefit from new features and bug fixes brought by new OpenStack versions on their Ubuntu machines right after the upstream release.

Why do people choose Ubuntu for OpenStack deployment?

So what makes Ubuntu the preferred OS for OpenStack deployment? Well, there is no single answer to that. The commitment, the mission, the trust – all of that matters. However, there are a few advantages of OpenStack that are only available on Ubuntu. Those include:

Straightforward installation methods

If you have ever tried OpenStack before, you already know how complex it is. The complexity is just part of the nature of OpenStack. Fortunately, Canonical provides tools to eliminate this complexity. MicroStack, for example, enables you to install a single-node OpenStack cluster in just 2 commands and ~20 minutes. Start simple and evolve according to your needs.
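
For illustration, at the time of writing the two commands looked roughly like the sketch below. Channel names and flags change over time, so treat this as an assumption and check the official MicroStack instructions on microstack.run for the current form:

# Install the MicroStack snap (beta channel, as documented at the time of writing)
sudo snap install microstack --beta --devmode

# Initialise a single-node cloud with sensible defaults
sudo microstack.init --auto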

Day-2 operations support

Another challenge OpenStack users usually face is day-2 operations support: how to ensure that the OpenStack cluster continues to run for a week, a month, a year after its initial deployment, while continuing to evolve at the same time. To deal with this challenge, Canonical packages OpenStack operations code, enabling full automation of typical operations tasks, such as database backups, scaling the cluster out or even OpenStack upgrades.

Predictable release cadence and upgrade path

Since the OpenStack release cadence follows the Ubuntu release cadence, users always know when to expect a new version. Canonical commits to releasing every new version of OpenStack on Ubuntu within 2 weeks of the upstream release. With Ubuntu, you are always up to date and, even more importantly, you can easily upgrade from older versions to benefit from new features and bug fixes brought by the latest release.

100% open source

Contrary to other Linux distributions, Canonical delivers OpenStack using tools that are 100% open source. This avoids unpleasant surprises post-deployment or, even worse, unexpected costs. On Ubuntu you can use OpenStack for free as long as you want, benefitting from its openness and community support. 

Optional enterprise support subscription

And last but not least, if you are looking for enterprise support for OpenStack, this is also something that Canonical provides. The Ubuntu Advantage for Infrastructure (UA-I) support subscription includes production-grade service-level agreements (SLAs), phone and ticket support, 10 years of security updates and more.

Join the community

If you are wondering now how to join the OpenStack Ubuntu community, here are some useful tips for you:

Try OpenStack on Ubuntu by following our simple installation instructions. MicroStack enables you to get fully functional OpenStack up and running on your workstation in just 2 commands and ~20 minutes.

Or get in touch with Canonical if you are planning a migration to Ubuntu from other Linux distributions. We will happily help you choose the best architecture that fits your needs and optimise your private cloud for price-performance.

You can also fill in the OpenStack User Survey 2021 to influence the community and software directions moving forward. The survey is open now and closes on August 20th.

13 July, 2021 08:00AM

July 12, 2021

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 691

Welcome to the Ubuntu Weekly Newsletter, Issue 691 for the week of July 4 – 10, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

12 July, 2021 10:32PM by guiverc

hackergotchi for Tails

Tails

Tails report for June, 2021

Highlights

Metrics

  • Tails has been started more than 629 659 times this month. This makes 20 989 boots a day on average.

How do we know this?

12 July, 2021 05:00PM

July 10, 2021

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: Lubuntu 20.10 End of Life and Current Support Statuses

Lubuntu 20.10 (Groovy Gorilla) was released on October 22, 2020 and will reach End of Life on Thursday, July 22, 2021. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you upgrade to 21.04 as soon as possible if you are still running 20.10. After […]

The post Lubuntu 20.10 End of Life and Current Support Statuses first appeared on Lubuntu.

10 July, 2021 09:12PM

hackergotchi for SparkyLinux

SparkyLinux

Shortwave

There is a new application available for Sparkers: Shortwave

What is Shortwave?

Shortwave is an internet radio player that provides access to a station database with over 25,000 stations.

Features:
– Create your own library where you can add your favorite stations
– Easily search and discover new radio stations
– Automatic recognition of songs, with the possibility to save them individually
– Responsive application layout, compatible for small and large screens
– Play audio on supported network devices (e.g. Google Chromecasts)
– Seamless integration into the GNOME desktop environment

Installation (Sparky 6 amd64 & i686):
sudo apt update
sudo apt install shortwave

Warning!
This is Shortwave version 1.1.1, the last release that requires GTK+3. The latest 2.0.x versions require GTK+4, which is not yet available in the Debian Sid or testing repos.

Shortwave

License: GNU GPL-3.0
Web: gitlab.gnome.org/World/Shortwave

 

10 July, 2021 03:56PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Schauenburg: offlineimap - unicode decode errors

My main system is currently running Ubuntu 21.04. For e-mail I'm relying on neomutt together with offlineimap, both of which are amazing tools. Recently offlineimap was updated/moved to offlineimap3. On my system, offlineimap reports itself as OfflineIMAP 7.3.0 and dpkg tells me it is version 0.0~git20210218.76c7a72+dfsg-1.

Unicode Decode Error problem

Today I noticed several errors in my offlineimap sync log. Basically the errors looked like this:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 1299: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xeb in position 1405: invalid continuation byte

Solution

If you encounter it as well (and you use mutt or neomutt), have a look at this great comment on GitHub from Joseph Ishac (jishac), since his tip solved the issue for me.

To "fix" this issue for future emails, I modified my .neomuttrc, commenting out the default send charset and replacing it with one that omits the iso-8859-1 part:

#set send_charset = "us-ascii:iso-8859-1:utf-8"
set send_charset = "us-ascii:utf-8"

Then I looked through the email files on the filesystem and identified the ISO-8859 encoded emails in the Sent folder which are causing the current issues:

$ file * | grep "ISO-8859"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: ISO-8859 text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   ISO-8859 text
1525607314.R13283589178011616624.desktop:2,S:                                ISO-8859 text

That left me with opening the files with vim and saving them with the correct encoding:

:set fileencoding=utf8
:wq

Voila, mission accomplished:

$ file * | grep "UTF-8"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: UTF-8 Unicode text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   UTF-8 Unicode text
1525607314.R13283589178011616624.desktop:2,S:                                UTF-8 Unicode text
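If many mails had been affected, a batch conversion with iconv would have been an alternative to editing each file in vim. A rough sketch, assuming the files really are Latin-1 encoded (back up the Maildir first):

# convert any ISO-8859 mails in the current folder in place
for f in *; do
    if file "$f" | grep -q "ISO-8859"; then
        iconv -f iso-8859-1 -t utf-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
done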

10 July, 2021 02:00PM

July 09, 2021

hackergotchi for Whonix

Whonix

Whonix KVM 15.0.1.9.3 Released

Changelog:

Thanks to everyone involved.


Come get some:

https://www.whonix.org/wiki/Kicksecure/KVM#Download_Kicksecure_.E2.84.A2


Leave feedback if you need to.

8 posts - 3 participants

Read full topic

09 July, 2021 07:10PM by HulaHoop

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Linux kernel Livepatching

Canonical livepatch is the service and the software that enables organizations to quickly patch vulnerabilities on the Ubuntu Linux kernels. Livepatch provides uninterrupted service while reducing fire drills during high and critical severity kernel vulnerabilities. It is a complex technology and the details can be confusing, so in this post we provide a high level introduction to Ubuntu Linux kernel livepatching and the processes around it.

Livepatch introduction

When reviewing the major cybersecurity data breaches via web services (e.g., in the 2021 Verizon data breach investigations report), one cannot help but notice that, after credential-based attacks, the exploitation of vulnerabilities is the major attack vector. According to the same report, only a quarter of scanned organizations patch vulnerabilities within two months of their publication, which indicates that organizations are not generally proactive and consistent in vulnerability patching. And that's not without reason: addressing vulnerabilities through unplanned work is a challenge, as it pulls the organization's focus into unplanned maintenance windows where patches are applied and systems are rebooted, while customers or users face an unavailable service.

At the same time, threats do not go away; critical and high severity vulnerabilities can appear at arbitrary times and potentially expose important data or services. Canonical's vulnerability data show that 40% of high and critical severity vulnerabilities affect the Linux kernel, more than any other package. Addressing this vulnerability window quickly and smoothly for Ubuntu systems is the goal of Canonical Livepatch. It eliminates the need for unplanned maintenance windows for critical and high severity kernel vulnerabilities by patching the Linux kernel while the system runs.

What happens when a kernel vulnerability is detected?

In particular, when Canonical detects a high or critical vulnerability in the Linux kernel, we create a livepatch addressing it. The livepatch is first tested in Canonical's internal server farm and then promoted gradually through a series of testing tiers, ensuring that any released livepatch has been tested for sufficient time on live systems. Once the patch is released, a Livepatch Security Notice is issued, and systems that have enabled the canonical-livepatch client receive the patch over an authenticated channel and apply it.

How does kernel livepatching work?

There are many types of vulnerabilities and many reasons behind them, such as a logic error or a missing check in a small piece of code. At a high level, the livepatch provides new kernel code to replace the vulnerable code, and updates the rest of the kernel to use the new code. The diagram below shows how a kernel vulnerability is patched using Canonical livepatch.

[Diagram: patching a kernel vulnerability with Canonical livepatch]

The simplistic description above shows the principle, but also hints at why some vulnerabilities that depend on very complex code interactions cannot be livepatched. When a kernel vulnerability cannot be livepatched, a Livepatch Security Notice is issued advising users to apply any pending kernel updates and reboot.

How can I access Canonical livepatch?

Livepatch is available through Ubuntu Advantage and Ubuntu Pro to organizations and customers that take advantage of Ubuntu’s security features. Beyond that, as Ubuntu’s mission is to bring free software to the widest audience, developers and individuals can access Canonical livepatch through a free subscription. The free subscription allows for up to 3 machines and up to 50 for Ubuntu community members.

 [Get Ubuntu Advantage] [Get a Free subscription]

How to enable Canonical livepatch

Canonical livepatch can be enabled in two steps. First, obtain your subscription token via the Ubuntu Advantage portal; this step is necessary for both free subscription and Ubuntu Advantage users, but not on Ubuntu Pro. Then attach the token and enable livepatch:

$ sudo ua attach [TOKEN]

$ sudo ua enable livepatch
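
Once enabled, you can check what the client has applied; for example:

$ canonical-livepatch status --verbose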

Conclusions

The Canonical Livepatch service reduces your unplanned work and allows you to schedule your maintenance windows. Take advantage of livepatching and provide uninterrupted service to your users by applying high and critical severity kernel updates without rebooting.


09 July, 2021 12:58PM

Colin King: New features in stress-ng 0.12.12

The release of stress-ng 0.12.12 incorporates some useful features and a handful of new stressors.

Media devices such as HDDs and SSDs normally support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) to detect and report various measurements of drive reliability.  To complement the various file system and I/O stressors, stress-ng now has a --smart option that checks for any changes in the S.M.A.R.T. measurements and will report these at the end of a stress run, for example:
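An invocation might look like this (a sketch; any disk-heavy stressor will do, and root access is typically needed to read the S.M.A.R.T. data):

sudo stress-ng --hdd 4 --timeout 60s --smart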

In the output of such a run, one can see errors on /dev/sdc, which explains why the ZFS pool was having performance issues.

For x86 CPUs I have added a new stressor that triggers System Management Interrupts via writes to port 0xb2, forcing the CPU into System Management Mode in ring -2. The --smi stressor option will also measure the time taken to service the SMI. To run this stressor, one needs the --pathological option, since SMIs behave like non-maskable interrupts and may hang the computer:
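A hedged example invocation (run only on a machine you can afford to hang):

sudo stress-ng --smi 1 --pathological --timeout 60s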

To exercise the munmap(2) system call a new munmap stressor has been added. This creates child processes that walk through their memory mappings from /proc/$pid/maps and unmap pages on libraries that are not being used. The unmapping is performed by striding across the mapping in page multiples of prime size, creating many mapping holes that exercise the VM mapping structures. These unmappings can create SIGSEGV segmentation faults that silently get handled and respawn a new child stressor. Example:
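A sketch of a minimal run:

stress-ng --munmap 1 --timeout 60s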

 

There are some new options for the fork, vfork and vforkmany stressors: a new vm mode has been added to try and exercise virtual memory mappings. This enables detrimentally-performing virtual memory advice, using madvise on all pages of the new child process. Where possible this will try to mark every page in the new process with the madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags. The following shows how to enable the vm options for the fork and vfork stressors:
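A sketch, assuming the new per-stressor options follow the usual stressor-prefixed naming (the --fork-vm and --vfork-vm option names are assumptions here):

stress-ng --fork 4 --fork-vm --timeout 60s
stress-ng --vfork 4 --vfork-vm --timeout 60s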

One final new feature is the --skip-silent option. This disables the printing of messages when a stressor is skipped, for example when the stressor is not supported by the kernel or the hardware, or when a support library is not available.
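For example (a sketch; on a CPU without RDRAND support the rdrand stressor would normally report that it was skipped):

stress-ng --rdrand 1 --timeout 10s --skip-silent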

As usual for each release, stress-ng incorporates bug fixes and has been tested on a wide variety of Linux/*BSD/UNIX/POSIX systems and across a range of processor architectures (arm32, arm64, amd64, i386, ppc64el, RISC-V, s390x, sparc64, m68k). It has also been statically analysed with Coverity and cppcheck, and built cleanly with pedantic build flags on gcc and clang.

09 July, 2021 10:15AM by Colin Ian King (noreply@blogger.com)

July 08, 2021

Ubuntu Blog: DHCP Server Conflict Detection

This blog title should really be, “Why you always, always, always want conflict detection turned on on all the networks MAAS touches,” but that’s really long as a title. But hear me out.

As promised, here is another DHCP blog, this time explaining how you can have multiple DHCP servers on the same subnet, serving overlapping IP addresses. There are a lot of network-savvy folks who will tell you that serving the same set of IP addresses from two different DHCP servers just won’t work. While that’s a really good rule to follow, it isn’t totally accurate under all conditions.

Keeping it “loosely coupled”

Some DHCP implementations offer a feature called server conflict detection. In short, DHCP SCD uses ICMP Echo messages (pings) — with an appropriate wait time — to see if an IP address is in use before trying to lease it to a client. If all the DHCP servers on a given subnet have SCD enabled, you don’t have to worry about whether the DHCP server scopes overlap. You can assign whatever set of IP addresses you want to whichever DHCP server, and they will work together without addressing errors.

So what’s really surprising about this feature? Well, in RFC 2131, ping checks are recommended on both ends, by the DHCP server and the client:

As a consistency check, the allocating server SHOULD probe the reused address before allocating the address, e.g., with an ICMP echo request, and the client SHOULD probe the newly received address, e.g., with ARP.

The capital letters there came from the spec itself. Essentially, DHCP servers really should check to make sure the addresses they send out aren’t already in use — and clients that get them should make sure they’re actually free before they use them.

From an architectural perspective, it might make more sense for DHCP servers to be enabled to talk to each other and coordinate assignment of IP addresses. It is possible to build and configure such DHCP servers — but that type of coordination isn’t really in keeping with the fundamental operation of DHCP.

As a protocol, DHCP is designed to be loosely coupled. Specifically, any client that has the DHCP protocol stack can discover any DHCP server or servers; any server can make an offer; and a client can take whichever offer it wants (though it’s typically coded to take the first DHCP offer that it can accept). Keeping that loosely-coupled architecture intact means letting DHCP servers check to see if the address they’re sending is in use before offering it, and letting clients check to see if an IP address is in use before they request to accept the offer.

The value for MAAS

There’s no exact count, but it’s fair to say that a very large number of MAAS installation and configuration issues resolve around competing DHCP servers, that is, multiple DHCP servers on the same subnet, using the same scope (or overlapping scopes), colliding with each other and preventing machines from getting IP addresses. This collision usually shows up as an ability to power on a machine, but not to commission it, since it can’t manage to complete the process of getting an IP address via DHCP.

MAAS already has some conflict detection built in, as documented in Managing DHCP:

In some cases, MAAS manages a subnet that is not empty, which could result in MAAS assigning a duplicate IP address. MAAS is capable of detecting IPs in use on a subnet. Be aware that there are two caveats:

1. If a previously-assigned NIC is in a quiescent state or turned off, MAAS may not detect it before duplicating an IP address.
2. At least one rack controller must have access to the IP-assigned machine in order for this feature to work.

MAAS also recognises when the subnet ARP cache is full, so that it can re-check the oldest IPs added to the cache to search for free IP addresses.

If you want your configuration to run more smoothly, it's useful to enable SCD on every DHCP provider on your network. It doesn't hurt anything, and it really doesn't cost that much (besides a little extra delay when assigning addresses). There are plenty of network issues associated with a large, bare-metal network. There's no reason why DHCP conflicts need to be one of them.
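If your DHCP service happens to be ISC dhcpd, for instance, conflict detection amounts to a couple of lines in dhcpd.conf (a sketch; ping-timeout is in seconds):

# probe an address with an ICMP echo before offering it
ping-check true;
ping-timeout 1;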

08 July, 2021 10:46PM

Podcast Ubuntu Portugal: Ep 150 – Morcela

The Ubuntu wallpaper competition is back, Constantino wants to run Android apps on Ubuntu while he keeps investing in OBS Ninja, and Carrondo is trying to get everything in order at home…

You know the drill: listen, subscribe, and share!

  • https://addons.mozilla.org/pt-PT/firefox/addon/multi-account-containers/
  • https://addons.mozilla.org/pt-PT/firefox/addon/sea-containers/
  • https://addons.mozilla.org/pt-PT/firefox/addon/temporary-containers/
  • https://anbox.io/
  • https://codeweek.eu/
  • https://discourse.ubuntu.com/t/wallpaper-competition-for-impish-indri-ubuntu-21-10/22852
  • https://docs.anbox.io/userguide/install_kernel_modules.html
  • https://docs.anbox.io/userguide/install.html
  • https://ec.europa.eu/portugal/news/coding-webinars-european-public-house_pt
  • https://linuxtech.pt/2021/05/79-manipulacao-social-parte-1/
  • https://linuxtech.pt/2021/06/80-manipulacao-social-parte-2/
  • https://twit.tv/shows/floss-weekly/episodes/615?autostart=false
  • https://web.archive.org/web/20210313140504/
  • https://web.archive.org/web/20210704123616/
  • https://web.archive.org/web/20210704124251/
  • https://web.archive.org/web/20210704191134/
  • https://web.archive.org/web/20210704191703/
  • https://web.archive.org/web/20210704191916/
  • https://web.archive.org/web/20210704192820/
  • https://web.archive.org/web/20210704192907/
  • https://wiki.ubuntu.com/UbuntuFreeCultureShowcase
  • https://www.cnpd.pt/comunicacao-publica/noticias/nota-da-cnpd-sobre-cookies/
  • https://www.cnpd.pt/media/x2zdus50/nota-informativa-cnpd_cookies_20210625.pdf
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://www.humblebundle.com/software/python-development-software?partner=PUP
  • https://www.humblebundle.com/books/learn-you-more-python-books?partner=PUP
  • https://www.humblebundle.com/books/knowledge-101-adams-media-books?partner=PUP
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different portions depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

08 July, 2021 09:45PM

Ubuntu Blog: DHCP scope

It’s possible to have more than one DHCP server on the same network and still have everything work right, with no conflicts and no dropped packets or IP requests. It’s really not that hard to pull together, either, but there are some things to know, and some things to consider before we investigate that situation. For this blog, we’ll put some of the overlooked facets of DHCP in bold text. Let’s take a look.

A deep look at DHCP

DHCP is, technically, a network management protocol. In other words, it’s part of a collection of hardware and software tools that help to manage network traffic. DHCP is designed to automatically assign IP addresses and other communication parameters, such as default gateway, domain name, name server IPs, or time server IPs to clients. There are (at least) two participants in a DHCP transaction: a server and a client, but the client has to meet some requirements to participate. Specifically, the client has to implement an instance of the DHCP protocol stack; without that, it has no idea how to formulate Discovery and Request packets, nor can it recognise Offers or Acknowledgements (or NAKs, for that matter).

For what it’s worth, the “DHCP protocol stack” just means that a device can handle at least the following standard message types:

  • DHCPDiscover: a broadcast message sent in the hopes of finding a DHCP server.  Note that clients that don’t get a DHCP response may be able to assign themselves an Automatic Private IPv4 address (APIPA), which should always be in the range 169.254.0.0/16. This is good to know, because you want to pretty much always leave that scope (that range of IP addresses) unused by anything else in your system.
  • DHCPOffer: also a broadcast message, one that offers an IPv4 address lease; the lease is more than just an IP address, as we saw in the last DHCP blog.
  • DHCPRequest: If you haven’t noticed by now, DHCP exchanges are a little like rolling snowballs: they pick up more protocol information as they go and keep it for the duration of the transaction, sending it back and forth. In this case, the client sends back everything the DHCP server sent, along with a request to actually take the offered lease.
  • DHCPAcknowledgement: If everything matches up when the DHCP server gets the Request, it responds with an Acknowledgement, which basically says, “Okay, you can lease this IP address for a set period of time.”
  • DHCPNak: If the client waits too long to Request an Offer (generally, if a different server has already claimed the offered IP address), the DHCP server may respond with a Nak. This requires the client to start over again at Discover.
  • DHCPDecline: If the client determines that, for some reason, the Offer has a configuration that won’t work for it, it can Decline the offer — though this also means it has to start again at Discover.
  • DHCPRelease: When a client is done with an IP address, it can send a Release message to cancel the rest of the lease and return the IP address to the server’s available pool.
  • DHCPInform: This is a relatively new message, which allows a client that already has an IP address to easily get other configuration parameters (related to that IP address) from a DHCP server.

As you may recall from the previous blog, the normal DHCP sequence is often referred to as DORA for “Discover, Offer, Request, Acknowledge,” which is what happens when everything goes right the first time. Note that, shortly before a lease expires, most DHCP clients will renew the lease, often with a shortened form of the exchange (Request/Acknowledge) which does not require a full DORA exchange. Also, this renewal exchange takes place directly between the client and the DHCP server, rather than being broadcast across the entire network.

Address allocation

There are (at least) three ways that a DHCP server can assign addresses to requesting clients:

  • Manual or static allocation essentially means that the client receives a specifically-chosen IP address, or, at a minimum, keeps the first one that it’s assigned until the client decides to release it.
  • Dynamic allocation means that a DHCP server assigns IP addresses from an available pool (scope) of addresses, which can change to another available address in that scope at any time, depending on the network dynamics.
  • Automatic allocation is sort of a cross between the other two types. The DHCP server assigns an address from its defined scope, but then remembers which client got what address, and re-assigns that address to the same client when a new request is made.

Regardless of the allocation method, the DHCP server’s scope — its range of IP addresses that it controls (and can assign) — is something that must be user-configured.

A UDP exchange

DHCP is “connectionless,” meaning that basically everything takes place via UDP, usually by broadcast packets — that is, packets not overtly addressed to a specific device. As we saw in the last blog, the messages become targeted pretty quickly, using the payload to specify the IP address of the DHCP server and the MAC address of the requesting client, to avoid requiring every other device on the network to completely decode every DHCP message. Note that it is possible to target a UDP packet at a specific server, by choosing a unicast message type.

Scope, allocation, topology, and authority

A DHCP client can request its previous IP address, if it had one, but whether it gets that address or not depends on four things: scope, allocation, topology, and authority. Specifically:

  • The larger the DHCP server’s scope of addresses, the more likely it is that the requested address will be available again.
  • The chances of getting the same IP address again also depend on how the server is allocating addresses (see above). Static allocation guarantees the same address; automatic allocation makes it very likely; with dynamic allocation, it’s impossible to predict.
  • Topology also plays into this process: if the DHCP server is using one or more DHCP relays to get some or all of its addresses, the chances of re-using the same IP address go down.
  • Authority also affects the probability. An authoritative DHCP server will definitely answer any unanswered DHCPDiscover message, but that server is pulling only from its own scope.

So now that we have a few of these odds and ends down pat, let’s consider the situation when multiple DHCP servers operate on one network.

Multiple DHCP servers

With regard to multiple DHCP servers on the same network, there are three possible scopes to consider:

  • Overlapping scopes: In this situation, more than one server can offer the same IP address. There is a way to make this work, by setting up the DHCP servers to talk to one another, but for most applications, this configuration can be avoided. We’ll discuss DHCP servers with overlapping scopes in the next DHCP blog.
  • Adjacent scopes: In this configuration, IP addresses are assigned from portions of the same subnet. For example, one server might control scope 192.168.14.2 – 192.168.14.187, and another server might manage scope 192.168.14.200 – 192.168.14.247. This is the most common (and most reliable) setup for multiple DHCP servers; see the sketch after this list.
  • Heterogeneous scopes: This arrangement basically has DHCP servers on different subnets, such as 192.168.14.2 – .253 for one server, and 10.17.22.3 – .98 for the other. This can be made to work, but it’s extremely difficult to set up and not so easy to manage. We’ll also cover this outlier in the next blog.
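
As a rough illustration of adjacent scopes, here is how the two example servers above might be configured if both happened to run ISC dhcpd (a sketch; all other options omitted):

# Server 1: /etc/dhcp/dhcpd.conf
subnet 192.168.14.0 netmask 255.255.255.0 {
    range 192.168.14.2 192.168.14.187;
}

# Server 2: /etc/dhcp/dhcpd.conf
subnet 192.168.14.0 netmask 255.255.255.0 {
    range 192.168.14.200 192.168.14.247;
}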

So essentially, we’ve narrowed down our focus to the most obvious and reliable use case, adjacent DHCP scopes. But how do the two servers work together to manage DHCP requests?

Remember, connectionless

Well, the answer is, “they don’t.” The servers and clients operate independently on a first-come, first-served basis. A client broadcasts a DHCPDiscover. One or both of the servers may answer, depending on load and spare IP addresses. It’s also possible that neither will answer, because they’re both out of IP addresses, but with good network planning — and making one of those servers authoritative — those situations will be kept to a minimum or eliminated entirely.

If no servers answer, we may be looking at the APIPA situation, if that’s possible in the network. If one server answers, it’s a standard DORA exchange, as if there were only one server. If two servers answer, it’s a question of which Offer the client gets first. Remember that the Offer messages contain the DHCP server’s address in the payload.

Clarity is sometimes power

Hopefully, the foregoing discussion helped to clear up some potentially confusing things about DHCP in a more complex network environment. We’ll tackle the more difficult cases in the next DHCP blog, and then we’ll move on to some detailed examples.

08 July, 2021 04:11PM

Ubuntu Blog: Canonical recognized as a 2021 Microsoft Partner of the Year finalist

LONDON — July 8, 2021 — Canonical, the publishers of Ubuntu, today announced it has been named a finalist for a 2021 Microsoft Partner of the Year Award. The company was honored among a global field of top Microsoft partners for demonstrating excellence in innovation and implementation of customer solutions based on Microsoft technology.

Canonical and Microsoft launched Ubuntu Pro in 2020 to give customers a seamless experience, combining the infrastructure of Azure with the security and compliance features of Ubuntu, the world’s most popular Linux for cloud environments.

“By teaming with Microsoft, we’ve been able to help the increasing number of organizations that want to migrate to Azure through Ubuntu,” said Daniel Bowers, Canonical VP of Cloud Alliances. “Being named a Microsoft Partner of the Year finalist is a wonderful validation of the work we’ve been doing together to help businesses of all sizes achieve their digital goals, with no compromises on security, support, and efficient management of workloads.”

The Microsoft Partner of the Year Awards recognizes Microsoft partners that have developed and delivered outstanding Microsoft-based solutions during the past year. Awards were classified in various categories, with honorees chosen from a set of more than 4,400 submitted nominations from more than 100 countries worldwide. Canonical was recognized for providing outstanding solutions and services.

“I am honored to announce the winners and finalists of the 2021 Microsoft Partner of the Year Awards,” said Rodney Clark, corporate vice president, Global Partner Solutions, Channel Sales and Channel Chief, Microsoft. “These remarkable partners have displayed a deep commitment to building world-class solutions for customers—from cloud-to-edge—and represent some of the best and brightest our ecosystem has to offer.”

About Canonical

Canonical is the company behind Ubuntu, the leading OS for container, cloud, and hyperscale computing. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise security, support, and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

08 July, 2021 02:52PM

Ubuntu Podcast from the UK LoCo: S14E18 – Timing Chefs Watch

This week we’ve been configuring new-ish HP Microservers and entering our first game jam. We discuss Project Kebe, an open source Snap Store implementation, and respond to all your wonderful feedback.

It’s Season 14 Episode 18 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

08 July, 2021 02:00PM

Alan Pope: LXD - Container Manager

Preamble

I recently started working for InfluxData as a Developer Advocate on Telegraf, an open source server agent to collect metrics. Telegraf builds from source to ship as a single Go binary. The latest, 1.19.1, was released just yesterday. Part of my job involves helping users by reproducing reported issues, and assisting developers by testing their pull requests. It’s fun stuff, I love it. Telegraf has an extensive set of plugins which support gathering, aggregating & processing metrics, and sending the results to other systems.

08 July, 2021 11:00AM

Ubuntu Blog: Design and Web team summary – 2 July 2021

The web team at Canonical runs two-week iterations building and maintaining all of Canonical websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

Web

The Web team develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

Security certification docs

If you were not aware by now, the team has been working to migrate all user-facing documentation to our Discourse content model. As we near completion of all the docs, this iteration we worked on the security certification docs, which load their content from our Security Certification category on Discourse.

Visit the Ubuntu security certification documentation


Brand

The Brand team develop our design strategy and create the look and feel for the company across many touch-points, from web, documents, exhibitions, logos and video.

Google Cloud social banners

Working with the Marketing team, we developed a number of jointly-branded social media ads to promote the use of Ubuntu Pro for Google Cloud.


Marketing documents

A group of documents were created in this iteration for use by the Marketing and Project Management teams, including case studies, whitepapers and datasheets.


Mir illustrations

Four illustrations were created for the Mir team to be used in the upcoming refresh of the Mir website.


StackOverflow banners

We created some social media banners to be used on StackOverflow with the aim of promoting Ubuntu Server and Ubuntu LTS.


Vanilla

The Vanilla team designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

New in Vanilla v2.32.0

Deprecated the neutral button

We deprecated the .p-button--neutral class, as it was identical to the default .p-button variant of the component, which should now be used instead.

Introduced theming for buttons

Buttons now have light and dark themes. The light theme is used by default, but it’s possible to change that default to dark, and also to switch between dark and light themes by adding the appropriate .is-dark/.is-light utility class on individual buttons:

[Screenshots: buttons in the light and dark themes]

Updates to labels component

We updated our internally used label component, and added a default .p-label variant to it alongside several existing flavours, which can be used to indicate status, be displayed as tags, or other useful information:

[Screenshot: label component variants]

Utility class for tables

By default, Vanilla tables hide any content that would overflow from the cell. The newly added .has-overflow class allows the content of a cell to spill out over the cell. This is particularly useful when the cell contains a contextual menu that drops down a set of actions.


New design system website – Component / Pattern page redesign

The first MVP feature we worked on for the new site is to redesign the component / pattern page. As a key part of the design system, we want to allow all users in different roles to access the information they require. By splitting each component page into three tabs – design guidelines, implementation and accessibility – we provide better categorisation of the information.


Audit and review of the use of spacing in Vanilla 


Internally, the spacing within and between Vanilla components is controlled by a set of variables that aim to keep similar elements spaced consistently. It is an elaborate system that hasn’t been well documented, so as a first step towards full documentation, this iteration we completed a visual audit. We also identified and fixed a few cases of incorrect usage. 

Apps

The Apps  team develops the UI for the MAAS project and the JAAS dashboard for the Juju project.

React migration sprint clean-up 

Continuing from the migration sprint from the previous iteration, we cleaned up some of the finished sections and finalised some of the unfinished sections. This meant cleaning up and refactoring our code to the right convention and aligning the UI components with the rest of the React app. As far as the progress goes, our dashboard, DNS, and Availability Zones pages have been fully migrated. The Images page is also now about 90% complete.

Dashboard

[Screenshot: the migrated Dashboard page]

Domains

[Screenshot: the migrated Domains page]

Zones

[Screenshot: the migrated Zones page]

Images

[Screenshot: the migrated Images page]

Machine cloning

In this two week iteration, we focused on exposing the cloning API to the UI, where we want to allow MAAS users to clone network and/or storage configurations from a source machine to multiple destination machines. We worked through a few explorations of interaction design to figure out how this could fit with the current workflow. Here is our first prototype of the new cloning UI, if you want to try it out.

The overall flow starts at selecting destination machines. When a user selects multiple machines, they have an option to clone the networking and/or the storage configurations from a source machine into the selected destination machines.


In this workflow, we assume that our users would already know which source machine they want to clone from and that cloning will work with a homogeneous hardware environment. 


You can either clone the storage or network configuration from the source machine to the destination machine or both! Since our assumption is that you would already have an idea of which source machine you want to clone from, the search function allows you to search by hostname or system id.

To be able to successfully clone machines, the source and destination machines need to be in one of the following states: failed testing, allocated, or ready. In addition, the user needs to have admin permission on all machines that the cloning will be performed on. 


Once the source machine is selected from the drop-down list, you will see a brief summary of the source machine, including the network and storage information.


Once the cloning process is complete, we report the number of machines that succeeded and the number that failed, allowing you to either select other machines to clone or find out why the cloning process failed.

One of the more common mistakes is cloning machines with different disk sizes. In our current API, if the destination machine has a smaller disk than the source machine, cloning will not succeed. The same mental model applies to block devices as well. Other errors that can occur include selecting machines with unmatched boot methods or mismatched network interfaces.

Showing IP addresses in the UI network card

We received a bug report that the IP address should be surfaced to the network summary card because it is one of the most important pieces of information regarding the Network tab of a machine in MAAS. However, we were not showing that information in the network card in the Summary section.

Although the IP address looked far more relevant than the MAC address, the MAC address and DHCP information are very important in an enterprise-level MAAS, especially for debugging purposes, and cannot be eliminated from the network card.

With that in mind, we initially looked for a way to switch between both of them, MAC addresses and IP addresses, since space matters in this kind of card. 


The final decision, nonetheless, was to show both. In addition, we took this opportunity to enrich the table and include more information from the Network tab, like subnet and VLAN.


Marketplace

The Marketplace team works closely with the Store team to develop and maintain the Snap Store site and the upcoming Charmhub site.

Kubernetes and cloud native operations report 2021

We recently ran a survey about Kubernetes and the cloud. Almost 1200 people have responded to it so far.

We built a report to show the results, insights and commentary from industry experts.


Read the report

List collaborations for Charmhub publishers

Charmhub publishers will now be able to see and modify charms or bundles when they are collaborators on those packages.


Store reviewer research

We have been doing a series of interviews to explore in more depth the role of a reviewer in a store, as part of our work to migrate the reviewer pages from dashboard.snapcraft.io to snapcraft.io. As an initial summary of our discoveries, we built a user flow that details the process reviewers have to go through when a manual review is needed, including some of the most common pain points in their flow. This will be the basis of the future migration and improvements to the functionality later in this cycle.


With ♥ from Canonical web team.

08 July, 2021 07:21AM

July 07, 2021

Ubuntu Blog: Moving toward Diátaxis

We discovered the Diátaxis Framework earlier this year. It’s been on our roadmap to shift the MAAS docs to this cool new way of explaining things. This cycle, we plan to make it happen.

You’d think it would be obvious….

Diátaxis is one of those ideas. Once you see it, you can’t figure out why everybody didn’t come up with this system. This framework divides documentation up into four buckets:

  • Explanations
  • How-to guides
  • Tutorials
  • Reference material

When writing, one shouldn’t mix the buckets. You should take the time to view the material for yourself — it will definitely help you document your own systems better.

Sorting the MAAS documentation into these buckets requires some clear delimiters. We’re currently taking a liberal approach, but here’s what we’ve come up with so far:

  • Explanations assume that the reader is familiar with common technical terms, like “DHCP” or “network bonding.” These sections will only explain MAAS terminology, features, usage, and nuances.
  • How-to guides should be a little more than just steps. We’ve tried these guides a bit with MAAS 3.0, discovering that it’s possible to make them too sparse. We’ll keep a reasonable amount of compact explanation in these sections, so that you don’t have to consult the explanations too often.
  • Tutorials will be full-on, newbie explanations, probably tiered with detail sections that let you skip the most basic topics, if desired. These will most likely be added to the Tutorial link, which is separate from the documentation.
  • Reference material should allow us to unburden a lot of the more technical pages of long, complicated listings and tables (think “commissioning scripts” and “power types,” for example). Since the documentation is a hyperlinked medium, it’s only legwork to link the various technical details from a reference page to the other types of material.

Along the way, we also plan to simplify the navigation column on the left-hand side of the docs page. Our main issues there are: (1) that the objects in the navigation aren’t all objects; (2) that they aren’t the same kinds of objects; and (3) that the navigation structures aren’t parallel constructs, which is really important for headings and lists. Readers have trouble finding context if you group things that aren’t parallel, like this:

  • Clean the garage.
  • There’s some mail lying on the counter that I don’t know what to do with.
  • What is all that stuff in the back of the fridge?
  • Dinner is at Suzie’s at eight.

And yet, our current menu has, over time, organically become like that overgrown to-do list. Time to fix that.

Speaking of context

By implementing Diátaxis, we’re hoping to improve the flow of documentation, which is just another way of saying that we don’t want to constantly change the reader’s context. In this case, context refers specifically to a mindset or a level of focus. Much computer documentation is terrible about pushing the reader’s context all over the place.

For example, if you’re focused on following some steps to create a new virtual machine and deploy it, you’re in what might be called a “get-it-done” context. You don’t want to have to read a detailed explanation of how something works when only about five percent of that deep reading is even important to you right now. And you don’t want to view a long list or table of things you could do. What you want at this moment is a set of concrete steps to get you from A to B, with just enough explanation to keep you from tripping over the setup.

On the other hand, when you’re in deep reading mode, you may or may not be sitting right by the system, and you may or may not want to try every operation before reading on. That type of discussion is more tutorial in nature. Similarly, when you do want to look at that long list of parameters, you’d like to avoid four-page, rambling, technical explanations of every possible nuance related to something that might be related to those parameters.

In other words, if we change your context too many times, you’ll stop reading and just try something. When that doesn’t work, you may scan the doc again once or twice, but pretty soon, you’ll be on our MAAS discourse page, asking for someone to decode it all for you. There’s nothing wrong with MAAS discourse — it’s a great forum, and lately, one of our team members is always tasked with patrolling it as a top priority. But around eighty-five percent of your information needs should be met by the documentation.

So that’s the latest from the MAAS doc world. You’re welcome to comment with your thoughts on documentation structure and format.

07 July, 2021 03:41PM

Full Circle Magazine: Full Circle Weekly News #217


Linux 5.13 kernel release:
https://lkml.org/lkml/2021/6/27/202

LTSM proposed:
https://github.com/AndreyBarmaley/linux-terminal-service-manager

Release of Mixxx 2.3, the free music mixing app:
http://mixxx.org/

Ubuntu is moving away from dark headers and light backgrounds:
https://github.com/ubuntu/yaru/pull/2922

Ultimaker Cura 4.10 released:
https://ultimaker.com/learn/an-improved-engineering-workflow-with-ultimaker-cura-4-10

Pop!_OS 21.04 distribution offers new COSMIC desktop:
https://system76.com/pop

SeaMonkey 2.53.8 Integrated Internet Application Suite Released:
https://www.seamonkey-project.org/news#2021-06-30

Suricata Intrusion Detection System Update:
https://suricata.io/2021/06/30/new-suricata-6-0-3-and-5-0-7-releases/

AlmaLinux includes support for ARM64:
https://wiki.almalinux.org/release-notes/8.4-arm.html

Qutebrowser 2.3 released:
https://lists.schokokeks.org/pipermail/qutebrowser-announce/2021-June/000104.html

Tux Paint 0.9.26 is released:
http://www.tuxpaint.org/latest/tuxpaint-0.9.26-press-release.php

Jim Whitehurst, head of Red Hat, steps down as president of IBM:
https://www.cnbc.com/quotes/IBM

OpenZFS 2.1 release with dRAID support
https://github.com/openzfs/zfs/releases/tag/zfs-2.1.0

Neovim 0.5, available:
https://github.com/neovim/neovim/releases/tag/v0.5.0

Audacity’s new privacy policy allows data collection for the benefit of government authorities:
https://news.ycombinator.com/item?id=27724389

AbiWord 3.0.5 update:
http://www.abisource.com/release-notes/3.0.5.phtml

 

Credits:
Full Circle Magazine
@fullcirclemag
Host: @bardictriad, @zaivala@hostux.social
Bumper: Canonical
Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

07 July, 2021 10:28AM

July 06, 2021

Ubuntu Blog: MAAS 3.0 released

We are happy to announce the release of MAAS 3.0. This release provides some new features and bug fixes. Here’s the tl;dr summary:

  • PCI and USB devices are now modelled in MAAS
  • PCI and USB device tabs are now available in machine details
  • IBM Z DPM partitions are supported for MAAS and virtual machines
  • Proxmox is now supported
  • LXD projects are now supported
  • Workload annotations have been added for run-time machine tagging
  • You can now register a machine as a VM host during deployment
  • You can now disable boot methods
  • There’s a detailed set of instructions for filing bugs, now
  • Some fixes and improvements were made to the MAAS CLI help, the status bar, and the presentation of logs and events
  • RSD pod support has been removed

Let’s take a close look at some of these changes, in no particular order.

API changes

With the advent of MAAS 3.0, we are removing support for RSD pods. Registered pods and their machines will be removed by MAAS upon upgrading to MAAS 3.0. Since we support semantic versioning for MAAS, this change prompted us to move to MAAS 3.0 (rather than issuing, say, a 2.10 release).

Consolidation of logs and events

The logs and events tabs have been combined and now live under “Logs”. In addition to a number of small improvements, navigating and displaying events has been made easier.

Downloading logs

A helpful new feature is the ability to download the machine and installation output. If a machine has failed deployment, you can now download a full tar of the curtin logs.

This change should make it easier to report bugs, as well, using the new bug reporting process defined during the 3.0 development cycle.

New bug reporting instructions

MAAS bugs are still reported via Launchpad, as always. Filing a good bug report, though, makes all the difference in how quickly we can triage and address your problem. The new how-to guide will walk you through the key steps of filing a usable bug.

We encourage you to follow these guidelines, as much as it makes sense, so that we can more quickly address your issues.

Disabling boot methods

Individual boot methods may now be disabled. When a boot method is disabled, MAAS will configure the MAAS-controlled isc-dhcpd not to respond to the associated boot architecture code. Note that external DHCP servers must be configured manually.

To allow different boot methods to be in different states on separate physical networks — using the same VLAN ID configuration — you must make the changes on the subnet in the UI or API. When using the API, boot methods to be disabled may be specified using the MAAS internal name or boot architecture code in octet or hex form. For example, the following command will disable i386/AMD64 PXE, AMD64 UEFI TFTP, and AMD64 UEFI HTTP:

maas $PROFILE subnet update $SUBNET disabled_boot_architectures="0x00 uefi_amd64_tftp 00:10"

Improvements to MAAS CLI help UX

The MAAS CLI will now give you help in more places, supporting a more exploration-based interaction. Specifically, we now show help for cases where the required arguments are not met.

Say you’re trying to find out how to list the details of a machine in MAAS, e.g.:

$ PROFILE=foo
$ maas login $PROFILE http://$MY_MAAS:5240/MAAS/ $APIKEY
$ maas $PROFILE
usage: maas $PROFILE [-h] COMMAND ...

Issue commands to the MAAS region controller at http://$MY_MAAS:5240/MAAS/api/2.0/.

optional arguments:
 -h, --help            show this help message and exit

drill down:
 COMMAND
   account             Manage the current logged-in user.
   bcache-cache-set    Manage bcache cache set on a machine.
   bcache-cache-sets   Manage bcache cache sets on a machine.

✂️--cut for brevity--✂️
   machine             Manage an individual machine.
   machines            Manage the collection of all the machines in the MAAS.
   node                Manage an individual Node.
   nodes               Manage the collection of all the nodes in the MAAS.
✂️--cut for brevity--✂️

too few arguments
$ maas $PROFILE node 
usage: maas $PROFILE node [-h] COMMAND ...

Manage an individual Node.

optional arguments:
 -h, --help        show this help message and exit

drill down:
 COMMAND
   details         Get system details
   power-parameters
                   Get power parameters
   read            Read a node
   delete          Delete a node

The Node is identified by its system_id.

too few arguments

$ maas $PROFILE node read
usage: maas $PROFILE node read [--help] [-d] [-k] system_id [data [data ...]]

Read a node

positional arguments:
 system_id
 data

optional arguments:
 --help, -h      Show this help message and exit.
 -d, --debug     Display more information about API responses.
 -k, --insecure  Disable SSL certificate check

Reads a node with the given system_id.

the following arguments are required: system_id, data
$ maas $PROFILE node read $SYSTEM_ID
{
   "system_id": "$SYSTEM_ID",
   "domain": {
       "authoritative": true,
       "ttl": null,
       "is_default": true,
       "id": 0,
       "name": "maas",
       "resource_record_count": 200,
       "resource_uri": "/MAAS/api/2.0/domains/0/"
✂️--cut for brevity--✂️

At each stage, the help output gives us clues about the next step, until we finally arrive at a complete CLI command.

Registering a machine as a VM host during deployment

When deploying a machine through the API, it’s now possible to specify:

register_vmhost=True 

to have LXD configured on the machine and registered as a VM host in MAAS, similar to what happens with virsh if “install_kvm=True” is provided.
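
For example, deploying via the MAAS CLI with this flag might look like the following sketch (assuming $PROFILE and $SYSTEM_ID are set as in the CLI examples above):

maas $PROFILE machine deploy $SYSTEM_ID register_vmhost=true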

PCI and USB devices are now modelled in MAAS

MAAS 3.0 models all PCI and USB devices detected during commissioning:

  • Existing machines will have to be recommissioned to have PCI and USB devices modelled
  • PCI and USB devices are shown in the UI and on the API using the node-devices endpoint
  • Node devices may be deleted on the API only

Using the allocate operation on the machines endpoint, a machine may be allocated by device vendor_id, product_id, vendor_name, product_name, or commissioning_driver.
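
As a rough sketch (the exact filter syntax below is our assumption, not taken verbatim from the release notes), you might list a machine’s detected devices and then allocate a machine by device vendor like so:

maas $PROFILE node-devices read $SYSTEM_ID
maas $PROFILE machines allocate devices="vendor_id=8086"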

IBM Z DPM partition support

Partitions hosted on IBM Z14 GA2 (LinuxONE II) and newer in DPM mode are now supported by MAAS 3.0. Note that partitions (LPARs) must be pre-configured, must use qeth-based network devices (like HiperSockets or OSA adapters), and must have properly-defined (FCP) storage groups. IBM Z DPM partitions can be added as a chassis, which allows you to add multiple partitions at once.
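
As a hedged sketch (the chassis_type value and credential parameters here are assumptions modelled on the generic add-chassis call, so check the MAAS documentation for the exact form), adding DPM partitions as a chassis might look like:

maas $PROFILE machines add-chassis chassis_type=hmcz hostname=$HMC_ADDRESS username=$HMC_USER password=$HMC_PASS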

Proxmox support

MAAS 3.0 supports Proxmox as a power driver:

  • Only Proxmox VMs are supported
  • You may authenticate with Proxmox using a username and password or a username and API token
  • If an API token is used, it must be given permission to query, start and stop VMs.
  • Proxmox VMs can be added as a chassis; this allows you to add all VMs in Proxmox at once (see the example below).

Note that Proxmox support has also been back-ported to MAAS 2.9.
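
For example, adding all Proxmox VMs as a chassis with token authentication might look like the sketch below (the parameter names are assumptions based on the generic add-chassis call):

maas $PROFILE machines add-chassis chassis_type=proxmox hostname=$PROXMOX_HOST username=$USER token_name=$TOKEN_NAME token_secret=$TOKEN_SECRET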

LXD projects support

MAAS 3.0 supports the use of LXD projects:

  • LXD VM hosts registered in MAAS are now tied to a specific LXD project which MAAS uses to manage VMs
  • MAAS doesn’t create or manage machines for VMs in other projects
  • MAAS creates the specified project when the VM host is registered, if it doesn’t already exist
  • All existing VMs in the specified project are commissioned on registration
  • Resource usage is reported at both project and global levels
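
A minimal sketch of registering an LXD VM host against a specific project might look like this (the endpoint and parameter names are assumptions; earlier releases exposed VM hosts via the pods endpoint):

maas $PROFILE vm-hosts create type=lxd power_address=https://$LXD_HOST:8443 project=$PROJECT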

PCI and USB device tabs in UI machine details

Tables for detected PCI and USB devices have been added to the machine details page for MAAS 3.0:

[Screenshot: PCI and USB device tables on the machine details page]

These tables include a new skeleton loading state while node devices are being fetched:

[Screenshot: skeleton loading state while node devices are fetched]

The user is prompted to commission the machine if no devices are detected.

Workload annotations

Workload annotations have been added to the machine summary page in MAAS 3.0. These allow you to apply owner_data to a machine and make it visible while the machine is in allocated or deployed state:

[Screenshot: workload annotations on the machine summary page]

This data is cleared once the machine state changes to something other than “allocated” or “deployed.” The machine list can be filtered by these workload annotations. MAAS will warn you on the release page to remind you that workload annotations will be cleared upon releasing the machine.
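
Since workload annotations are surfaced as owner_data, a sketch of setting and clearing one from the CLI on an allocated machine might look like this (the key name is just an example, and we believe an empty value removes the key):

maas $PROFILE machine set-owner-data $SYSTEM_ID workload=db-primary
maas $PROFILE machine set-owner-data $SYSTEM_ID workload=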

Fixed status bar

In MAAS 3.0, a fixed status bar has been added to the bottom of the screen, which will always display the MAAS name and version on the left. The right side of the status bar is intended to show contextual data, depending on the UI panel currently displayed. For now, the only data shown is a “last commissioned” timestamp when the user is on a machine details page:

[Screenshot: fixed status bar at the bottom of the screen]

Bug fixes and more

MAAS 3.0 incorporates a large number of bug fixes, along with some additional minor features. See the Release notes for the full list, as well as installation instructions.

06 July, 2021 08:59PM

Ubuntu Blog: Finserv open source infrastructure powers digital transformation

[Photo by Ross Findon on Unsplash]

The Covid-19 pandemic has presented unprecedented challenges and opportunities for financial institutions to embrace digital transformation initiatives at pace and scale. Finservs are expanding the scope of their digital transformation initiatives to stay relevant and to create a technology foundation that lets them bounce back quickly from future contingencies.

Finserv digital transformation is spurred by technology, and the leading technologies driving that change are open source. It is fair to say that open source plays a key role in digital transformation.

Financial institutions require a comprehensive portfolio of digital infrastructure and interconnection choices, both physical and virtual, and a wide range of cloud and SaaS options to deliver that change.

Finserv digital infrastructure

Despite the growth in public cloud computing, financial institutions often need to use a combination of public and private (on-prem) clouds. Often overlooked in the hype around public cloud, private clouds offer greater flexibility, security and compliance.

Financial institutions will need to leverage the right mix of cloud services (a hybrid cloud strategy) to maximise application performance while onboarding innovative new capabilities. A hybrid cloud provides orchestration, management, and application portability between public and private clouds to create a single, flexible, optimal cloud infrastructure for running a financial institution’s computing workloads.

Using cost-effective open source private cloud infrastructure, and placing workloads on public clouds with consideration for application performance, security and compliance, economics, and consumption model, will allow financial institutions to optimise their CapEx and OpEx costs.

OpenStack is the de facto standard for building open source private clouds. OpenStack sits at the centre of the open source infrastructure stack, providing an interface to the virtualisation stack, SDN and SDS. It allows VMs to be provisioned on demand from a self-service portal and allocates the required resources through the underlying platforms, all managed and provisioned through APIs with common authentication mechanisms.

Beyond standard infrastructure-as-a-service functionality, OpenStack provides additional open source components for orchestration, fault management and service management amongst other services to ensure high availability of enterprise applications.

OpenStack for financial services

OpenStack provides a complete ecosystem for building private clouds. Built from multiple sub-projects as a modular system, OpenStack allows financial institutions to build out a scalable private (or hybrid) cloud architecture that is based on open standards.

OpenStack enables application portability among private and public clouds, allowing financial institutions to choose the best cloud for their applications and workflows at any time, without lock-in. It can also be integrated with a variety of key business systems such as Active Directory and LDAP.

OpenStack software provides a solution for delivering infrastructure as a service (IaaS) to end users through a web portal and provides a foundation for layering on additional cloud management tools. These tools can be used to implement higher levels of automation and to integrate analytics-driven management applications for optimizing cost, utilization and service levels.

OpenStack software provides support for improving service levels across all workloads and for taking advantage of the high availability capabilities built into cloud-aware applications.

Charmed OpenStack

Charmed OpenStack is an enterprise grade OpenStack distribution that leverages MAAS, Juju, and the OpenStack charmed operators to simplify the deployment and management of an OpenStack cloud.

Canonical’s Charmed OpenStack ensures private cloud price-performance, providing full automation around OpenStack deployments and operations. Together with Ubuntu, it meets the highest security, stability and quality standards in the industry.

Frictionless open source cloud infrastructure

OpenStack gives financial institutions the ability to seamlessly move workloads from one cloud to another, whether private or public. It also accelerates time-to-market by giving financial institutions’ business units a self-service portal to access necessary resources on demand, and an API-driven platform for developing cloud-aware apps.

One does need to acknowledge that OpenStack is a growing software ecosystem consisting of various interconnected components. In order to provide a frictionless open source private cloud infrastructure to financial institutions, Canonical offers fully managed services so that finservs can focus on building innovative applications and not worry about infrastructure build and maintenance. Canonical’s managed OpenStack provides 24×7 cloud monitoring, daily maintenance, regular software updates, OpenStack upgrades and more. 

We are always here to discuss your cloud computing needs and to help you successfully execute your hybrid cloud strategy.  

Get in touch

Photo by Ross Findon on Unsplash

06 July, 2021 05:12PM

hackergotchi for Purism PureOS

Purism PureOS

Purism and Linux 5.13

Following up on our report for Linux 5.12 this summarizes the progress on mainline support for the Librem 5 phone and its development kit during the 5.13 development cycle.

The post Purism and Linux 5.13 appeared first on Purism.

06 July, 2021 02:49PM by Martin Kepplinger

hackergotchi for VyOS

VyOS

VyOS 1.2.8 and VyOS 1.3.0-rc5 are available

In this post, we announce not one, but two releases at once.

First, VyOS 1.2.8 LTS release is available to subscribers and everyone is welcome to build their own images.

Second, a VyOS 1.3.0-rc5 release candidate is also available for download for everyone, and we invite everyone to test it on lab VMs with your production configs.

06 July, 2021 12:47PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Lazarus campaign TTPs and evolution

Executive summary: AT&T Alien Labs™ has observed new activity that has been attributed to the Lazarus adversary group, potentially targeting engineering job candidates and/or employees in classified engineering roles within the U.S. and Europe. This assessment is based on malicious documents believed to have been delivered by Lazarus during the last few months (spring 2021). However, historical analysis shows the lures used in this campaign to be in line with others used to target these groups. The purpose of this blog is to share the new technical intelligence and provide detection options for defenders. Alien Labs will continue to report on any noteworthy changes.

Key takeaways:

  • Lazarus has been identified targeting defense contractors with malicious documents.
  • There is a high emphasis on renaming system utilities (Certutil and Explorer) to obfuscate the adversary’s activities (T1036.003).

Background: Since 2009, the known tools and capabilities believed to have...

Posted by: Fernando Martinez

Read full post

06 July, 2021 10:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu in the wild – 06th of July

The Ubuntu in the wild blog post rounds up the latest highlights about Ubuntu and Canonical around the world on a bi-weekly basis. It is a summary of all the things that made us feel proud to be part of this journey. What do you think of it?


Kubernetes and Cloud Native survey

Canonical recently published our 2021 Kubernetes and Cloud Native survey. With over 1,200 respondents answering 50+ questions, it is a very comprehensive survey of the current landscape. A couple of news outlets gathered the main highlights in their synthesis articles, so if you don’t have the time to go over the full study, you should check them out!

Read more on that here! 

Or read more on that there!

Or there!

RISC-V ecosystem: a new path to market

The collaboration between Canonical and SiFive is only just beginning. With the recent announcement of Ubuntu being brought to SiFive hardware, the two companies are looking forward to helping developers prototype more stable and secure solutions, as well as providing them with an easy path to market within the RISC-V ecosystem.

Read more on that here!

Or read more on that there!

Ceph Market Development Group

The Ceph Foundation announced the formation of the Ceph Market Development Group, a group dedicated to showcasing how Ceph, an open-source unified and distributed storage system, can help enterprises manage increasing amounts of data.

Learn more on that here!

Bonus: Ubuntu Community Hours

If you are looking for a place to hang out and discuss the latest Ubuntu news, hear fun stories from community members, or just interact with the community in general, you can check out the Ubuntu OnAir YouTube channel and tune in for the Community Office Hours and Indabas!

Check out the latest videos here!

06 July, 2021 09:52AM