November 29, 2023

ARMBIAN

Armbian 23.11

Armbian 23.11 Topi

We’re excited to announce the latest Armbian release, v23.11! This update comes with a plethora of changes, making the Armbian experience even better.

A Decade of Advancement in Single Board Computer Support

For a decade, Armbian Linux has been at the forefront of advancing and supporting the diverse single board computers (SBC) landscape. With 16 point releases under our belt since the adoption of this methodology, we’re continually enhancing our offerings.

Without further ado, here are the key highlights of this release!

Evolving Support Policies

In our continual quest for improvement, we are refining our support policies to ensure better reliability and comprehensive assistance for select boards:

  • Standard Support: Boards receiving comprehensive, reliable support.
  • Staging Support: Boards undergoing rigorous support validation.
  • Community Maintained: Boards benefiting from the strength of community support.

More info can be found here.

Boards Elevated to Standard Support

We are pleased to add the following boards to our Standard Support tier:

  • Khadas VIM1S
  • Khadas VIM4
  • Texas Instruments TDA4VM
  • Xiaomi Pad 5 Pro

Key Improvements in This Release

  • Numerous bug fixes improving functionality for the Banana Pi CM4.
  • Mainline Kernel for RK3588 with experimental HDMI support.
  • Fixed Display Managers across all desktops.
  • Experimental EDK2/UEFI Support for RK3588 boards.
  • Introducing Ubuntu Mantic and Debian Trixie as daily image builds.
  • Enhancing quality control through automated tests.

Highlights of Completed Actions

Closed Projects

In this version, we’ve successfully closed several projects, including switching the default login manager, enabling artifact creation on pull requests, and adding support for the HiKey 960. Additionally, we’ve updated the edge kernel to v6.6 and introduced new Armbian wallpapers. Support for various boards, such as the NanoPi R6S/R6C, TI SK-TDA4VM, Xiaomi-elish, and more, has been added, enhancing the range of compatible devices.

Closed Tasks

Numerous tasks have been completed, ranging from removing vendor-specific patches to adding support for different boards like Tanix TX6, Inovato Quadra, and Mekotronics R58X-Pro. Improvements have been made for specific kernels, such as cleaning up EOL kernels for Rockchip64, updating kernel configs for Waydroid and Redroid support, and enabling Bluetooth support for VIM1S/VIM4.

Solved Bugs

This release addresses various bugs and issues, ensuring a smoother user experience. Bug fixes include resolving errors with specific commands, fixing compilation issues for different kernels, addressing display output problems, and enhancing hardware support for several boards, such as Orange Pi 3 LTS, LicheePi 4A, and Khadas Vim1s.

The complete list of actions can be found here.

Remarkable Contributors, Supporters and Partners

We extend our heartfelt appreciation to the individuals who have contributed immensely to the growth and success of Armbian.

Remarkable Contributors: @adeepn, @amazingfate, @belegdol, @brentr, @chainsx, @efectn, @EvilOlaf, @ginkage, @glneo, @hzyitc, @igorpecovnik, @Kreyren, @marcone, @msdos03, @paolosabatino, @prahal, @pyavitz, @rpardini, @schwar3kat, @tdleiyao, @Tonymac32, @viraniac

Support Staff: Didier, Lanefu, Rafal, Adam, Werner, and many others have dedicated their expertise and time to provide support and guidance.

We also extend our gratitude to our esteemed partners. Find out more about them here.

Your contributions and support are invaluable in shaping the Armbian community and its success.

Thank you for being an essential part of the Armbian community!

The Armbian team

29 November, 2023 05:19PM by Didier Joomun

Ubuntu developers

Jonathan Riddell: KDiagram 3.0.0

KDiagram is two powerful libraries (KChart, KGantt) for creating business diagrams.

Version 3.0.0 is now available for packaging.

It moves KDiagram to use Qt 6. It is co-installable with previous Qt 5 versions and distros may want to package both alongside each other for app compatibility.

URL: https://download.kde.org/stable/kdiagram/3.0.0/
SHA256: 6d5f53dfdd019018151c0193a01eed36df10111a92c7c06ed7d631535e943c21

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

29 November, 2023 04:05PM

Jonathan Riddell: KWeatherCore 0.8.0

KWeatherCore is a library to facilitate retrieval of weather information including forecasts and alerts.

0.8.0 is available for packaging now.

URL: https://download.kde.org/stable/kweathercore/0.8.0/
SHA256: 9bcac13daf98705e2f0d5b06b21a1a8694962078fce1bf620dbbc364873a0efe
Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

This release moves the library to use Qt 6. It is not compatible with older Qt 5 versions of the library so should only be packaged when KWeather is released or in testing archives.

29 November, 2023 03:56PM

Ubuntu Blog: Generative AI explained

When OpenAI released ChatGPT on November 30, 2022, no one could have anticipated that the following 6 months would usher in a dizzying transformation for human society with the arrival of a new generation of artificial intelligence. Since the emergence of deep learning in the early 2010s, artificial intelligence has entered its third wave of development. The introduction of the Transformer algorithm in 2017 propelled deep learning into the era of large models. OpenAI established the GPT family based on the Decoder part of the Transformer.

ChatGPT quickly gained global popularity, astonishing people with its ability to engage in coherent and deep conversations, while also revealing capabilities such as reasoning and logical thinking that reflect intelligence. Alongside the continuous development of AI pre-training with large models, ongoing innovation in Artificial Intelligence Generated Content (AIGC) algorithms, and the increasing mainstream adoption of multimodal AI, Generative AI technologies represented by ChatGPT accelerated as the latest direction in AI development. This acceleration is driving the next era of significant growth and prosperity in AI, poised to have a profound impact on economic and social development. CEOs may find detailed advice for adopting Gen AI in my recently published article in Harvard Business Review – What CEOs Need to Know About the Costs of Adopting GenAI.

Definition and Background of Generative AI Technology

Generative AI refers to the production of content through artificial intelligence technology. It involves training models to generate new content that resembles the training data. In contrast to traditional AI, which mainly focuses on recognizing and predicting patterns in existing data, Generative AI emphasizes creating new, creative data. Its key principle lies in learning and understanding the distribution of data, leading to the generation of new data with similar features. This technology finds applications in various domains such as images, text, audio, and video. Among these applications, ChatGPT stands out as a notable example. ChatGPT, a chatbot application developed by OpenAI based on the GPT-3.5 model, gained massive popularity. Within just two months of its release, it garnered over 100 million monthly active users, surpassing the growth rates of all historical consumer internet applications. Generative AI technologies, represented by large language models and image generation models, have become platform-level technologies for the new generation of artificial intelligence, contributing to a leap in value across different industries.
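
To make that principle concrete, here is a minimal sketch (ours, not from the original article) of "learning a distribution, then sampling new data from it", using NumPy on a toy Gaussian dataset; the means and covariances are arbitrary:

import numpy as np

# Toy "training data": 1,000 two-dimensional points.
rng = np.random.default_rng(seed=0)
data = rng.multivariate_normal(mean=[2.0, -1.0],
                               cov=[[1.0, 0.6], [0.6, 2.0]], size=1000)

# "Learn" the distribution: estimate its mean and covariance.
mean = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

# "Generate": draw brand-new points that share the learned statistics.
print(rng.multivariate_normal(mean, cov, size=5))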

The explosion of Generative AI owes much to developments in three AI technology domains: generative algorithms, pre-training models, and multimodal technologies.

Generative Algorithms: With the constant innovation in generative algorithms, AI is now capable of generating various types of content, including text, code, images, speech, and more. This marks a transition from Analytical AI, which focuses on analyzing, judging, and predicting existing data patterns, to Generative AI, which deduces and creates entirely new content based on learned data.

Pre-training Models: Pre-training models, or large models, have significantly transformed the capabilities of Generative AI technology. Unlike in the past, when researchers had to train AI models separately for each task, pre-trained large models have generalized Generative AI and elevated its industrial applications. These large models have strong language understanding and content generation capabilities.

Multimodal AI Technology: Multimodal technology enables Generative AI models to generate content across various modalities, such as converting text into images or videos. This enhances the versatility of Generative AI models.

Foundational technologies of Generative AI

Generative Adversarial Networks (GANs): GANs, introduced in 2014 by Ian Goodfellow and his team, are a form of generative model. They consist of two components: the Generator and the Discriminator. The Generator creates new data, while the Discriminator assesses the similarity between the generated data and real data. Through iterative training, the Generator becomes adept at producing increasingly realistic data.
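
As a rough illustration of that adversarial setup (a toy sketch, not Goodfellow's implementation), here is a schematic GAN training loop in PyTorch on 1-D data; the network sizes, learning rates and "real" dataset are arbitrary stand-ins:

import torch
import torch.nn as nn

# Toy GAN on 1-D data: G maps noise to samples, D scores how real they look.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
real = torch.randn(64, 1) * 0.5 + 3.0   # stand-in "real" dataset

for step in range(1000):
    # 1. Train the Discriminator to tell real samples from generated ones.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2. Train the Generator to fool the Discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()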

Variational Autoencoders (VAEs): VAEs are a probabilistic generative method. They leverage an Encoder and a Decoder to generate data. The Encoder maps input data to a distribution in a latent space, while the Decoder samples data from this distribution and generates new data.
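
A minimal PyTorch sketch may help picture the Encoder/Decoder split and the sampling step in between; the dimensions are arbitrary and the code is illustrative only, not any particular published model:

import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: Encoder -> latent Gaussian -> Decoder."""
    def __init__(self, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 64)
        self.mu = nn.Linear(64, latent)       # mean of the latent distribution
        self.logvar = nn.Linear(64, latent)   # log-variance of the latent distribution
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample z while keeping gradients flowing.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

vae = TinyVAE()
x = torch.rand(32, 784)                       # stand-in input batch
recon, mu, logvar = vae(x)
# Loss = reconstruction error + KL divergence to the unit Gaussian prior.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum") + kl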

Recurrent Neural Networks (RNNs): RNNs are neural network architectures designed for sequential data processing. They possess memory capabilities to capture temporal information within sequences. In generative AI, RNNs find utility in generating sequences such as text and music.

Transformer Models: The Transformer architecture relies on a Self-Attention mechanism and has achieved significant breakthroughs in natural language processing. It’s applicable in generative tasks, such as text generation and machine translation.
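
The Self-Attention mechanism at the heart of the Transformer can be sketched in a few lines of NumPy (a toy, single-head version with randomly initialised weights, purely for illustration):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V   # each position mixes information from all positions

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)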

Applications and Use Cases of Generative AI

Text Generation

Natural language generation is a key application of Generative AI, capable of producing lifelike natural language text. Generative AI can compose articles, stories, poetry, and more, offering new creative avenues for writers and content creators. Moreover, it can enhance intelligent conversation systems, elevating the interaction experience between users and AI.

ChatGPT (short for Chat Generative Pre-trained Transformer) is an AI chatbot developed by OpenAI, introduced in November 2022. It employs a large-scale language model based on the GPT-3.5 architecture and has been trained using reinforcement learning. Currently, ChatGPT engages in text-based interactions and can perform various tasks, including automated text generation, question answering, and summarization.

Image Generation

Image generation stands as one of the most prevalent applications within Generative AI. Stability AI has unveiled the Stable Diffusion model, significantly reducing the technical barriers for AI-generated art through open-source rapid iteration. Consumers can subscribe to their product DreamStudio to input text prompts and generate artworks. This product has attracted over a million users across 50+ countries worldwide.

Audio-Visual Creation and Generation

Generative AI finds use in speech synthesis, generating realistic speech. For instance, generative models can create lifelike speech by learning human speech characteristics, suitable for virtual assistants, voice translation, and more. AIGC is also applicable to music generation. Generative AI can compose new music pieces based on given styles and melodies, inspiring musicians with fresh creative ideas. This technology aids musicians in effectively exploring combinations of music styles and elements, suitable for music composition and advertising music.

Film and Gaming

Generative AI can produce virtual characters, scenes, and animations, enriching creative possibilities in film and game production. Additionally, AI can generate personalized storylines and gaming experiences based on user preferences and behaviors.

Scientific Research and Innovation

Generative AI can explore new theories and experimental methods in fields like chemistry, biology, and physics, aiding scientists in discovering new knowledge. Additionally, it can accelerate technological innovation and development in domains like drug design and materials science.

Code Generation Domain

Having been trained on natural language and billions of lines of code, certain generative AI models are proficient in multiple programming languages, including Python, JavaScript, Go, Perl, PHP, Ruby, and more. They can generate corresponding code based on natural language instructions.

GitHub Copilot, a collaboration between GitHub and OpenAI, is an AI code generation tool. It provides code suggestions based on names and the surrounding code context. It has been trained on billions of lines of code from publicly available repositories on GitHub, supporting most programming languages.

Content Understanding and Analysis

Bloomberg recently released a large language model (LLM) named BloombergGPT tailored for the financial sector. Similar to ChatGPT, it employs Transformer models and large-scale pre-training techniques for natural language processing, boasting 50 billion parameters. BloombergGPT’s pre-training dataset mainly comprises news and financial data from Bloomberg, constructing a dataset with 363 billion tokens, supporting various financial industry tasks.

BloombergGPT aims to enhance users’ understanding and analysis of financial data and news. It generates finance-related natural language text based on user inputs, such as news summaries, market analyses, and investment recommendations. Its applications span financial analysis, investment consulting, asset management, and more. For instance, in asset management, it can predict future stock prices and trading volumes based on historical data and market conditions, providing investment recommendations and decision support for fund managers. In financial news, BloombergGPT automatically generates news summaries and analytical reports based on market data and events, delivering timely and accurate financial information.

AI Agents

In April 2023, an open-source project named AutoGPT was released on GitHub. As of April 16, 2023, the project had garnered over 70K stars. AutoGPT is powered by GPT-4 and is capable of autonomously achieving any user-defined goals. When presented with a task, AutoGPT autonomously analyzes the problem, proposes an execution plan, and carries it out until the user’s requirements are met.

Apart from standalone AI Agents, there’s the possibility of a ‘Virtual AI Society’ composed of multiple AI agents. Generative Agents, as explored in the paper “Generative Agents: Interactive Simulacra of Human Behavior” by Stanford University and Google, successfully constructed a ‘virtual town’ where 25 intelligent agents coexist.

Leading business consulting firms predict that by 2030, the generative AI market size will reach $110 billion.

Operations of Gen AI

Operating GenAI involves a comprehensive approach that spans the entire lifecycle of GenAI models, from development to deployment and ongoing maintenance. It encompasses various aspects, including data management, model training and optimization, model deployment and monitoring, and continuous improvement. GenAI MLOps is an essential practice for ensuring the success of GenAI projects. By adopting MLOps practices, organizations can improve the reliability, scalability, maintainability, and time-to-market of their GenAI models.

Canonical’s MLOps presents a comprehensive open-source solution, seamlessly integrating tools like Charmed Kubeflow, Charmed MLFlow, and Charmed Spark. This approach liberates professionals from grappling with tool compatibility issues, allowing them to concentrate on modeling. Charmed Kubeflow serves as the core of an expanding ecosystem, collaborating with other tools tailored to individual user requirements and validated across diverse platforms, including any CNCF-compliant K8s distribution and various cloud environments. Orchestrated through Juju, an open-source software operator, Charmed Kubeflow facilitates deployment, integration, and lifecycle management of applications at any scale and on any infrastructure. Professionals can selectively deploy necessary components from the bundle, reflecting the composability of Canonical’s MLOps tooling, an essential aspect when implementing machine learning in diverse environments. For instance, while Kubeflow comprises approximately 30 components, deploying just three (Istio, Seldon, and MicroK8s) suffices when operating at the edge due to distinct requirements for edge and scalable operations.

29 November, 2023 01:54PM

GreenboneOS

Supposedly pro-Russian hackers try to exploit Sharepoint vulnerability

A critical vulnerability in Microsoft SharePoint (CVE-2023-29357) is being targeted by presumably pro-Russian attackers.

The Internet Storm Center has discovered corresponding activity on its honeypots. The severity of this vulnerability is critical (a score of 9.8 out of 10) and the attack complexity is very low, making it particularly dangerous. Greenbone customers benefit from automatic detection of this vulnerability in our Enterprise Feed. Microsoft has offered a security update since June 12, 2023; customers who missed it should install it now.


29 November, 2023 11:42AM by Elmar Geese

November 28, 2023

Ubuntu developers

Ubuntu Blog: Real-time Linux: a comprehensive guide

In the mission-critical workloads of modern enterprises, where time boundaries and determinism can make a major difference, the demand for real-time systems has never been more urgent. In our latest whitepaper, we delve into the realm of real-time Linux. The whitepaper is a comprehensive guide to understanding and unlocking the potential of real-time Linux to meet the stringent demands of industrial workloads.

The Essence of Real-Time Systems

At its core, a real-time system guarantees the execution of high-priority processes with deterministic response times. This means that enterprises can confidently run their most demanding workloads, knowing that there is an upper time bound for mission-critical latency requirements. Real-time systems are the bedrock for applications where timing is everything, ensuring that tasks are executed within specified deadlines, without compromise.

Real-Time Linux: The De-Facto Approach

On the software side, real-time Linux, powered by the Linux kernel, is emerging as the go-to solution for enterprises seeking determinism. The integration of the PREEMPT_RT patches makes Ubuntu more preemptive than mainline Linux and delivers industrial-grade performance. This approach is backed by robust support for hardware devices and peripherals, making Real-time Ubuntu a compelling choice for a wide range of applications.
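
To give a feel for what this means in practice, here is a small sketch (our illustration, not taken from the whitepaper) that asks Linux for the SCHED_FIFO real-time scheduling class and then measures sleep wake-up overshoot; it requires root or CAP_SYS_NICE, and on a PREEMPT_RT kernel the worst-case figure drops sharply:

import os
import time

# Request the SCHED_FIFO real-time class at priority 50.
# Linux-only; needs root or CAP_SYS_NICE. On a PREEMPT_RT kernel,
# a thread like this preempts almost everything else on its CPU.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

# Crude latency probe: sleep 1 ms and record the worst wake-up overshoot.
worst = 0.0
for _ in range(1000):
    t0 = time.monotonic()
    time.sleep(0.001)
    worst = max(worst, time.monotonic() - t0 - 0.001)
print(f"worst wake-up overshoot: {worst * 1e6:.0f} microseconds")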

Inside the Whitepaper

Are you an engineer evaluating options for executing threads within specified deadlines? Or an executive keen to understand how real-time Linux aligns with modern market trends? This whitepaper is for you. Here’s a sneak peek into what you’ll discover:

  1. Achieving Deterministic Performance: Learn how to achieve deterministic performance for time-sensitive applications, empowering you to meet the most exacting requirements.
  2. Key Applications and Sectors: Explore the diverse applications and sectors where real-time Linux can make a transformative impact. From manufacturing to finance, discover how real-time systems can elevate performance across industries.
  3. PREEMPT_RT Patches Demystified: Gain a deep understanding of how the PREEMPT_RT patches work to enhance the preemptive nature of the Linux kernel. Uncover the technical details that drive industrial-grade performance.

Accelerate Your Software-Defined Transformation

By running Real-time Ubuntu, you can benefit from a 10-year security update commitment, ensuring that your systems stay robust and secure throughout your software-defined transformation. We invite engineers, executives, and decision-makers to learn how Real-time Ubuntu can propel your enterprise into the next era of computing, be a catalyst for deterministic response times, and provide a future-ready infrastructure.

28 November, 2023 02:00PM

Tails

Tails 5.20

Changes and updates

  • Update Tor Browser to 13.0.4.

  • Update Thunderbird to 115.5.0.

  • Stop downloading the AdGuard filter list for uBlock Origin in the language of the session.

    This prevents some advanced browser fingerprinting. (#20022)

Fixed problems

Since many of you are still reporting issues with the new Persistent Storage, we are releasing several improvements to the Persistent Storage and the WhisperBack error reporting tool:

  • Fix an error when activating the Persistent Storage. (#20011)

  • Fix the translation of the WhisperBack interface. (#20040)

  • Improve the interface of WhisperBack to make it easier to report the information we need to troubleshoot issues. (#19351)

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 5.20

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 5.0 or later to 5.20.

    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.20 directly:

28 November, 2023 09:22AM

November 27, 2023

Ubuntu developers

Stéphane Graber: Announcing Incus 0.3

Another month, another Incus release!
Incus 0.3 is now out, featuring OpenFGA support, a lot of improvements to our migration tool and support for hot-plug/hot-remove of shared paths in virtual machines.

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

Finally just a quick reminder that my company is now offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship.
You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

27 November, 2023 09:34PM

The Fridge: Ubuntu Weekly Newsletter Issue 815

Welcome to the Ubuntu Weekly Newsletter, Issue 815 for the week of November 19 – 25, 2023. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meetup/Workshop in Africa – A Resounding Success!
  • UbuCon Asia 2023 & Ubuntu Summit 2023 Recap Seminar
  • Ubuntu 23.10 Release Party + InstallFest Review
  • Ubuntu 23.10 Release Party + InstallFest in Busan is successfully completed!
  • LoCo Events
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

27 November, 2023 09:27PM

November 26, 2023

Stéphane Graber: Containers and kernel devrooms at FOSDEM 2024

As has become a bit of a tradition by now, I’ll be attending FOSDEM 2024 in Brussels, Belgium on the weekend of the 3-4th of February 2024.

I’m once again one of the organizers of the containers devroom, a devroom we’ve been running for over 5 years now. And on top of that, I will also help organize the kernel devroom. This is going to be our second year for this devroom after a very successful first year in 2023!

The CFPs for both devrooms are currently still open with a submission deadline of December 10th:

If you have anything that’s containers- or kernel-related, please send it in; we have a variety of time slot lengths to accommodate anything from a short demo to a full-size talk.

But those are just two of a lot of different devrooms running over the weekend, you can find a full list here along with all the CFP links.

See you in Brussels!

PS: A good chunk of the LXC/Incus team is going to be attending, so let us know if you want to chat and we’ll try to find some time!

26 November, 2023 03:00PM

Qubes

Qubes OS 4.2.0-rc5 is available for testing

We’re pleased to announce that the fifth release candidate (RC) for Qubes OS 4.2.0 is now available for testing. The ISO and associated verification files are available on the downloads page. For more information about the changes included in this version, see the full list of issues completed between RC4 and RC5 and the Qubes OS 4.2.0 release notes.

When is the stable release?

That depends on the number of bugs discovered in this RC and their severity. As explained in our release schedule documentation, our usual process after issuing a new RC is to collect bug reports, triage the bugs, and fix them. This usually takes around five weeks, depending on the bugs discovered. If warranted, we then issue a new RC that includes the fixes and repeat the whole process again. We continue this iterative procedure until we’re left with an RC that’s good enough to be declared the stable release. No one can predict, at the outset, how many iterations will be required (and hence how many RCs will be needed before a stable release), but we tend to get a clearer picture of this with each successive RC, which we share in this section in each RC announcement. Here is the latest update:

At this point, we are hopeful that RC5 will be the final RC.

Testing Qubes 4.2.0-rc5

Thank you to everyone who tested the previous Qubes 4.2.0 RCs! Due to your efforts, this new RC includes fixes for several bugs that were present in the previous RCs.

If you’re willing to test this new RC, you can help us improve the eventual stable release by reporting any bugs you encounter. We encourage experienced users to join the testing team.

A full list of issues affecting Qubes 4.2.0 is available here. We strongly recommend updating Qubes OS immediately after installation in order to apply all available bug fixes.

Upgrading to Qubes 4.2.0-rc5

If you’re currently running any Qubes 4.2.0 RC, you can upgrade to the latest RC by updating normally. However, please note that there have been some recent template changes, which are detailed in the Qubes OS 4.2.0 release notes.

If you’re currently on Qubes 4.1 and wish to test 4.2, please see how to upgrade to Qubes 4.2, which details both clean installation and in-place upgrade options. As always, we strongly recommend making a full backup beforehand.

Reminder: new signing key for Qubes OS 4.2

As a reminder, we published the following special announcement in Qubes Canary 032 on 2022-09-14:

We plan to create a new Release Signing Key (RSK) for Qubes OS 4.2. Normally, we have only one RSK for each major release. However, for the 4.2 release, we will be using Qubes Builder version 2, which is a complete rewrite of the Qubes Builder. Out of an abundance of caution, we would like to isolate the build processes of the current stable 4.1 release and the upcoming 4.2 release from each other at the cryptographic level in order to minimize the risk of a vulnerability in one affecting the other. We are including this notice as a canary special announcement since introducing a new RSK for a minor release is an exception to our usual RSK management policy.

As always, we encourage you to authenticate this canary by verifying its PGP signatures. Specific instructions are also included in the canary announcement.

As with all Qubes signing keys, we also encourage you to authenticate the new Qubes OS Release 4.2 Signing Key, which is available in the Qubes Security Pack (qubes-secpack) as well as on the downloads page under the Qubes OS 4.2.0-rc5 ISO.

What is a release candidate?

A release candidate (RC) is a software build that has the potential to become a stable release, unless significant bugs are discovered in testing. RCs are intended for more advanced (or adventurous!) users who are comfortable testing early versions of software that are potentially buggier than stable releases. You can read more about Qubes OS supported releases and the version scheme in our documentation.

26 November, 2023 12:00AM

November 25, 2023

SparkyLinux

WhatsApp for Linux

There is a new application available for Sparkers: WhatsApp for Linux

What is WhatsApp for Linux?

WhatsApp for Linux is an unofficial WhatsApp desktop application written in C++ with the help of the gtkmm and WebKitGtk libraries.

Features
– All the features that come with WhatsApp Web
– WhatsApp specific keyboard shortcuts work with Alt key instead of Cmd
– Zoom in/out
– System tray icon
– Notification sounds
– Autostart with system
– Fullscreen mode
– Show/Hide headerbar by pressing Alt+H
– Localization support in system language
– Spell checking in system language. You need to install the corresponding dictionary to get this working, e.g. the hunspell-en_us package for US English
– Open chat by phone number

Installation (Sparky 7 & 8 amd64 only):

sudo apt update
sudo apt install whatsapp-for-linux

License: GNU GPL 3
Web: github.com/eneshecan/whatsapp-for-linux

 

25 November, 2023 06:38PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Adding a web UI to the Incus demo service

Introduction

For anyone who hasn’t seen it before, you can try the Incus container and virtual machine manager online by just heading to our website and starting a demo session.

This gets you a dedicated VM on an Incus cluster with a bunch of resources and with a 30min time limit so you can either poke around for yourself or go through our guided showcase.

Now as neat as this is, it’s nothing new and we’ve been offering this for quite a while.

What’s new is the “Try a Web UI” link you can see alongside the console timer. Click it and you’ll get into a web app that lets you play with the same temporary Incus server as the regular text demo.

What web UI?

Unlike LXD, Incus doesn’t have an official web UI. Instead, it just serves whatever web UI you want.

That means that getting a stateless (JavaScript + HTML only) web UI is as simple as unpacking a bunch of files in /opt/incus/ui/ and then accessing Incus from a web browser. For more complex, stateful web UIs (those using dynamic server-side languages or an external database), a simple index.html file can be dropped into /opt/incus/ui/ to then redirect the user to the correct web server.

In a recent livestream, I spent a bit of time packaging the Canonical LXD UI in my Incus package repository so that it’s now as simple as apt install incus-ui-canonical to get that one up and running.

Part of that work was to also do some minimal re-branding, changing some links and updating the list of config options so it matches Incus. That’s handled as a series of patches that are applied during the package build.

How does it all work?

Now to get this available for anyone as part of the online demo service, some work had to be done!

The first part was the easy one, simply get the incus-ui-canonical package installed in our demo image. Those images are generated through a simple shell script, building a new base image every day.

With the package present and Incus configured to listen on the network, the next step was to add a bunch of logic to the incus-demo-server codebase. Each demo session is identified by a UUID. You can see that UUID in the URL whenever you start a demo session.

When a new session is created, a database record is made which amongst other things records the IPv6 address of the instance. Until now, this wasn’t really used other than for debugging purposes.

Now the easy approach would have been to just provide the IPv6 address to the end user and so long as they have IPv6 connectivity, they could just access the web UI directly. There are a few problems with that approach though:

  • Adoption rate for IPv6 is only slightly above 50% when looking at Incus users
  • The target web server (Incus) doesn’t have a valid TLS certificate
  • Authentication in the web UI requires a client certificate in the user’s browser

This would have made for a very high bar for anyone to try the UI, something better was needed. And so that’s where the idea of having incus-demo-server act as a SNI-aware proxy came about.

The setup basically looks like:

  • User hits https://<UUID>.incus-demo.linuxcontainers.org
  • Wildcard DNS record catches *.incus-demo.linuxcontainers.org and sends to HAProxy
  • HAProxy uses a valid Let’s Encrypt wildcard certificate to answer
  • HAProxy forwards the traffic to incus-demo-server on a dedicated port, keeping the SNI (Server Name Indication) value
  • incus-demo-server inspects the SNI value, extracts the instance UUID and gets the target IPv6 address from its database
  • incus-demo-server forwards the traffic to the Incus server running in the instance, using its own TLS client certificate for that connection

This results in the end user being able to access the web UI in their temporary instance with a valid HTTPS certificate and without needing to worry about authentication at all.
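
As a rough sketch of that routing step (illustrative Python, not the actual incus-demo-server code, which is linked below; the UUID and IPv6 address are made up):

import uuid

# Stand-in for the sessions database: instance UUID -> instance IPv6 address.
SESSIONS = {
    "0c5a9596-3f35-4a1c-a4c7-2f7b1f2cf2ab": "fd42:dead:beef::1234",
}

def route_from_sni(sni: str) -> str:
    """Map '<uuid>.incus-demo.linuxcontainers.org' to the instance address."""
    candidate = sni.split(".", 1)[0]
    uuid.UUID(candidate)        # raises ValueError if this is not a UUID
    return SESSIONS[candidate]  # KeyError means unknown or expired session

print(route_from_sni(
    "0c5a9596-3f35-4a1c-a4c7-2f7b1f2cf2ab.incus-demo.linuxcontainers.org"))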

You can find the incus-demo-server side of this logic here.

Conclusion

I believe this turned out to be a very elegant trick, making things as easy as humanly possible for anyone to try Incus, letting them mix and match using the CLI or using a web UI.

As mentioned, there is no such thing as an official web UI with Incus, so we’re looking forward to getting some more of the alternative web UIs packaged and will be looking at ways to offer them up on the demo service too, likely by having the user install whichever one they want through the terminal.

25 November, 2023 06:29PM

Bryan Quigley: Lubuntu Memory Usage and Rsyslog

In 2020 I reviewed LiveCD memory usage.

I was hoping to review either Wayland only or immutable only (think ostree/flatpak/snaps etc) but for various reasons on my setup it would just be a Gnome compare and that's just not as interesting. There are just too many distros/variants for me to do a full followup.

Lubuntu has previously always been the winner, so let's just see how Lubuntu 23.10 is doing today.

Previously in 2020 Lubuntu needed to get to 585 MB to be able to run something with a livecd. With a fresh install today Lubuntu can still launch Qterminal with just 540 MB of RAM (not apples to apples, but still)! And that's without Zram that it had last time.

I decided to try removing some parts of the base system to see the cost of each component (with 10 MB accuracy). I disabled networking to try to make it a fairer comparison.

  • Snapd - 30 MiB
  • Printing - cups foomatic - 10 MiB
  • rsyslog/crons - 10 MiB

Rsyslog impact

Out of the 3 above, it felt like rsyslog (and cron) are redundant in modern Linux with systemd. So I tried hitting the log system to see if I could cause a slowdown, by having a service echo lots of gibberish every 0.1 seconds.
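
The post doesn't include the service itself; a rough Python equivalent of such a log-flooding loop could look like this (the ident string and message size are arbitrary):

import syslog
import time

# Flood the log system with a message roughly every 0.1 seconds.
syslog.openlog("gibberish")
while True:
    syslog.syslog(syslog.LOG_INFO, "x" * 200)  # 200 bytes of filler per message
    time.sleep(0.1)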

After an hour of uptime, this is how much space was used:

  • syslog 575M
  • journal at 1008M

CPU usage on a fresh boot afterwards:

With Rsyslog

  • gibberish service was at 1% CPU usage
  • rsyslog was at 2-3%
  • journal was at ~4%

Without Rsyslog

  • gibberish service was at 1% CPU usage
  • journal was at 1-3%

That's a pretty extreme case, but does show some impact of rsyslog, which in most desktop settings is redundant anyway.

Testing notes:

  • 2 CPUs (Copy host config)
  • Lubuntu 23.10 install
  • no swap file
  • ext4, no encryption
  • login automatically
  • Used Virt-manager and only default change was enabling UEFI

25 November, 2023 02:42AM

November 24, 2023

VyOS

New book by Daniil Baturin: Linux for System Administrators

Hello, Community!

Many people from our generation, including myself and VyOS maintainers and contributors, learned about Linux from books. At the time, there wasn't much else. Now, there's loads and loads of information on the Internet, and experienced admins can easily find what they are looking for in seconds. But for beginners, a book where they can read about all the basics is still a great start.

But what book can we recommend?

24 November, 2023 04:20PM by Yuriy Andamasov (yuriy@sentrium.io)

Ubuntu developers

Ubuntu Blog: Ubuntu Explained: How to ensure security and stability in cloud instances—part 3

Ubuntu updates: securing multiple Ubuntu instances while maximising uptime

Most people know that it is important to apply security updates. It can be challenging, however, to accomplish this while maximising the uptime of the services you are running on top. Every change, even applying security patches, carries some risk of disrupting your workloads. You therefore need to be deliberate about your update strategy.

This blog is the third in a three-part series by John Chittum (Engineering Manager, CPC) and Aaron Whitehouse (Senior Public Cloud Enablement Director). In the first part we covered the philosophy of Ubuntu’s releases, the archive’s structure and how packages are updated. In the second part, we explained how to update individual Ubuntu instances. That background will help you understand what we cover below, so please read those first before continuing.  

Ubuntu is a Debian-based Linux distribution which uses two primary package types: snaps and Debian (.deb) packages. Snaps have their own update mechanisms, so this blog series focuses only on the Debian package updates. We aim to give an overview of the best approaches to updating packages across multiple instances. Then you can use this to decide which fit your particular requirements.

Differences between updating single and multiple Ubuntu instances

You can update large fleets of Ubuntu instances in the same way we outlined for single instances in Part 2. If you are worried about service uptime, and have the resources to invest, you have more choices with multiple instances. You may benefit from coordinating your update rollouts to maximise the uptime of your fleet.

We take a number of steps to reduce the risk of updates breaking workloads (see Part 1). It is always possible, however, for an update to have negative consequences in your specific environment. Many of our users with large Ubuntu estates therefore choose to roll out security updates more slowly across production instances. This can let you catch any issues in your environment before the update is widely deployed. The flipside is that it leaves those instances exposed to security vulnerabilities during that rollout period. You need to balance the delay in patching security vulnerabilities against the increased time to notice issues. You should involve your security team in that analysis.

Every production environment is different, but some approaches and things to consider are set out below. If you are managing updates centrally, make sure that you turn off unattended-upgrades on the instances (see Part 2).

Monitoring

Slowing down the rollout of security patches is only likely to help if you will notice if something goes wrong. Do you have automatic health checks in place? Have you set up Observability for your production environments? Do your internal Ubuntu users have a way to let you know if something stops working as expected?
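
As a trivial sketch of what such an automated health check can look like (illustrative only; the address and /healthz endpoint are hypothetical), something as simple as this, wired into your rollout tooling, already helps:

import urllib.request

def healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the service answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

# Gate the next rollout step on the health of already-updated instances.
if not healthy("http://10.0.0.5/healthz"):
    raise SystemExit("health check failed: pause the update rollout")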

Highly-available services

Imagine a highly-available service that can tolerate one or more instances failing. Updating any one of those instances should be relatively safe. Updating all instances at the same time, however, puts the service at risk.

In such situations you will often want to update related instances one availability zone or update domain at a time. You can monitor updated instances for any impact on your services, ideally with automated health checks. Then, if everything looks as expected, you can roll the update to the next availability zone.

Unattended-upgrades and HA services

The requirements of HA services mean that unattended-upgrades will often be a poor choice for such environments. Unattended-upgrades is not fleet aware and updates occur at uncoordinated times regardless of the health of updated instances. It is possible to configure the time that unattended-upgrades runs in /etc/systemd/system/timers.target.wants/apt-daily-upgrade.timer to ensure that not all nodes are updating at the exact same time and spread updates over a time period. Even so, these instances will update with no awareness of the health of instances that have already received the updates. An update that causes issues for you will continue to roll through your production environments unless you manually intervene.

The risks of automated reboot tools with HA services

Similarly, there are tools that will monitor when instances need to be rebooted and do this for you. Be careful to ensure that these are also aware of your wider estate of instances and their health. An example of an open source service, not associated with Ubuntu, is Kured. Kured is a Kubernetes daemonset that performs automatic node reboots when needed. You can configure Kured to check for active alerts before rebooting additional nodes, but you need to set this up. We have seen update issues in customer environments that only show up on a reboot. Without health checks, something like Kured can make issues on reboot have a much greater impact on your clusters.

Fleet-aware update tools

There are update management tools available for Ubuntu that are fleet-aware. Azure’s Guest Patching Service will roll out updates across availability zones and regions over time. It will use health monitoring to try to identify issues with an update and pause the rollout. We have worked closely with the AzGPS team to make Ubuntu support first-class in this tool. Alternatively, it is possible to control the roll-out of Ubuntu updates across public or private clouds with Landscape. Landscape is the Ubuntu systems management tool included as part of Ubuntu Pro.

Staging environments

If you build a service on Ubuntu, you likely have a staging environment for your service. If so, it can make sense to deploy security updates to that environment first. You can then run your workload tests against the updates before deploying them to your production environments.

You can even selectively upgrade dependencies of your service to packages that are still in the -proposed pocket. This lets you test your application or service on updates that have not yet been deployed to the main archives.

Snapshots in time

We update the standard Ubuntu archives on a rolling basis. As new package updates are made available, we include these in the archive. If there is material time in between running updates on different instances, the available updates could be different. It is even possible that two instances could see different updates at the same time because of phased updates. This can make it hard to roll out updates gradually, testing for any issues before rolling them out further.

You can use a number of techniques to create a coherent set, or snapshot, of packages. This allows you to test and roll out the same packages across your production estate over time. If there are any negative effects of the updates in your environment, this limits the impact. It also lets you pause the rollout while you resolve the issues (or seek help from our support team).

APT pinning

You may find guides recommending apt pins to ensure that critical packages are updated only when required and under supervision. This is a powerful tool; however, it has a steep learning curve. You need to understand how pins operate and the ramifications of pinning specific packages. In most cases it probably will not be the best option.

Golden images and image-based updates

In some environments, such as Kubernetes worker nodes, it can make sense to use image-based updates or “Golden Images”. This essentially means that you create an operating system image that includes the security updates. You can then test this image, add the updated nodes to your clusters and remove the old, insecure worker nodes. You treat these images as immutable. Instead of applying updates to them in production, you replace them completely with the next verified image. You can increase stability by learning an image building tool and maintaining a CI/CD pipeline for those images. It is important to automate this as much as possible, as any manual delays in the process increase your security exposure for no stability benefit.

Deb package mirrors

An approach used by many of Canonical’s customers is to create internal Ubuntu deb package mirrors. These synchronise a copy of the archives on a specific, periodic basis. Internal instances then point at that mirror, rather than the main Ubuntu archives. Landscape, mentioned earlier, includes features to make managing Ubuntu repositories much simpler. Maintaining a package mirror does have a relatively high cost of infrastructure, as you store at least one copy of all the packages you mirror.

Ubuntu snapshot service

The exciting news is that we have just announced a new snapshot service. This can provide many of the benefits of mirroring archives without the infrastructure cost or setup overhead. This is integrated into APT in Ubuntu 23.10 and is coming soon to 22.04, 20.04 and Ubuntu Pro 18.04. Julian Andres Klode, the maintainer of APT, released a video showing how to use the new snapshot feature:

Julian Andres Klode demonstrating the new Snapshot service with APT in Ubuntu 23.10

Microsoft is using this service to enable Safe Deployment Practices on Azure. You can learn more about this from the Ubuntu Summit presentation (video now online here). As Microsoft helpfully illustrates in their official announcement, instead of each region’s update drawing on a different state of the archive, snapshots let you deploy a consistent set of packages in each region.

Any of these approaches do mean that you are likely to have more unpatched vulnerabilities in your production estate. You are choosing not to apply available patches beyond the snapshot date. Similarly, you will normally roll this snapshot out to different regions or availability zones gradually over time. That means the last instances will not receive the snapshot of security patches until that rollout is complete. These rollouts should therefore be as short as possible while still giving you time to notice any negative impacts of the updates. You should also review the vulnerabilities fixed in the updates you have not yet applied  (USNs and CVEs). You may wish to accelerate your rollout of updates that patch higher-priority CVEs. Ultimately, your organisation needs to balance the risk of delaying security updates against the potential benefits to stability.

Conclusion

If you are running Ubuntu in production, it is crucial that you apply available security updates to your instances. We explained how we reduce the risk that our updates have negative impacts in the first part of this series. People use Ubuntu in so many interesting ways, beyond what even we can imagine. It is therefore impossible to eliminate the risk that an update causes issues somewhere.

If you manage large numbers of Ubuntu instances, particularly in highly-available configurations, you may benefit from controlling your update rollout. You can then roll out updates in a more fleet-aware fashion. By rolling out gradually, you do leave some instances with unpatched security vulnerabilities for longer. The benefit, however, can be limiting the impact of any update on service uptime, pausing rollouts that cause issues. Combined with automation, monitoring and automated health checks, this can increase stability with an acceptable reduction in security.

Managing updates in a distributed environment can be a challenge. There are, however, an increasing number of tools to help you do so. Landscape, part of Ubuntu Pro, includes features to help you manage Ubuntu updates across any public or private cloud. You could use the native APT snapshot integration in your own automation. We have worked with our public cloud partners to make Ubuntu updates work well in their built-in update tooling.

No policy or tool is foolproof, but a deliberate and well-designed update strategy can save you from sleepless nights. By being proactive and making informed decisions, you can ensure that your Ubuntu servers remain secure while protecting service availability.

Further reading:

24 November, 2023 12:00PM

Ubuntu Blog: Building a comprehensive toolkit for machine learning

In the last couple of years, the AI landscape has evolved from a research-focused practice to a discipline delivering production-grade projects that are transforming operations across industries. Enterprises are growing their AI budgets, and are open to investing both in infrastructure and talent to accelerate their initiatives – so it’s the ideal time to make sure that you have a comprehensive toolkit for machine learning (ML).

From identifying the right use cases to invest in to building an environment that can scale, organisations are still searching for ways to kickstart and move faster on their AI journeys. An ML toolkit can help by providing a framework for not just launching AI projects, but taking them all the way to production. A toolkit aims to address all the different challenges that could prevent you from scaling your initiatives and achieving the desired return on investment. 

What is an ML toolkit and why do you need one?

A machine learning (ML) toolkit represents all the necessary tools and skills that organisations need to get started and scale their ML initiatives. It includes different aspects of the machine learning lifecycle, but goes beyond them to also take into account the capabilities that are needed to efficiently build models, push them to production and maintain them over time.

There are multiple challenges that companies need to address when they start looking at an AI project. These can be grouped into four main categories:

  • People: There is a skills gap on the market that makes hiring difficult. AI projects also have multiple stakeholders that can slow down the project delivery.
  • Operations: Scaling up a project is difficult, mainly because of the associated operational maintenance costs.
  • Technology: There is a fast-changing landscape of new tools, frameworks and libraries that teams need to consider. Identifying the right choices is often overwhelming.
  • Data: Both the continuous growth of data volumes and the need to deal with highly sensitive data are concerns that keep teams up at night. 

The machine learning toolkit looks at many of these areas and helps organisations build an end-to-end stack that addresses the entire machine learning lifecycle.  It considers the hardware layer, as well as the software that goes on top of it, trying to optimise operations and automate workloads. The toolkit enables professionals to securely handle high volumes of data, as well as leverage different types of architectures depending on where the organisation is in their AI journey. Capabilities such as user management, security of the infrastructure or the compute power needed will be further detailed as part of the toolkit.

What goes into a toolkit for machine learning?

Organisations often look only at the computing power needed or the machine learning tooling that’s available to define the toolkit. While those are important, they are only two pieces of the puzzle. Building and scaling an AI project requires additional resources, including the cloud, container layer, and MLOps platform. MLOps engineers and solution architects need to bear in mind that any project, even if it starts small, will likely scale over time.  Therefore, they need a toolkit that gives them the flexibility to add additional tools or compute power to the stack.

Let’s explore the key components of an ML toolkit.

Compute power for machine learning

There’s no denying that compute power is central to AI/ML projects, and it is an essential part of an effective toolkit. This is the main reason AI/ML projects have traditionally been considered high-cost. This is also the very first challenge that organisations identify when it comes to AI projects, often assuming that they need a very powerful stack in order to begin their AI journey. This goes hand in hand with the scarcity of graphics processing units (GPUs) on the market, which has delayed many projects.

Nowadays, this paradigm is beginning to shift as chipsets become more performant and market leaders launch dedicated infrastructure solutions, such as NVIDIA DGX, which empower enterprises to build and scale their AI ecosystems on proven platforms. With NVIDIA DGX now available on Microsoft Azure Marketplace, we are likely seeing the start of a new era for compute power, where organisations will not view it as a bottleneck for AI projects, but rather an accessible component of their ML toolkit.

Learn how to run AI at scale with Canonical and NVIDIA.

Download the whitepaper

Clouds for machine learning

All the compute power in the world won’t help if you don’t have the means to store all the data it will be processing. Thus, cloud computing is a natural fit for AI and a crucial component of the toolkit. It takes the burden of provisioning away from solution architects, enabling them to spin large-scale clusters up or down with little downtime. While the experimentation phase requires fewer resources, building enterprise-ready models requires serious computing resources. To overcome this barrier, companies take different approaches, each with its own pros and cons.

This part of the toolkit can be built in different ways. Often, organisations will: 

  • Build their own infrastructure (or private cloud): Solutions such as Charmed OpenStack are a great example of a cloud platform that can be used to build and deploy machine learning applications. It is a cost-effective platform that provides enterprise-grade infrastructure for machine learning workloads.
  • Benefit from public cloud computing power: Machine learning projects are often run on public clouds because of the low initial cost. Google Cloud, AWS and Microsoft Azure have different pricing models and offer different types of instances. This diversity in instances benefits companies, as their compute needs vary depending on the amount of data and model training requirements.
  • Choose a hybrid cloud: Hybrid clouds unlock the value of data. Data is spread across different clouds, and data complexity continues to increase. Hybrid clouds give the flexibility and accessibility needed to address this complexity. They offer the data foundation needed to scale and operationalise AI, enabling models to be fed with data regardless of where it is stored, leading to better accuracy.
  • Build a multi-cloud scenario: Generative AI, with its extremely large datasets, is just one of the drivers accelerating multi-cloud adoption. Multi-cloud setups let organisations use the strengths of different clouds to optimise their machine learning projects.

Choosing the right cloud is often a challenge for organisations. Between the initial cost and the potential to scale, it’s easy to feel lost. When choosing a cloud, enterprises should bear in mind their AI readiness, as well as their security and compliance requirements. To optimise cost, they should also leverage their existing infrastructure and build around it, rather than standing up entirely new infrastructure for their AI projects.

[Read more in our whitepaper]

Kubernetes for machine learning

The ML world relies heavily on cloud-native applications, so it’s important to include a container platform in your ML toolkit. As the world’s leading container orchestration platform, Kubernetes is an ideal fit. At the outset of a project, Kubernetes enables reproducibility and portability, and as projects grow, it becomes invaluable for scalability and resource management.

For enterprises that already have a private cloud deployed, Kubernetes is also an option to simplify and automate machine learning workloads. There are many different flavours of Kubernetes available, so it’s important to choose a distribution that is suitable for enterprise AI projects. Canonical Kubernetes can be used as an extension of Charmed OpenStack, enabling enterprises to kickstart AI projects on their existing infrastructure while benefiting from enterprise support.

Machine learning tooling


This might be the most obvious part of the toolkit, but it is also deceptively difficult to get right. The machine learning landscape grows from one day to the next and has reached an enormous size. There are so many logos, options and projects that choosing the right tooling can be one of the biggest challenges when building out your toolkit – especially when what you should really be looking for is a machine learning operations (MLOps) platform.

MLOps can be briefly defined as DevOps for machine learning. It ensures the reliability and efficiency of models. Rather than looking at individual tools, what enterprises should actually include in the ML toolkit is an MLOps platform that consolidates tooling and processes into a single environment.

Open source MLOps gives you access to the source code, can be tried without paying and allows for software contributions. AI/ML has benefited from upstream open source communities since its inception. This has led to an open source MLOps landscape with solutions spanning various categories (many of which can be trialled locally, as sketched after this list), such as:

  • End-to-end platforms: Kubeflow, Metaflow or MLflow
  • Development and deployment: MLRun, ZenML or Seldon Core
  • Data: Being the core of any AI project, this category itself can be further split into:
    • Validation: Hadoop, Spark
    • Exploration: Jupyter Notebook
    • Versioning: Pachyderm or DVC
    • Testing: Flyte
    • Monitoring: Prometheus or Grafana
    • Scheduling: Volcano
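
Many of these tools can be trialled locally before committing to a platform. As a rough sketch (assuming a working Python environment with pip available), MLflow's tracking server can be stood up on a laptop in two commands:

# Install MLflow into the current Python environment
pip install mlflow

# Start a local tracking server and web UI on http://127.0.0.1:5000
mlflow server --host 127.0.0.1 --port 5000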

To sum up…

A machine learning toolkit goes beyond the ML tools that are used. Whereas kickstarting an ML project might be easy, scaling an initiative requires further capabilities across both the hardware and software components. To guarantee the smoothest path to production, you need a comprehensive toolkit that can support you throughout the entire AI journey.

Assembling a toolkit for machine learning can be a challenging undertaking, which is why we have produced a guide detailing our recommendation for an end-to-end toolkit. 

Further reading

24 November, 2023 12:03AM

November 23, 2023

hackergotchi for Proxmox VE

Proxmox VE

Proxmox VE 8.1 released!

We're very excited to announce the release 8.1 of Proxmox Virtual Environment! It's based on Debian 12.2 "Bookworm" but uses a newer Linux kernel 6.5, QEMU 8.1.2, and OpenZFS 2.2.0 (with stable fixes backported).

Here is a selection of the highlights of Proxmox VE 8.1:
  • Debian 12.2 (“Bookworm”), but uses a newer Linux kernel 6.5 as stable default
  • latest versions of QEMU 8.1.2 and ZFS 2.2.0, already including the most important bugfixes from 2.2.1
  • Software-defined Networking...

Read more

23 November, 2023 02:16PM by martin (invalid@example.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E274 No Cinzelar É Que Está O Delapidado

This episode is dedicated to the arts and to the tripod that sustains them: Ethics, Technique and Aesthetics. Diogo was part of a panel. Which panel? A panel of São Vicente, immortalised in Renaissance art? No: a panel about technologies. Which technologies? Well… listen to find out, and be scandalised by podcasts made with AI. Beyond that, in this episode you can praise the splendid audits of Home Assistant, speak ill of Mazda, and marvel at the sublime chipping and chiselling of Ubuntu Containers, like a statue of David sculpted from a rough block of marble to reach perfection. And what about the diet of the new Ubuntu kernel, altius, citius, fortius? Art! Pure art!

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal licence. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

23 November, 2023 12:00AM

November 22, 2023

hackergotchi for Purism PureOS

Purism PureOS

Working with the Librem 14

Last year, I got the idea of showcasing the power of the Librem 14 by making a professional quality commercial video of it. This video would be made entirely with the Librem 14 itself. It would include everything that I love doing, which is 3D animations, 2D (hand drawn) animations, real world cinematic shots, all […]

The post Working with the Librem 14 appeared first on Purism.

22 November, 2023 10:16PM by François Téchené

Celebrating the Librem 5 Success with a Sale- Save Up to $450 when Bundled with AweSIM

This past September Purism reached a milestone thought to be unreachable within the tech industry, completing and delivering a phone (the Librem 5) and operating system (PureOS) that is neither Android nor iOS. After reaching this monumental achievement, we are extending the celebration by discounting the Librem 5 smartphone by $350 when purchased using this code […]

The post Celebrating the Librem 5 Success with a Sale- Save Up to $450 when Bundled with AweSIM appeared first on Purism.

22 November, 2023 10:08PM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: A Season to be Thankful, Thank You!

Here in the US, we celebrate Thanksgiving tomorrow. I am thankful to be a part of such an amazing community. I have raised enough to manage another month and I can continue my job search in less dire circumstances. I am truly grateful to each and every one of you. While my focus will remain on my job hunt, I will be back next week at reduced hours to maintain my work. I have to alter my priorities to keep my hours reduced enough to focus on my job search so I will be contributing as follows:

  • I will ramp up my Debian work to increase my skillset here, as it is an important skill and one I am seeking employment in. I will broaden my expertise in packaging for different languages and in security updates, and help the KDE team with the Qt6 transition.
  • I will continue to help where I can with KDE neon, because well, I love KDE neon and our team. If time allows, I would like to help with moving forward with Harald’s initial work to transition us to use Gitlab infrastructure. It will be a big move from Jenkins.
  • Snaps: I will only support our Qt5 snaps at this point. That entails possibly one more release and I will maintain / fix bugs on these. Snaps have been a huge chunk of my time (191 snaps! plus content packs, extensions, updates, fixes, solving confinement issues). I simply cannot do it all over again with Qt6. Unless of course someone wants to fund my work. Then I will reconsider.
  • I am also going to expand my knowledge in the containerized world with Flatpaks and refresher on Appimages to flesh out my resume.

Again, thank you all ever so much for your support. Though this didn’t end up being my year, I am confident I will find my place in this career path in the near future.

I could still use some funds to make my land and car payments, or at least partial ones. We purchased from friends, so they won’t take away my wheels or home; I just feel bad that I haven’t been able to make payments in a while. Thanks for your consideration.

https://gofund.me/f9f0fb53

22 November, 2023 03:07PM

Launchpad News: Self-service riscv64 builds

Launchpad has supported building for riscv64 for a while, since it was a requirement to get Ubuntu’s riscv64 port going. We don’t actually have riscv64 hardware in our datacentre, since we’d need server-class hardware with the hypervisor extension and that’s still in its infancy; instead, we do full-system emulation of riscv64 on beefy amd64 hardware using qemu. This has worked well enough for a while, although it isn’t exactly fast.

The biggest problem with our setup wasn’t so much performance, though; it was that we were just using a bunch of manually-provisioned virtual machines, and they weren’t being reset to a clean state between builds. As a result, it would have been possible for a malicious build to compromise future builds on the same builder: it would only need a chroot or container escape. This violated our standard security model for builders, in which each build runs in an isolated ephemeral VM, and each VM is destroyed and restarted from a clean image at the end of every build. As a result, we had to limit the set of people who were allowed to have riscv64 builds on Launchpad, and we had to restrict things like snap recipes to only use very tightly-pinned parts from elsewhere on the internet (pinning is often a good idea anyway, but at an infrastructural level it isn’t something we need to require on other architectures).

We’ve wanted to bring this onto the same footing as our other architectures for some time. In Canonical’s most recent product development cycle, we worked with the OpenStack team to get riscv64 emulation support into nova, and installed a backport of this on our newest internal cloud region. This almost took care of the problem. However, Launchpad builder images start out as standard Ubuntu cloud images, which on riscv64 are only available from Ubuntu 22.04 LTS onwards; in testing 22.04-based VMs on other relatively slow architectures we already knew that we were seeing some mysterious hangs in snap recipe builds. Figuring this out blocked us for some time, and involved some pretty intensive debugging of the “strace absolutely everything in sight and see if anything sensible falls out” variety. We eventually narrowed this down to a LXD bug and were at least able to provide a workaround, at which point bringing up new builders was easy.

As a result, you can now enable riscv64 builds for yourself in your PPAs or snap recipes. Visit the PPA and follow the “Change details” link, or visit the snap recipe and follow the “Edit snap package” link; you’ll see a list of checkboxes under “Processors”, and you can enable or disable any that aren’t greyed out, including riscv64. This now means that all Ubuntu architectures are fully virtualized and unrestricted in Launchpad, making it easier for developers to experiment.

22 November, 2023 02:00PM

Ubuntu Blog: Canonical releases Charmed Kubeflow 1.8


Canonical, the publisher of Ubuntu, announced today the general availability of Charmed Kubeflow 1.8. Charmed Kubeflow is an open source, end-to-end MLOps platform that enables professionals to easily develop and deploy AI/ML models. It runs on any cloud, including hybrid cloud or multi-cloud scenarios. This latest release also offers the ability to run AI/ML workloads in air-gapped environments. 

Run AI workloads in air-gapped environments

MLOps platforms often need to be connected to the network, which can be problematic for organisations that have strict cybersecurity and compliance policies. Charmed Kubeflow allows users to run their workloads offline in an air-gapped environment, in addition to public clouds and on-prem data centres. With the inclusion of this feature, Charmed Kubeflow provides additional security for organisations in highly regulated industries or projects that handle sensitive data. This allows them to complete most of the machine learning workflow within one tool, and avoid time spent on connecting tools and ensuring compatibility between them.

Enhanced capabilities to customise MLOps tools

Every AI project is different and so are the tools, frameworks and libraries that organisations use. While some might prefer traditional options such as TensorFlow or PyTorch, others might go with industry-specific ones such as NVIDIA NeMo. Charmed Kubeflow 1.8 brings new enhancements that allow end users to customise their MLOps platform.

Users can now bring any container image to their Jupyter Notebooks. This gives professionals the freedom to choose their preferred tools and libraries, and to focus on developing machine learning models rather than maintaining their tooling. Users can plug tools or components in and out depending on the use case to work efficiently.

This capability differentiates Charmed Kubeflow from the upstream project. Organisations are more likely to be able to move beyond experimentation using Canonical’s supported solution, since they can add their own Notebook images and develop models using them.

Build models for production. Reproduce experiments with ease.

Kubeflow as a project was designed to run AI at scale. It is able to run the entire machine learning lifecycle within one tool. Kubeflow Pipelines are the heart of the project, since they are specialised in automating machine learning workloads. This is one of the reasons organisations that are looking to scale AI projects prefer Kubeflow over its alternatives. 

Charmed Kubeflow benefits from the upstream project’s newly introduced Kubeflow Pipelines 2.0, which further simplifies the automation process. Some capabilities, such as directed acyclic graphs (DAGs), have been available in beta for a while, but other capabilities were introduced in Kubeflow 1.8. For example, Kubeflow now abstracts the pipeline representation format so that it can run on any MLOps platform. This translates into smoother migrations from the upstream project to distributions or tools that offer enterprise support, security patching or timely bug fixes.

“I’m thrilled to be part of the upstream community’s Kubeflow 1.8 release and proud of the Charmed Kubeflow team for driving the release as well as providing feedback along the way”, said Kimonas Sotirchos, Working Group Lead in the Kubeflow Community. “Charmed Kubeflow 1.8 is a great way for newcomers and experienced users to try out all the latest and greatest features in Kubeflow, like KFP V2 and PVC browsing”, he added. 

Innovate at speed with Canonical MLOps

Charmed Kubeflow is the foundation of a growing ecosystem that addresses different needs for AI projects. The MLOps platform is integrated with leading open source tools. For example, Charmed Kubeflow integrates with Charmed MLflow to facilitate experiment tracking and model registry. MLflow is a lightweight machine learning platform that enables professionals to quickly get started locally or on the public cloud and then easily migrate to an open source, fully-integrated solution. Charmed Kubeflow can also be integrated with KServe and Seldon for model serving.
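
If you want to try the platform, a rough deployment sketch with Juju follows. The channel name is an assumption based on this 1.8 release and should be checked against the Charmed Kubeflow documentation:

# Deploy the Charmed Kubeflow bundle on a Juju-managed Kubernetes cloud
# (assumes a bootstrapped Juju controller; channel name assumed)
juju deploy kubeflow --channel=1.8/stable --trust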

Download Charmed Kubeflow 1.8 for free 

Contact Canonical for enterprise support or managed services. 

Further reading

22 November, 2023 12:39PM

hackergotchi for GreenboneOS

GreenboneOS

IT security update November 2023: Critical vulnerabilities and threats

In the November 2023 commVT Intelligence Update, several critical vulnerabilities and security threats have come to light. Cisco’s Internetworking Operating System (IOS) XE Software Web User Interface (UI) was found to be vulnerable to two actively exploited critical vulnerabilities, allowing attackers to execute arbitrary code remotely. The curl command-line tool, widely used across various platforms, faced a serious vulnerability that could result in arbitrary code execution during SOCKS5 proxy handshakes. VMware is urging immediate updates for its vCenter Server due to a critical vulnerability potentially leading to remote code execution. Multiple vulnerabilities were found in versions of PHP 8; one is a particularly critical deserialization vulnerability in the PHAR extraction process. Additionally, SolarWinds Access Rights Manager (ARM) was found susceptible to multiple critical vulnerabilities, emphasizing the urgency to update to version 2023.2.1. Lastly, two F5 BIG-IP vulnerabilities were discovered to be actively exploited, with mitigation options available and outlined below.

Cisco IOS XE: Multiple Critical Vulnerabilities

Two actively exploited critical CVSS 10 vulnerabilities were discovered in Cisco’s Internetworking Operating System (IOS) XE Software Web User Interface (UI); CVE-2023-20198 and CVE-2023-20273. Combined, they allow an attacker to remotely execute arbitrary code as the system user and are estimated to have been used to exploit tens of thousands of vulnerable devices within the past few weeks. Greenbone has added detection for both the vulnerable product by version [1], and another aimed at detecting the BadCandy implanted configuration file [2]. Both are VTs included in Greenbone’s Enterprise vulnerability feed.

Cisco IOS was created in the 1980s and used as the embedded OS in the networking technology giant’s routers. Fast forward to 2023, IOS XE is a leading enterprise networking full-stack software solution that powers Cisco platforms for access, distribution, core, wireless, and WAN. IOS XE is Linux-based, and specially optimized for networking and IT infrastructure, routing, switching, network security, and management. Cisco devices are pervasive in global IT infrastructure and used by organizations of all sizes, including large-scale enterprises, government agencies, critical infrastructure, and educational institutions.

Here’s how the two recently disclosed CVEs work:

CVE-2023-20198 (CVSS 10 Critical): Allows a remote, unauthenticated attacker to create an account [T1136] on an affected system with privilege level 15 (aka privileged EXEC level) access [CWE-269]. Privilege level 15 is the highest level of access to Cisco IOS. The attacker can then use that account to gain control of the affected system.
CVE-2023-20273 (CVSS 7.2 High): A regular user logged into the IOS XE web UI can inject commands [CWE-77] that are subsequently executed on the underlying system with system (root) privileges. This vulnerability is caused by insufficient input validation [CWE-20]. The CVE is also associated with a Lua-based web-shell [T1505.003] implant dubbed “BadCandy”. BadCandy consists of an Nginx configuration file named `cisco_service.conf` that establishes a URI path to interact with the web-shell implant, but requires the webserver to be restarted.

Cisco has released software updates mitigating both CVEs in IOS XE software releases, including versions 17.9, 17.6, 17.3, and 16.12, as well as Software Maintenance Upgrades (SMUs), and IT security teams are strongly advised to install them urgently. Cisco has also released associated indicators of compromise (IoC), Snort rules for detecting active attacks, and a TAC Technical FAQs page. Disabling the web UI prevents exploitation of these vulnerabilities and may be suitable mitigation until affected devices can be upgraded. Publicly released proof of concept (PoC) code [1][2] and a Metasploit module further increase the urgency of applying the available security updates.
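
For administrators who choose the web UI mitigation, the HTTP server feature can be turned off from the device CLI. This is a sketch of the commonly documented IOS XE commands; verify them against Cisco's advisory for your release:

configure terminal
 ! disable the cleartext and TLS web UI endpoints
 no ip http server
 no ip http secure-server
end
! persist the change across reboots
copy running-config startup-config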

Critical Vulnerability In The Curl Tool

A widespread vulnerability has been discovered in the popular curl command line tool, libcurl, and the many software applications that leverage them across a wide number of platforms. Tracked as CVE-2023-38545 (CVSS 9.8 Critical), the flaw makes curl overflow a heap-based buffer [CWE-122] in the SOCKS5 proxy handshake, which can result in arbitrary code execution [T1203]. Greenbone’s community feed includes several NVTs [1] to detect many of the affected software products and will add additional detections for CVE-2023-38545 as more vulnerable products are identified.

CVE-2023-38545 is a client-side vulnerability exploitable when passing a hostname to the SOCKS5 proxy that exceeds the maximum length of 255 bytes. If supplied with an excessively long hostname, curl is supposed to use local name resolution and pass it on to the resolved address only. However, due to the CVE-2023-38545 flaw, curl may actually copy the overly long hostname to the target buffer instead of copying just the resolved address there. The target buffer, being a heap-based buffer, and the hostname coming from the URL results in the heap-based overflow.

While the severity of the vulnerability is considered high because it can be exploited remotely and has a high impact on the confidentiality, integrity, and availability (CIA) of the underlying system, the SOCKS5 proxy method is not the default connection mode and must be declared explicitly. Additionally, for an overflow to happen, an attacker also needs to cause a slow enough SOCKS5 handshake to trigger the bug. All curl versions from v7.69.0 (released March 4th, 2020) up to and including v8.3.0 are affected. The vulnerable code was patched in v8.4.0, commit 4a4b63daaa.
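
To make the exposure concrete: the vulnerable handshake is only used when curl is explicitly told to let the SOCKS5 proxy resolve hostnames. A quick sketch for checking a system (the proxy address below is a placeholder):

# Check the installed curl version (affected: v7.69.0 through v8.3.0)
curl --version

# The socks5h:// scheme asks the proxy itself to resolve the hostname,
# which is the handshake code path that CVE-2023-38545 targets
curl -x socks5h://proxy.example.com:1080 https://example.com/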

VMware vCenter Server: Multiple Vulnerabilities

CVE-2023-34048 is a critical-severity vulnerability that could allow a malicious actor with network access to vCenter Server to cause an out-of-bounds write [CWE-787], potentially leading to remote code execution (RCE). The affected software includes VMware vCenter Server versions 6.5, 6.7, 7.0, and 8.0. VMware has issued a security advisory to address both vulnerabilities, which states that there are no known mitigations other than installing the provided updates. Both vulnerabilities can be detected by Greenbone’s enterprise vulnerability feed [1]. The vCenter Server patch also fixes CVE-2023-34056, a medium-severity information disclosure resulting from improper authorization [CWE-285].

Although there are no reports that CVE-2023-34048 is being actively exploited in the wild, attackers have proven adept at swiftly converting threat intelligence into exploit code. Research by Palo Alto Networks’ Unit 42 threat research group shows that, on average, an exploit is published 37 days after a security patch is released.

Here are some brief details on both CVEs:

CVE-2023-34048 (CVSS 9.8 Critical): vCenter Server contains an out-of-bounds write [CWE-787] vulnerability in the implementation of the DCERPC protocol. A malicious actor with network access to vCenter Server may trigger this vulnerability to achieve remote code execution (RCE). The Distributed Computing Environment Remote Procedure Call (DCERPC) protocol facilitates remote procedure calls (RPC) in distributed computing environments, allowing applications to communicate and invoke functions across networked systems.
CVE-2023-34056 (CVSS 4.3 Medium): vCenter Server contains a partial information disclosure vulnerability. A malicious actor with non-administrative privileges to vCenter Server may leverage this issue to access unauthorized data.

Multiple Vulnerabilities Discovered In PHP 8

Several vulnerabilities were identified in PHP 8.0.X before 8.0.28, 8.1.X before 8.1.16 and 8.2.X before 8.2.3. Although the group of vulnerabilities does include one critical and two high-severity vulnerabilities, these require particular contexts to be present for exploitation; either deserializing PHP applications using PHAR or else using PHP’s core path resolution functions on untrusted input. Greenbone’s enterprise VT feed includes multiple detection tests for these vulnerabilities across multiple platforms.

Here are brief descriptions of the most severe recent PHP 8 vulnerabilities:

CVE-2023-3824 (CVSS 9.8 Critical): A PHAR file (short for PHP Archive) is a compressed packaging format in PHP, which is used to distribute and deploy complete PHP applications in a single archive file. While reading directory entries during the PHAR archive loading process, insufficient length checking may lead to a stack buffer overflow [CWE-121], potentially leading to memory corruption or remote code execution (RCE).
CVE-2023-0568 (CVSS 8.1 High): PHP’s core path resolution function allocates a buffer one byte too small. When resolving paths with lengths close to the system `MAXPATHLEN` setting, this may lead to the byte after the allocated buffer being overwritten with a NULL value, which might lead to unauthorized data access or modification. PHP’s core path resolution is used by the `realpath()` and `dirname()` functions, when including other files using `include()`, `include_once()`, `require()`, and `require_once()`, and during the resolution of PHP’s “magic” constants such as `__FILE__` and `__DIR__`.
CVE-2023-0567 (CVSS 6.2 Medium): PHP’s `password_verify()` function may accept some invalid Blowfish hashes as valid. If such an invalid hash ever ends up in the password database, it may lead to an application allowing any password for this entry as valid [CWE-287]. Notably, this vulnerability has been assigned different CVSS scores by NIST (CVSS 6.2 Medium) and the PHP group CNA (CVSS 7.7 High), the difference being that the PHP Group CNA considers CVE-2023-0567 a high risk to confidentiality while NIST does not. CNAs are a group of independent vendors, researchers, open source software developers, CERT, hosted service, and bug bounty organizations authorized by the CVE Program to assign CVE IDs and publish CVE records within their own specific scopes of coverage.

SolarWinds Access Rights Manager (ARM): Multiple Critical Vulnerabilities

SolarWinds Access Rights Manager (ARM) prior to version 2023.2.1 is vulnerable to 8 different exploits, including one critical and two additional high-severity vulnerabilities (CVE-2023-35182, CVE-2023-35185, and CVE-2023-35187). These include authenticated and unauthenticated privilege escalation [CWE-269], directory traversal [CWE-22], and remote code execution (RCE) at the most privileged “SYSTEM” level. Greenbone’s Enterprise vulnerability feed includes both local security check (LSC) [1] and remote HTTP detection [2].

SolarWinds ARM is an enterprise access control software for Windows Active Directory (AD) networks and other resources such as Windows File Servers, Microsoft Exchange services, and Microsoft SharePoint as well as virtualization environments, cloud services, NAS devices, and more. The widespread use of ARM and other SolarWinds software products means that its vulnerabilities have a high potential to impact a wide range of large organizations including critical infrastructure.

These and more recent vulnerabilities are disclosed in SolarWinds’ security advisories. Although no reports of active exploitation have been released, mitigation is highly recommended and available by installing SolarWinds ARM version 2023.2.1.

F5 BIG-IP: Unauthenticated RCE And Authenticated SQL Injection Vulnerabilities

Two RCE vulnerabilities in F5 BIG-IP, CVE-2023-46747 (CVSS 9.8 Critical) and CVE-2023-46748 (CVSS 8.8 High), have been observed by CISA to be actively exploited in the wild soon after PoC code was released for CVE-2023-46747. A Metasploit exploit module has also since been published. F5 BIG-IP is a family of hardware and software IT security products for ensuring that applications are always secure and perform the way they should. The platform is produced by F5 Networks, and it focuses on application services ranging from access and delivery to security. Greenbone has added detection for both CVEs [1][2].

CVE-2023-46747 is a remote authentication bypass [CWE-288] vulnerability while CVE-2023-46748 is a remote SQL injection vulnerability [CWE-89] that can only be exploited by an authenticated user. The affected products include the second minor release (X.1) for major versions 14-17 of BIG-IP Advanced Firewall Manager (AFM) and F5 Networks BIG-IP Application Security Manager (ASM).

If you are running an affected version, you can eliminate this vulnerability by installing the vendor-provided HOTFIX updates [1][2]. The term “hotfix” implies that the patch can be applied to a system while it is running and operational, without the need for a shutdown or reboot. If updating is not an option, CVE-2023-46747 can be mitigated by downloading and running a bash script that adds or updates the `requiredSecret` attribute in the Tomcat configuration, which is used for authentication between Apache and Tomcat. CVE-2023-46748 can be mitigated by restricting access to the Configuration utility to trusted networks or devices only, and by ensuring that only trusted user accounts exist, thereby limiting the attack surface.


22 November, 2023 10:24AM by Greenbone AG

November 21, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical announces the general availability of chiselled Ubuntu containers

Production-ready, secure-by-design, ultra-small containers with chiselled Ubuntu


Canonical announced today the general availability of chiselled Ubuntu containers which come with Canonical’s security maintenance and support commitment. Chiselled Ubuntu containers are ultra-small OCI images that deliver only the application and its runtime dependencies, and no other operating system-level packages, utilities, or libraries. This makes them lightweight to maintain and operate, secure, and efficient in resource utilisation.

Canonical’s chiselled Ubuntu portfolio includes pre-built images for popular toolchains like Java, .NET and Python. Microsoft announced today the general availability of chiselled Ubuntu container images for .NET 6, 7 and 8, the result of a long-term partnership and design collaboration between Canonical and Microsoft.

“There has always been a need for smaller and tighter images. Developers remind us, as a base image provider, of that on a regular basis. Chiselled images leapfrog over approaches we’ve looked at in the past. We love the idea and implementation of Chiselled images and Canonical as a partner. When technical leaders at Canonical shared the first demos of Chiselled images with us, we immediately wanted to be a launch partner, and we’re thrilled that we’re shipping Ubuntu Chiselled images for .NET as part of this GA release.”

Richard Lander, Program Manager, .NET at Microsoft

Trusted provenance, optimal developer experience

According to GitLab’s 2022 Global DevSecOps Survey, only 64% of security professionals had a security plan for containers, and many DevOps teams don’t have a plan in place for other cutting-edge software technologies, including cloud-native/serverless, APIs, and microservices. Running applications securely at scale – with peace of mind – is one of Canonical’s key commitments to the open source world. 

Chiselled Ubuntu containers provide both trusted provenance and an optimal developer-to-production experience, leading to more productive teams as well as more secure applications. At the heart of these containers sits a developer-friendly open source package manager called “Chisel”,  which developers can use to sculpt meticulously precise and therefore ultra-small file systems. 

Chisel relies on a curated collection of Slice Definition Files. These files are related to the upstream packages from the Ubuntu archives, and define one or more slices for any given package. A package slice details a subset of the package’s contents (comprising its maintainer scripts and dependencies) needed at run-time.


Chisel effectively layers reusable knowledge on top of traditional Ubuntu debian packages through a developer-friendly CLI and fine-grained dependency management mechanism.
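
To make the workflow concrete, here is a minimal sketch of carving a root filesystem with Chisel. The slice names and flags shown are illustrative assumptions and should be checked against the Chisel documentation:

# Fetch the slice definitions for the Ubuntu 22.04 release and install only
# the named package slices into a fresh root directory (slice names assumed)
chisel cut --release ubuntu-22.04 --root ./rootfs libc6_libs ca-certificates_data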

Removing unnecessary bits from the final image (such as unused system utilities and excess package contents) reduces bloat, making the image more efficient, shrinking its attack surface and mitigating entire classes of attacks. Faster network transfers, caching and startup, as well as reduced runtime resource utilisation, are guaranteed as applications carry only the dependencies they absolutely need.

With Chiselled Ubuntu, organisations can simplify their containerisation journey with a smooth transition from development to production.

Key benefits include:

  • Bug-for-bug compatibility of containers and their contents from development through DevOps and DevSecOps to production, as all the containers are built from the same package contents
  • Smaller containers mean fewer dependency headaches across the container CI lifecycle
  • The Chisel CLI for an easy, Ubuntu-like experience as customers build or extend chiselled containers themselves using the same tools as Canonical
  • Simpler images mean simpler image rebuilds

Learn more about Canonical containers

Reliable support and release cadence

Chiselled Ubuntu images inherit Ubuntu’s long-term support guarantees and are updated within the same release cycle, using the very same packages as other LTS components. They are fully supported by Canonical:

  • 5-year free bug fixing and security patching for containers built from the main repository
  • 10-year security patching for Ubuntu Pro customers on all Ubuntu packages
  • Optional weekday or 24/7 customer support
  • 100% library and release cycle alignment with Ubuntu LTS

Prebuilt chiselled images for popular toolchains such as .NET and Java

Chiselled Ubuntu and toolchains come together seamlessly. It’s a developer’s shortcut to creating and deploying secure, super-efficient images for production from their development environment. 

The Chiselled Ubuntu image for the Java Runtime Environment provides a ~51% reduction in the size of the compressed image compared to the Eclipse Temurin Java 17 runtime image. The Chiselled Ubuntu image does not degrade throughput or startup performance compared to the evaluated images.


Chiselled Ubuntu containers for .NET and ASP.NET are now available on AMD64- and ARM-based platforms, as well as s390x, offering precision-engineered, production-destined containers to the .NET community. Shipping only the binaries needed to run .NET applications means a ready-for-production OCI container, letting you focus on your added value: layering on your world-class applications and shipping to any platform.

Microsoft’s chiselled .NET images are now stable and supported for .NET 6, 7 and 8

With the release of .NET 8, Microsoft and Canonical are joining forces to release chiselled Ubuntu for .NET 8, including for AOT (ahead-of-time compiled) binaries. With .NET 8, users can opt in to security hardening with chiselled Ubuntu image variants to reduce their attack surface even further, as well as to optimise container build, testing and deployment.

“Many .NET developers look to the .NET Team at Microsoft for best practice guidance, particularly if they are new to a domain. Chiselled Ubuntu images are our recommended base image for developers going forward. If you want to just use containers and not learn all the ins-and-outs, just choose chiselled images.”

Richard Lander, Program Manager, Microsoft .NET

Watch our interview with Microsoft on chiselled Ubuntu.

Support and security features with Ubuntu Pro

Organisations can purchase security maintenance and support for chiselled Ubuntu containers with an Ubuntu Pro subscription. Canonical experts offer support for bug fixes and troubleshooting to help manage containers more efficiently. With Ubuntu Pro, teams can reduce their average CVE exposure time from 98 days to one day, with 10 years of security maintenance guaranteed.

Learn more at ubuntu.com/pro.

Go off and chisel

21 November, 2023 01:45PM

Ubuntu Blog: Ubuntu Explained: How to ensure security and stability in cloud instances—part 2

Ubuntu updates: best practices for updating your instance

You probably know that it is important to apply security updates. You may not be as clear on the details of how to do that. We are going to explain best practices for applying Ubuntu updates to single instances and what the built-in unattended-upgrades tool does and does not do.

This blog is the second in a three-part series by John Chittum (Engineering Manager, CPC) and Aaron Whitehouse (Senior Public Cloud Enablement Director). In the first part we covered the philosophy of Ubuntu’s releases, the archive’s structure and how packages are updated. That background will help you understand what we cover below, so please read that first before continuing. The next blog will extend the concepts in this blog to give best practices for applying updates to multiple Ubuntu instances.

Ubuntu is a Debian-based Linux distribution which uses two primary package types: snaps and Debian (.deb) packages. Snaps have their own update mechanisms, so this blog series focuses only on the Debian package updates.

At its core, updating packages on an Ubuntu server is straightforward. You simply need to achieve the following:

  1. download the latest package indexes, showing what has been updated;
  2. download the updated packages;
  3. install the updated packages; and 
  4. restart anything that is still using the old versions of the packages.

Updating packages interactively

If you are logged into your Ubuntu server and want to update the packages to the latest version, you can do this with APT. This would look something like (as root or an elevated user):

apt update   # Downloads the latest package indexes
apt upgrade  # Downloads and installs any updated packages or new dependencies

This will download the package indexes, then download any updated packages and install them.

[Image: Terminal window showing updating an Ubuntu instance with APT]

This will install updated packages from any enabled repositories (see Part 1 of this series for more information), not just security updates. Installing all updates makes sense, as you are running commands interactively and are able to see what is changing and watch for any ill effects. You can view or change your APT repository configuration in /etc/apt/sources.list and /etc/apt/sources.list.d/.

Running commands manually is fine, but most people want to apply security updates regularly. That requires setting up your server in a way that will install security updates automatically, without you sitting at the keyboard.

Updating packages automatically with unattended-upgrades

By default, we configure Ubuntu server to download and install any new security updates (and only security updates) every day. We also do this for public cloud instances. We do this using a tool called unattended-upgrades.

You can see an instance’s current unattended-upgrade status using recent versions of the pro client by running pro api u.unattended_upgrades.status.v1:

{
  "apt_periodic_job_enabled": true,
  "package_lists_refresh_frequency_days": 1,
  "systemd_apt_timer_enabled": true,
  "unattended_upgrades_allowed_origins": [
    "${distro_id}:${distro_codename}",
    "${distro_id}:${distro_codename}-security",
    "${distro_id}ESMApps:${distro_codename}-apps-security",
    "${distro_id}ESM:${distro_codename}-infra-security"
  ],
  "unattended_upgrades_disabled_reason": null,
  "unattended_upgrades_frequency_days": 1,
  "unattended_upgrades_last_run": null,
  "unattended_upgrades_running": true
}

This summarises information from a number of different places on the system. Some key points are as follows:

  1. Settings are contained in /etc/apt/apt.conf.d/50unattended-upgrades and /etc/apt/apt.conf.d/20auto-upgrades on an Ubuntu 22.04 instance.
  2. A systemd.daily unit runs unattended-upgrades each day using these settings.
  3. By default, unattended-upgrades only checks the release (the frozen component), security, and the security portion of ESM. ESM stands for Expanded Security Maintenance and is included in Ubuntu Pro.

You can see more detail on why certain pockets are enabled by default if you read /etc/apt/apt.conf.d/50unattended-upgrades.

Users may disable unattended-upgrades by setting APT::Periodic::Unattended-Upgrade "0"; in /etc/apt/apt.conf.d/20auto-upgrades.
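
For reference, on a stock Ubuntu 22.04 instance that file contains just two directives; setting the second to "0" is what disables the daily unattended-upgrades run:

// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   // refresh package indexes daily
APT::Periodic::Unattended-Upgrade "1";     // run unattended-upgrades daily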

Unattended-upgrades is relatively simple, but provides a good default position for single-node Ubuntu instances from a security perspective. It ensures the vast number of Ubuntu instances in the world receive security fixes automatically in a very timely manner. However, the unattended-upgrades tool runs independently on each single node. That may mean it is not the best tool for the job if you are administering an entire fleet. We will cover best practices for updating multiple nodes in the next part of this blog series. There is no monitoring built into unattended-upgrades or APT. If APT ends up in a broken state, then unattended-upgrades will stop working. You should therefore either add APT tests into your monitoring or run apt update and apt upgrade manually on a periodic basis.

Required reboots and restarting services

Even if you install a new, secure version of a package onto the disk, you can still be running the old, insecure version in memory. The easiest way to ensure that you are only running the updated versions of packages is to reboot the instance, but this of course means disrupting your workloads.

You can tell whether or not any of the updates you have installed have signalled that an update is required by running pro system reboot-required (or use the API endpoint if doing so programmatically). If so, you can normally see which packages signalled the reboot by looking in /var/run/reboot-required.pkgs (it will often be packages like the Linux kernel, libc or dbus).

If a restart is inconvenient, in many cases you can restart individual services instead of the entire system. On recent versions of Ubuntu, a tool called needrestart will run after APT to tell you about these cases and help you do so.
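
Putting those checks together, a quick interactive session (as root or an elevated user) might look like this:

pro system reboot-required          # prints whether a reboot is currently needed
cat /var/run/reboot-required.pkgs   # lists packages that signalled the reboot, if any
needrestart                         # offers to restart affected services instead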

Livepatching your kernel until the next reboot

The Linux kernel is at the heart of your Ubuntu system and it is not really feasible to restart it without restarting your whole system. It also receives frequent security updates. The traditional approach to staying secure is to apply kernel security updates as soon as they are available and immediately reboot the system. This has a significant impact on service availability, as it will often mean a reboot outside a scheduled maintenance window.

Canonical offers a service called Livepatch (as part of Ubuntu Pro) that can help. Livepatch security-patches the version of the kernel running in memory to keep you secure until you reboot in your next scheduled maintenance window. When using Livepatch, you should still apply any available security updates to the kernel packages on disk as they become available. The benefit is that you take less of a security risk by continuing to run the old kernel version in memory until your next reboot.

Cloud-optimised rolling kernels

Ubuntu improves the user experience on public clouds by maintaining cloud-optimised kernels for our public cloud partners (see a blog on this here). This includes boot time, security and enablement of the latest cloud features. By default, kernels on Ubuntu are normally installed via metapackages, such as linux-generic, linux-generic-hwe, linux-azure, linux-aws, etc. These metapackages point to specific kernel images.

You can find further information on the kernel release cycle and hardware enablement (HWE) kernels here. Importantly, the default cloud-optimised kernels on most of our public cloud partners’ platforms are based on HWE kernels. This means that they will “roll” to a new kernel version approximately every six months for the first two years of the release. They will then stay on the version matching the initial kernel in the next Ubuntu LTS release for the remainder of the release’s support lifetime. As an example, Ubuntu 20.04 LTS started with 5.4, rolled through 5.8, 5.11 and 5.13, and settled on 5.15.

We use HWE kernels where we have consulted with the public cloud and believe this default to be the best balance for most of their customers. Rolling the kernel, however, can cause issues for people using tools that rely on very specific kernel versions. If you do not wish to use the HWE kernels on a cloud image, you must downgrade your kernel. You can do this by installing the linux-${CLOUD}-lts-${UBUNTU_VERSION} metapackage, e.g. linux-azure-lts-22.04, and removing the more recent specific kernel packages. Your kernel will then stay on the GA version of the kernel that was released with that LTS release (e.g. 5.4 in Ubuntu 20.04 LTS). These kernels receive security maintenance for the life of the release (including ESM for Ubuntu Pro customers), but they may not have enablement for the latest cloud features or instance types. 
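
As a sketch of that downgrade on an Azure instance (metapackage names vary by cloud, and you should confirm exactly which newer kernel packages to remove before purging anything):

sudo apt update
# Install the GA kernel metapackage for this cloud and release
sudo apt install linux-azure-lts-22.04
# Remove the rolling HWE metapackage (and then the newer kernel image
# packages it pulled in; exact package names vary)
sudo apt remove linux-azure
sudo reboot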

Conclusion

If you are running Ubuntu in production, it is crucial that you apply available security updates to your instances. This is straightforward to do manually on demand. In our regular Ubuntu server images, we set unattended-upgrades to download and install available security updates every day, which is a sensible default to ensure that the Ubuntu instances in the world stay secure. While they can never be infallible, we set out in the first part of this blog series some of the steps that we take to minimise the risk that a security update will have negative impacts on our users. 

Even if you are automatically downloading and installing updates, it is possible that you are still using the old, insecure version in memory. The foolproof way to ensure you are running updated versions is to reboot, but this impacts availability. You can use tools like the reboot-required API endpoint, needrestart and Livepatch to reduce unplanned downtime.

Finally, by default many of our public cloud images use rolling kernels. This means that the kernel rolls to a later kernel version approximately every six months for the first two years. This gives you access to the latest cloud features, but can cause issues with tools that require specific kernel versions. In those cases, you can downgrade to the GA kernel.

There is more ability to balance stability and security in your specific environment when you are administering multiple Ubuntu instances. We will cover some of those in the final part of this blog series, so stay tuned for future posts.

Further reading:

21 November, 2023 12:15PM

hackergotchi for VyOS

VyOS

VyOS and VPP - progress and plans

A few months ago, attentive readers may have noticed in a blog post that we included the VPP data plane in rolling releases. The most attentive may also have noticed that we have not published any updates, or even related commits, since then. We want to clarify the situation with VPP, explain the future of VyOS with VPP, and ask for a little feedback from the community.

Short summary for those who want to save time: The VPP data plane integration is under active development and will be available for VyOS as an addon.

The decision about VPP integration was not spontaneous; it was made at a moment when we did not have enough time to finish it before the next stable release date. We know you have waited a long time for the next LTS version, and we do not want to make you wait even longer while we stabilise the VPP integration to the point where it can be included in the main build without any risk to reliability.

But where others see problems, we see opportunities. For many years we have had another long-standing idea: giving developers the ability to create third-party addons for VyOS. And we thought: this could be the solution! Let's pretend that we are third-party developers and try it on ourselves. This allows us to unblock the 1.4 release date and still offer VPP, a little later, for those who need it.

So we moved all development to a new, cosy place outside the main source code tree, where experiments cannot hurt anyone.

Now, after months of work and tests, we think it is time to flip the switch: clean VPP out of vyos-1x and present the first (unstable, unreliable, not ready for production) VyOS addon: the VPP data plane.

Previous VPP announcements generated some buzz in the community, but there was almost no feedback. We do not think this is because the solution was so perfect and complete that there were no bugs to report or ideas to share. That is why, this time, we are asking for a small favour in exchange for VPP addon access: we need your feedback.

It does not matter whether you are an enthusiast who plays with networks on long autumn evenings or an engineer at a big corporation searching for new solutions; your feedback is important and valuable. It allows us to create a solution that fits you, first of all.

You can find the form we ask you to fill out for access to the VPP addon at the end of this post. Access will be provided for one month (or longer, if you are active), and you will need to contact us with feedback. We hope this is not too high a price for the ability to increase VyOS performance to previously unthinkable levels.

Meanwhile, here is what you may expect:
- full integration with the kernel (transparent forwarding between the kernel and VPP) for unicast traffic
- Ethernet interfaces
- Bonding interfaces
- VLANs
- VXLAN
- GRE


Please, fill out this form to get access. We hope you will get joy while playing with the addon!

21 November, 2023 10:50AM by Taras Pudiak (taras@vyos.io)

hackergotchi for Deepin

Deepin

deepin V23 Successfully Adapts to the Domestic Graphics Card MTT S80!

Recently, with the support of community enthusiasts and the deepin R&D team, we completed the adaptation work for the Moore Threads MTT S80 graphics card, successfully driving the MTT S80 on deepin V23 Beta2. The adapted MTT S80 is equipped with a complete "Spring Dawn" chip core, with 4096 built-in MUSA stream processing cores and 128 Tensor cores. At 1.8GHz, it is capable of delivering 14.4 TFLOPS of single-precision floating-point performance. The deepin MTT S80 trial image is now available for download and trial. The MTT S80 graphics driver runs stably ...Read more

21 November, 2023 08:11AM by aida

November 20, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 814

Welcome to the Ubuntu Weekly Newsletter, Issue 814 for the week of November 12 – 18, 2023. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Colombia – 18th anniversary and second half of the semester birthday
  • LoCo Events
  • The Open Source Fortress is now live!
  • Anbox Cloud 1.20.0 has been released
  • Modern communication platforms: Matrix testing
  • Netplan brings consistent network configuration across Desktop, Server, Cloud and IoT
  • LXQt 1.4, Arriving at a Lubuntu Backports PPA Near You
  • Lubuntu 23.10 Backports PPA Released with LXQt 1.4
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


20 November, 2023 10:05PM

hackergotchi for Purism PureOS

Purism PureOS

Digital Freedom: No Clicking on “I Agree” to Use Smartphones or PCs Supported by PureOS!

Smartphones, tablet PCs, laptop PCs, or servers supported by PureOS, from Purism, do not require the Purism customer to click on “I Agree” to accept intrusive terms of service (the ones that expose the customer to predatory surveillance practices). As a matter of fact, Purism customers do not have to click “I […]

The post Digital Freedom: No Clicking on “I Agree” to Use Smartphones or PCs Supported by PureOS! appeared first on Purism.

20 November, 2023 07:29PM by Rex M. Lee

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical at AWS re:Invent – What you need to know!

Though the Las Vegas Grand Prix has come to a close, the Canonical team is gearing up for the next big race at AWS re:Invent, slated for November 27-December 1, 2023. After a thrilling event in 2022, we’re excited and proud to be a sponsor once again in 2023. Learn more about the ways that the Canonical team will be driving the conversation.  

Fuel your AWS re:Invent journey with a visit to the Canonical-Ubuntu booth #301 in the Venetian Expo Hall. Join us as we rev up the conversation and explore pivotal topics that will turbocharge the way you approach and optimize your cloud environment.

Our team of experts will be available to offer in-depth insights into the following areas:

  • Initiate your enterprise AI/ML project using open-source tools like Kubeflow or MLFlow on AWS.
  • Elevate your Ubuntu fleet to Pro status with a comprehensive understanding of the benefits and the practical how-tos.
  • Reduce your Total Cost of Ownership (TCO) with insights on smoothly transitioning to Ubuntu Pro
  • Explore the robust security measures of handling FIPS/CIS workloads using Ubuntu Pro FIPS, ensuring the utmost protection for your critical data
  • Leverage the power of container technologies including Ubuntu LTS containers and ROCKS within your cloud ecosystem
  • Review the process of creating personalized golden images based on Ubuntu Pro
  • EKS cluster creation with Ubuntu and Ubuntu Pro

While at our booth, join us in steering the narrative of open source innovation. Become an Open Source Champion by sharing your unique experiences, perspectives, and visions to earn your well-deserved title plus an exclusive Canonical swag bag.

Take a pit stop on Tuesday, November 28 and join Canonical AI/ML Product Manager Andreea Munteanu for a lightning talk in the Developer Solutions Zone – Theater 3 in the Venetian Expo Hall to discover how enterprises are leveraging Ubuntu as a catalyst for their AI projects. And that’s not all! Learn how open source MLOps on AWS can supercharge your journey. 

On Thursday, November 30, our Canonical VP of Cloud, Alex Gallagher will sit down for an exclusive interview with Geekwire to discuss how the future is open source and Canonical’s mission to make it secure and accessible to everyone. Stay tuned for our interview to be featured on the Geekwire YouTube channel. 

So, buckle up for an exhilarating ride through the fast lanes of cloud innovation! Whether you’re looking to fine-tune your approach, optimize your cloud environment, or simply increase your knowledge, our team at Canonical-Ubuntu is here to ensure your journey is nothing short of extraordinary. We’ll see you at the finish line!

Register for the event

20 November, 2023 07:23PM

Launchpad News: Introducing Project-Scoped Access Tokens

Access tokens can be used to access repositories on behalf of someone. They have scope limitations, optional expiry dates, and can be revoked at any time. They are a stricter and safer alternative to using real user authentication when needing to automate pushing and/or pulling from your git repositories.

This is a concept that has existed in Launchpad for a while now. If you have the right permissions in a git repository, you might have seen a “Manage Access Tokens” button in your repository’s page in the past.

These tokens can be extremely useful. But if you have multiple git repositories within a project, it can be a bit of a nuisance to create and manage access tokens for each repository.

So what’s new? We’ve now introduced project-scoped access tokens. These tokens reduce the effort of creating and maintaining tokens for larger projects. A project access token works as authentication for any git repository within that project.

Let’s say user A wants to run something in a remote server that requires pulling multiple git repositories from a project. User A can create a project access token, and restrict it to “repository pull” scope only. This token will then be valid authentication to pull from any repository within that project. And user A will be able to revoke that token once it’s no longer needed, keeping their real user authentication safe.
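As a purely illustrative sketch, pulling with such a token over HTTPS could look like the following. The URL layout and placeholder names are assumptions on my part, not taken from this post; see the documentation referenced below for the authoritative format.

# hypothetical: clone a repository in <project> using a project access token
# (placeholders are illustrative; consult the Launchpad documentation for the exact form)
git clone https://<launchpad-username>:<project-token>@git.launchpad.net/<project>/<repository>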

The same token will be invalid for pushing, or for accessing repositories within other projects. Also note that this is used for ‘authentication’, not ‘authorization’ – if the user doesn’t have access to a given git repository, their access token will not grant them permissions.

Anyone with permission to edit a project will be able to create an access token, either through the UI or the API, using the same method as for creating access tokens for git repositories. See the Generating Access Tokens section in our documentation for instructions and other information.

This feature was implemented on request by our colleagues from the ROS team. We would love to get some feedback on whether this also covers your use case. Please let us know.

20 November, 2023 09:31AM

hackergotchi for Deepin

Deepin

November 19, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Andrea Corbellini: Running the operating system that you're currently using in a virtual machine (with Secure Boot and TPM emulation)

In this article I will show you how to start your current operating system inside a virtual machine. That is: launching the operating system (with all your settings, files, and everything), inside a virtual machine, while you’re using it.

This article was written for Ubuntu, but it can be easily adapted to other distributions, and with appropriate care it can be adapted to non-Linux kernels and operating systems as well.

Motivation

Before we start, why would a sane person want to do this in the first place? Well, here’s why I did it:

  • To test changes that affect Secure Boot without a reboot.

    Recently I was doing some experiments with Secure Boot and the Trusted Platform Module (TPM) on a new laptop, and I got frustrated by how time consuming it was to test changes to the boot chain. Every time I modified a file involved during boot, I would need to reboot, then log in, then re-open my terminal windows and files to make more modifications… Plus, whenever I screwed up, I would need to manually recover my system, which would be even more time consuming.

    I thought that I could speed up my experiments by using a virtual machine instead.

  • To predict the future TPM state (in particular, the values of PCRs 4, 5, 8, and 9) after a change, without a reboot.

    I wanted to predict the values of my TPM PCR banks after making changes to the bootloader, kernel, and initrd. Writing a script to calculate the PCR values automatically is in principle not that hard (and I actually did it before, in a different context), but I wanted a robust, generic solution that would work on most systems and in most situations, and emulation was the natural choice.

  • And, of course, just for the fun of it!

To be honest, I’m not a big fan of Secure Boot. The reason why I’ve been working on it is simply that it’s the standard nowadays and so I have to stick with it. Also, there are no real alternatives out there to achieve the same goals. I’ll write an article about Secure Boot in the future to explain the reasons why I don’t like it, and how to make it work better, but that’s another story…

Procedure

The procedure that I’m going to describe has 3 main steps:

  1. create a copy of your drive
  2. emulate a TPM device using swtpm
  3. emulate the system with QEMU

I’ve tested this procedure on Ubuntu 23.04 (Lunar) and 23.10 (Mantic), but it should work on any Linux distribution with minimal adjustments. The general approach can be used for any operating system, as long as appropriate replacements for QEMU and swtpm exist.

Prerequisites

Before we can start, we need to install:

  • QEMU: a virtual machine emulator
  • swtpm: a TPM emulator
  • OVMF: a UEFI firmware implementation

On a recent version of Ubuntu, these can be installed with:

sudo apt install qemu-system-x86 ovmf swtpm

Note that OVMF only supports the x86_64 architecture, so we can only emulate that. If you run a different architecture, you’ll need to find another UEFI implementation that is not OVMF (but I’m not aware of any freely available ones).

Create a copy of your drive

We can decide to either:

  • Choice #1: run only the components involved early at boot (shim, bootloader, kernel, initrd). This is useful if you, like me, only need to test those components and how they affect Secure Boot and the TPM, and don’t really care about the rest (the init process, login manager, …).

  • Choice #2: run the entire operating system. This can give you a fully usable operating system running inside the virtual machine, but may also result in some instability inside the guest (because we’re giving it a filesystem that is in use), and may also lead to some data loss if we’re not careful and make typos. Use with care!

Choice #1: Early boot components only

If we’re interested in the early boot components only, then we need to make a copy of the following from our drive: the GPT partition table, the EFI partition, and the /boot partition (if we have one). Usually all three pieces are at the “start” of the drive, but this is not always the case.

To figure out where the partitions are located, run:

sudo parted -l

On my system, this is the output:

Model: WD_BLACK SN750 2TB (nvme)
Disk /dev/nvme0n1: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  525MB   524MB   fat32              boot, esp
 2      525MB   1599MB  1074MB  ext4
 3      1599MB  2000GB  1999GB                     lvm

In my case, the partition number 1 is the EFI partition, and the partition number 2 is the /boot partition. If you’re not sure what partitions to look for, run mount | grep -e /boot -e /efi. Note that, on some distributions (most notably the ones that use systemd-boot), a /boot partition may not exist, so you can leave that out in that case.

Anyway, in my case, I need to copy the first 1599 MB of my drive, because that’s where the data I’m interested in ends: those first 1599 MB contain the GPT partition table (which is always at the start of the drive), the EFI partition, and the /boot partition.

Now that we have identified how many bytes to copy, we can copy them to a file named drive.img with dd (maybe after running sync to make sure that all changes have been committed):

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead),
# and 'count' with the number of MBs to copy
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse
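To sanity-check the copy, we can ask parted to print the partition table of the image file. This verification step is my own addition, not part of the original procedure:

# the image should show the same GPT layout as the real drive
parted drive.img unit MB print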

Choice #2: Entire system

If we want to run our entire system in a virtual machine, then I would recommend creating a QEMU copy-on-write (COW) file:

# replace '/dev/nvme0n1' with your main drive (which may be '/dev/sda' instead)
sudo -g disk qemu-img create -f qcow2 -b /dev/nvme0n1 -F raw drive.qcow2

This will create a new copy-on-write image using /dev/nvme0n1 as its “backing storage”. Be very careful when running this command: you don’t want to mess up the order of the arguments, or you might end up writing to your storage device (leading to data loss)!

The advantage of using a copy-on-write file, as opposed to copying the whole drive, is that this is much faster. Also, if we had to copy the entire drive, we might not even have enough space for it (even when using sparse files).

The big drawback of using a copy-on-write file is that, because our main drive likely contains filesystems that are mounted read-write, any modification to the filesystems on the host may be perceived as data corruption on the guest, and that in turn may cause all sort of bad consequences inside the guest, including kernel panics.

Another drawback is that, with this solution, later we will need to give QEMU permission to read our drive, and if we’re not careful enough with the commands we type (e.g. we swap the order of some arguments, or make some typos), we may potentially end up writing to the drive instead.

Emulate a TPM device using swtpm

There are various ways to run the swtpm emulator. Here I will use the “vTPM proxy” way, which is not the easiest, but has the advantage that the emulated device will look like a real TPM device not only to the guest, but also to the host, so that we can inspect its PCR banks (among other things) from the host using familiar tools like tpm2_pcrread.

First, enable the tpm_vtpm_proxy module (which is not enabled by default on Ubuntu):

sudo modprobe tpm_vtpm_proxy

If that worked, we should have a /dev/vtpmx device. We can verify its presence with:

ls /dev/vtpmx

swtpm in “vTPM proxy” mode will interact with /dev/vtpmx, but in order to do so it needs the sys_admin capability. On Ubuntu, swtpm ships with this capability explicitly disabled by AppArmor, but we can enable it with:

sudo sh -c "echo '  capability sys_admin,' > /etc/apparmor.d/local/usr.bin.swtpm"
sudo systemctl reload apparmor

Now that /dev/vtpmx is present, and swtpm can talk to it, we can run swtpm in “vTPM proxy” mode:

sudo mkdir /tmp/swtpm-state
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2

Upon start, swtpm should create a new /dev/tpmN device and print its name on the terminal. On my system, I already have a real TPM on /dev/tpm0, and therefore swtpm allocates /dev/tpm1.
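If it’s unclear which device was just created, listing the TPM character devices before and after starting swtpm makes the new one obvious (a small check of my own, not from the original steps):

# the highest-numbered /dev/tpmN should be the freshly created proxy device
ls -l /dev/tpm*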

The emulated TPM device will need to be readable and writable by QEMU, but by default it is accessible only by root, so either we run QEMU as root (not recommended), or we relax the permissions on the device:

# replace '/dev/tpm1' with the device created by swtpm
sudo chmod a+rw /dev/tpm1

Make sure not to accidentally change the permissions of your real TPM device!

Emulate the system with QEMU

Inside the QEMU emulator, we will run the OVMF UEFI firmware. On Ubuntu, the firmware comes in 2 flavors:

  • with Secure Boot enabled (/usr/share/OVMF/OVMF_CODE_4M.ms.fd), and
  • with Secure Boot disabled (/usr/share/OVMF/OVMF_CODE_4M.fd)

(There are actually even more flavors, see this AskUbuntu question for the details.)

In the commands that follow I’m going to use the Secure Boot flavor, but if you need to disable Secure Boot in your guest, just replace .ms.fd with .fd in all the commands below.

To use OVMF, first we need to copy the EFI variables to a file that can be read & written by QEMU:

cp /usr/share/OVMF/OVMF_VARS_4M.ms.fd /tmp/

This file (/tmp/OVMF_VARS_4M.ms.fd) will be the equivalent of the EFI flash storage, and it’s where OVMF will read and store its configuration, which is why we need to make a copy of it (to avoid modifications to the original file).

Now we’re ready to run QEMU:

  • If you copied only the early boot files (choice #1):

    # replace '/dev/tpm1' with the device created by swtpm
    qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=raw,file=drive.img \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    
  • If you have a copy-on-write file for the entire system (choice #2):

    # replace '/dev/tpm1' with the device created by swtpm
    sudo -g disk qemu-system-x86_64 \
      -accel kvm \
      -machine q35,smm=on \
      -cpu host \
      -smp cores=4,threads=1 \
      -m 4096 \
      -vga virtio \
      -bios /usr/share/ovmf/OVMF.fd \
      -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
      -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
      -drive if=virtio,format=qcow2,file=drive.qcow2 \
      -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
      -device tpm-tis,tpmdev=tpm0
    

    Note that this last command makes QEMU run as the disk group: on Ubuntu, this group has the permission to read and write all storage devices, so be careful when running this command, or you risk losing your files forever! If you want to add more safety, you may consider using an ACL to give the user running QEMU read-only permission to your backing storage, as sketched below.
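For instance, such an ACL could look like this; using setfacl here is my suggestion rather than part of the original instructions, and you should adjust the device and user to your system:

# grant the invoking user read-only access to the backing device...
sudo setfacl -m u:$USER:r /dev/nvme0n1
# ...verify it, and remove it again when done
getfacl /dev/nvme0n1
sudo setfacl -x u:$USER /dev/nvme0n1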

In either case, after launching QEMU, our operating system should boot… while running inside itself!

In some circumstances though it may happen that the wrong operating system is booted, or that you end up at the EFI setup screen. This can happen if your system is not configured to boot from the “first” EFI entry listed in the EFI partition. Because the boot order is not recorded anywhere on the storage device (it’s recorded in the EFI flash memory), of course OVMF won’t know which operating system you intended to boot, and will just attempt to launch the first one it finds. You can use the EFI setup screen provided by OVMF to change the boot order in the way you like. After that, changes will be saved into the /tmp/OVMF_VARS_4M.ms.fd file on the host: you should keep a copy of that file so that, next time you launch QEMU, you’ll boot directly into your operating system.
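For example, after configuring the boot order once, you could stash the variables file somewhere safe and restore it before each run (the backup path is illustrative):

# keep the configured EFI variables for later runs
cp /tmp/OVMF_VARS_4M.ms.fd ~/ovmf-vars-configured.fd
# ...and restore them before the next launch
cp ~/ovmf-vars-configured.fd /tmp/OVMF_VARS_4M.ms.fd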

Reading PCR banks after boot

Once our operating system has launched inside QEMU, and after the boot process is complete, the PCR banks will be filled and recorded by swtpm.

If we choose to copy only the early boot files (choice #1), then of course our operating system won’t be fully booted: it’ll likely hang waiting for the root filesystem to appear, and may eventually drop to the initrd shell. None of that really matters if all we want is to see the PCR values stored by the bootloader.

Before we can extract those PCR values, we first need to stop QEMU (Ctrl-C is fine), and then we can read them with tpm2_pcrread:

# replace '/dev/tpm1' with the device created by swtpm
tpm2_pcrread -T device:/dev/tpm1

Using the method described in this article, PCRs 4, 5, 8, and 9 inside the emulated TPM should match the PCRs in our real TPM. And here comes an interesting application of this method: if we upgrade our bootloader or kernel, and we want to know the future PCR values that our system will have after reboot, we can simply follow this procedure and obtain those PCR values without shutting down our system! This can be especially useful if we use TPM sealing: we can reseal our secrets and make them unsealable at the next reboot without trouble.
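For a quick comparison, assuming the real TPM is /dev/tpm0 and the emulated one is /dev/tpm1 as above, we can read just those four PCRs from both (the PCR-selection syntax is standard tpm2-tools usage):

# read PCRs 4, 5, 8, 9 from the real TPM and from the emulated one
tpm2_pcrread -T device:/dev/tpm0 sha256:4,5,8,9
tpm2_pcrread -T device:/dev/tpm1 sha256:4,5,8,9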

Restarting the virtual machine

If we want to restart the guest inside the virtual machine, and obtain a consistent TPM state every time, we should start from a “clean” state every time, which means:

  1. restart swtpm
  2. recreate the drive.img or drive.qcow2 file
  3. launch QEMU again

If we don’t restart swtpm, the virtual TPM state (and in particular the PCR banks) won’t be cleared, and new PCR measurements will simply be added on top of the existing state. If we don’t recreate the drive file, it’s possible that some modifications to the filesystems will have an impact on the future PCR measurements.

We don’t necessarily need to recreate the /tmp/OVMF_VARS_4M.ms.fd file every time. In fact, if you need to modify any EFI setting to make your system bootable, you might want to preserve it so that you don’t need to change EFI settings at every boot.
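Until such a script exists, here is a rough sketch of one iteration of the restart loop, assuming choice #1 and the paths used earlier in this article. Device names will vary between systems; treat this as a starting point, not a finished tool:

#!/bin/sh
# rough sketch: fresh TPM state + fresh drive copy + QEMU (choice #1 paths assumed)
sudo pkill swtpm 2>/dev/null || true
sudo rm -rf /tmp/swtpm-state && sudo mkdir /tmp/swtpm-state
sync && sudo -g disk dd if=/dev/nvme0n1 of=drive.img bs=1M count=1599 conv=sparse
sudo swtpm chardev --tpmstate dir=/tmp/swtpm-state --vtpm-proxy --tpm2 &
sleep 1                      # give swtpm time to create its /dev/tpmN
sudo chmod a+rw /dev/tpm1    # replace with the device swtpm actually created
qemu-system-x86_64 \
  -accel kvm -machine q35,smm=on -cpu host -smp cores=4,threads=1 -m 4096 \
  -vga virtio \
  -drive if=pflash,unit=0,format=raw,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd,readonly=on \
  -drive if=pflash,unit=1,format=raw,file=/tmp/OVMF_VARS_4M.ms.fd \
  -drive if=virtio,format=raw,file=drive.img \
  -tpmdev passthrough,id=tpm0,path=/dev/tpm1,cancel-path=/dev/null \
  -device tpm-tis,tpmdev=tpm0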

Automating the entire process

I’m (very slowly) working on turning this entire procedure into a script, so that everything can be automated. Once I find some time I’ll finish the script and publish it, so if you liked this article, stay tuned, and let me know if you have any comment/suggestion/improvement/critique!

19 November, 2023 04:33PM

November 17, 2023

hackergotchi for ARMBIAN

ARMBIAN

Armbian Leaflet #14

Dear Armbian Community,

We’re excited to share the latest updates!

Board Support Policy Changes

We’re refining our approach by focusing on select boards for better support and reliability:

  • Standard Support: Boards receiving comprehensive, reliable support.
  • Staging Support: Boards undergoing support validation.
  • Community Maintained: Boards benefiting from community support.

Discover more about our updated support policies and details here.

Supported boards must significantly benefit Armbian, with their statuses confirmed and agreed upon by our team to ensure high-quality standards.

Upcoming Release Highlights

Our next release will prioritize supported and community-supported images. Plus, the introduction of the Armbian-unofficial prefix allows for customized image creation.

The upcoming LTS kernel, version 6.6, will integrate into specific build targets! We’re also experimenting with UEFI EDK2, exploring new possibilities for our ecosystem. Track our progress on GitHub.

Call Out for Help

As we approach the release, your contributions are invaluable. Join us in refining and fixing issues to make Armbian even better:

  • Participate in identifying and fixing bugs here.
  • Contribute to ongoing pull requests and bug fixes on GitHub.
  • Become a sponsor of our work.

Thank you for being an essential part of the Armbian community!

The Armbian team

17 November, 2023 08:07PM by Didier Joomun

hackergotchi for GreenboneOS

GreenboneOS

CVE News: Vulnerability Tests for Critical Atlassian and F5 BIG-IP Vulnerabilities Released by Greenbone

Our developers have provided vulnerability tests for two critical vulnerabilities in widely used enterprise software. Within a very short time, tests for CVE-2023-22518 and CVE-2023-46747 were integrated, and customers of Greenbone’s Enterprise Feed were protected.

Knowledge management tools Confluence and Jira from Australian vendor Atlassian have been hit by a serious security vulnerability, rated 9.8 out of 10 on the CVSS scale. Since November 8, CVE-2023-22518 has been actively exploited by attackers gaining unauthorized access to company data, according to media reports.

According to the company, the “authentication flaw” affects all versions of Confluence Data Center and Server, but not the cloud version at Atlassian itself. For anyone else, including users of Jira, but especially all publicly accessible Confluence servers, there is a “high risk and need to take immediate action”, writes Atlassian.

We reacted quickly and provided our customers with appropriate tests before ransomware attacks could be successful. Customers of the Greenbone Enterprise Feed were warned and reminded of the patch via update.

Remote code execution: F5 BIG-IP allows request smuggling

Also at the end of October, security researchers from Praetorian Labs discovered a serious vulnerability (CVE-2023-46747) in the products of application security expert F5. The American company’s solutions are designed to protect large networks and software environments; the software, which was launched in 1997 as a load balancer, is primarily used in large enterprises.

However, according to the experts, attackers can remotely execute code on the BIG-IP servers by adding arbitrary system commands to the administration tools via manipulated URLs. Details can be found at Praetorian; patches are available, and a long list of BIG-IP products of versions 13, 14, 15, 16, and 17 are affected, both in hardware and software.

We reacted quickly and integrated tests into our vulnerability scanners on the same day; these Greenbone Enterprise tests check BIG-IP installations for vulnerable versions and, if necessary, point to the patches listed at F5.

Our vulnerability management products, the Greenbone Enterprise Appliances, offer the best protection.

Professional vulnerability management is an indispensable part of IT security. It enables the early detection of risks and provides valuable instructions for their elimination.

The Greenbone Enterprise Feed is updated daily to detect new vulnerabilities. We therefore recommend that you regularly update and scan all your systems. Please also read this article on IT security and the timeline of common attack vectors.


17 November, 2023 08:16AM by Markus Feilner

November 16, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: Lubuntu 23.10 Backports PPA Released with LXQt 1.4

When users first download Lubuntu, they are presented with two options: install the latest Long-Term Support release, providing them with a rock-solid and stable base (we assume most users choose this option), or install the latest interim release, providing the newest base with the latest LXQt release. As we have mentioned in previous announcements, Kubuntu and […]

16 November, 2023 10:10PM

Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint


Photo by Pixabay

Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically, the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.

Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores 8GB of RAM) the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:

  • 2x less disk space used (1,417MB vs 2,940MB, including initrd)

  • 3x less peak RAM usage for the initrd boot (68MB vs 204MB)

  • 0.5x increase in download size (949MB vs 600MB)

  • 2.5x faster initrd generation (4.5s vs 11.3s)

  • approximately the same total time (103s vs 98s, hardware dependent)


For minimal cloud images that do not install either linux-firmware or modules extra the numbers are:

  • 1.3x less disk space used (548MB vs 742MB)

  • 2.2x less peak RAM usage for initrd boot (27MB vs 62MB)

  • 0.4x increase in download size (207MB vs 146MB)


Hopefully, the compromise of download size, relative to the disk space & initrd savings is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the likely best saving is to receive air-gapped updates or skip updates.


This was achieved by pre-compressing kernel modules and firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd from split cpio archives (uncompressed for the pre-compressed files, compressing only the userspace portions of the initrd); enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze, whilst leveraging the experience and some of the design choices and implementations we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later. The complete gains can only be experienced on Mantic and later.
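If you want to see these changes for yourself on a 23.10 system, a couple of quick checks (my own illustration, not from the original post):

# kernel modules now ship pre-compressed with zstd
find /lib/modules/$(uname -r) -name '*.ko.zst' | head -n 3
# the split-cpio initrd can be unpacked for inspection
unmkinitramfs /boot/initrd.img-$(uname -r) /tmp/initrd-unpacked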


The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with kernel-space module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they do get fixes and features like these first.


The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself Dimitri John Ledkov ensuring the most optimal solution is implemented, everything lands on time, and even implementing portions of the final solution.


Hi, It's me, I am a Staff Engineer at Canonical and we are hiring https://canonical.com/careers.


Lots of additional technical details and benchmarks on a huge range of diverse hardware and architectures, plus bikeshedding of all the things, are available in the Ubuntu Discourse post.

For questions and comments, please post to the Kernel section on Ubuntu Discourse.



16 November, 2023 10:45AM by Dimitri John Ledkov (noreply@blogger.com)

Martin-Éric Racine: dhcpcd almost ready to replace ISC dhclient in Debian

A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient a.k.a. isc-dhcp-client. Current status:

  • Upstream now regularly produces releases and with a smaller delta than before. This makes it easier to track possible breakage.
  • Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
  • The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.

16 November, 2023 09:38AM by Martin-Éric (noreply@blogger.com)

Ubuntu Blog: Implementing edge computing for V2X use cases in automotive

Vehicles are becoming more and more like mobile data centres. On average, a modern vehicle contains over 60 sensors that monitor various aspects of the vehicle, generating an immense amount of data that is processed on the go. This transformation is creating an unprecedented set of challenges for OEMs.

Edge computing is a new paradigm that is changing how data is processed in these types of environments. It involves decentralising data processing and analysis, bringing computational capabilities closer to where the data is generated. Clouds optimised for edge processing can address the problems posed by the vast volumes of data generated by autonomous systems, ensuring consistent performance regardless of the vehicle’s position. 

This blog will dive into edge computing and study how clouds that are optimised to process data at the edge can address automotive challenges.

What is edge computing?

At its core, edge computing is a distributed computing model that processes data at or near the source of data generation. This means that rather than sending all the data to a remote cloud server for processing, analysis and decision-making, the computations are done as close as possible to where the sensors or operations are located. This provides three main benefits:

  • Reduced latency – Providing compute in proximity to the data source ensures that the latency is reduced to a minimum, which guarantees faster response times and enables real-time processing. 
  • Data pre-processing – Only the required, pre-processed data is transmitted to the centralised cloud servers. Fewer data transfers mean lower bandwidth requirements, enabling massive cost savings. And you benefit from better privacy as the stored data is already curated and protected.
  • Improved scalability and network resilience – The distributed approach makes it easier to scale and enhances network resiliency. You can deploy processing capacity wherever it is needed and with the necessary redundancy based on local requirements. For example, if you need to use additional sensors in a specific context, you can deploy localised clouds for high-performance processing near the sensors.

Relevance of edge computing in automotive

These benefits are uniquely suited to addressing the new challenges facing the automotive industry, especially when it comes to software and data processing. Vehicles are moving objects by definition. Their location is constantly changing, which leads to difficulties in processing the large amounts of data they generate. This is especially true for autonomous vehicles, which depend on real-time data to operate. Let’s explore the different challenges the industry faces and how they relate to edge computing.

Autonomous driving and vehicle-to-everything (V2X) 

With Autonomous Driving (AD) and emerging regulations come new “vehicle-to-everything” (V2X) use cases. Authorities in multiple countries are considering making V2X communications mandatory in future vehicles. For instance, the United States National Highway Traffic Safety Administration (NHTSA) is considering specific V2X technologies for collision avoidance systems.

V2X involves all the possible interactions between vehicles and the surrounding environment, like the local infrastructure (V2I), the surrounding vehicles (V2V), and so on. For features related to accident information sharing for example, it is important for the vehicles to communicate with the road infrastructure itself, even in areas with very limited network coverage. These use cases are very challenging today with a traditional cloud infrastructure.

The real-time requirement is high and critical for the safety of the passengers. The communication between multiple vehicles and an infrastructure point is subject to intermittent connectivity problems. And V2X often involves exchanging of sensitive information, so ensuring that this data remains secure and private is crucial. Edge computing delivers the resilience, security and minimal latency necessary to satisfy these requirements.

Far-edge computing for sensor fusion

AD vehicles are known for generating enormous amounts of data collected through numerous sensors. The volume of data is nearly impossible to transmit to a central cloud for processing. Not only would the upload take too long before a processed response could be obtained; the cost of cloud processing and data transfer would also be too high.

This is why all OEMs pre-process the data from within the vehicle first, before sending only relevant information to the cloud. That way, most of the urgent decisions can be done onboard, using pre-trained algorithms and ensuring that the response is taken below a certain time threshold. 

That being said, the raw processing power and the low-latency requirements are still difficult to meet due to the quantity of data generated for sensor fusion. Sensor fusion involves gathering data from different sensors within a vehicle, such as cameras, radars, lidars, ultrasonic sensors, and other detection mechanisms. By combining the information from these diverse sensors, the system can obtain a more comprehensive view of the vehicle’s surroundings. Pre-processing such huge volumes of combined data requires an edge-computing appliance which is as close as possible to the sensors, but far from the central cloud infrastructure – what we call “far-edge cloud”.


Near-edge computing within factories

When it comes to vehicle manufacturing, quality checks often require the analysis of large volumes of precise 3D data. This processing demands substantial 3D capabilities. Traditional cloud-based solutions pose networking challenges as well as security concerns. 

Sending proprietary data off-site for analysis, which could include confidential details on a company’s manufacturing approach, should be avoided as much as possible. In this case, the vehicle manufacturer would benefit from edge clouds that reside within the factories and are directly integrated with the central cloud infrastructure (“near-edge cloud”). The data processing is done in parallel within the facilities and protected by the factory premises.

MicroCloud, the Canonical solution for edge computing 

Edge computing architectures deploy cloud capabilities to the edges, enabling computing and storing features in various distributed locations. There is a growing demand for simple edge cloud solutions due to the rising volume of data generated at the edge. The deployment of cloud capabilities at the edge comes with challenges such as orchestration, security and maintenance.

To solve these challenges, Canonical recently announced MicroCloud. MicroCloud is a low-touch cloud solution designed for scalable clusters and edge deployments. Delivering extensive automation, this solution enables you to deploy your edge cloud with a single command, and significantly simplifies ongoing maintenance. 

MicroCloud offers several distinct advantages over traditional cloud-based processing, by enhancing and simplifying the deployment and operations of clouds in remote locations. This new Canonical product enables the deployment of a lightweight but scalable cloud which is perfectly suited to edge use cases that require security and efficiency.
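As a hypothetical quick start based on Canonical's announcement (the exact snap names and commands are assumptions here; check the official MicroCloud documentation):

# install the MicroCloud snap and its companion components on each node
sudo snap install microcloud lxd microceph microovn
# then bootstrap the cluster from one node with a single command
sudo microcloud init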


Pushing further with 5G and AI

It’s possible to combine edge computing with emerging technologies in order to enable even more capabilities. For example, edge computing architectures are invaluable for 5G network slicing. 

5G slices can be configured in order to guarantee low-latency in some areas and high-speed communication in others. The networking architecture can then be tailored and optimised according to the local needs of the overall infrastructure. This will be instrumental for the rise of remote-controlled autonomous vehicles.

AI is also deeply intertwined with edge computing, as algorithms power the decision-making processes within autonomous vehicles. OEMs need to ensure that the trained AI models receive the right data required to ensure safety and efficiency of the vehicles and their occupants. The full stack needs to be defined accordingly so that the models can use the power and latency of edge clouds in the best possible way. 

Indeed, in order to take full advantage of the capabilities offered by edge clouds, there needs to be an optimised software (and hardware) stack. The distributed nature of edge computing architectures makes them a natural deployment paradigm for AI systems.

Conclusion

As the automotive industry continues to evolve towards software, connectivity and autonomy, new challenges arise. Edge computing offers solutions that enhance performance and user experience while addressing security risks and data privacy concerns.

As we move into 2024, expect to see strong investments and integrations of edge computing technologies in automotive factories, V2X applications and autonomous driving features. 

Make sure you use optimised data-driven solutions and learn all that there is to know about MicroCloud, Canonical’s low-touch private cloud optimised for edge use cases. MicroCloud is ideal to drive innovation in automotive with safer, smarter, and connected vehicles, factories and infrastructures.

16 November, 2023 08:30AM

Podcast Ubuntu Portugal: E273 Querida, Encolhi a Cloud

Miguel, like a knock-off Prometheus from Wish, defied the gods and is paying for it; Diogo has been experimenting with virtualization, but there is also news from UBports about Ubuntu Touch OTA-3; Canonical officially announced the "new" Microcloud; and Diogo also took the opportunity to rant about Thunderbird's ailments.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. And you can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT License. The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing to allow other types of use; contact us for validation and authorization.

16 November, 2023 12:00AM

November 15, 2023

Scarlett Gately Moore: Farewell for now, again.

I write this with a heavy heart. Exactly one year ago I lost my beloved job. That was all me; I had a terrible run of bad luck with COVID and I never caught up. In the last year, I have taken on several new projects to re-create a new image for myself and to make up for the previous year, and I believe I worked very hard in doing so. Unfortunately, my seemingly good interviews have not ended in a job. One potential job I did not put as much effort into as I should have, because I put all my cards into a project that didn’t quite turn out as expected. I do hope it still goes through for the KDE community as a whole, because it is really cool, but it isn’t the job I thought.

I have been relying purely on donations for survival, and it simply isn’t enough. I am faced once again with no internet to even do my open source work (Snaps, KDE neon, Debian and everything that links to those). I simply can’t put the burden of my stubbornness on my family any longer. Bills are long overdue; we have learned to live without many things, but the stress of essential bills and living expenses going unpaid is simply too much. I do thank each and every one of you that has contributed to my fundraisers. It means the world to me that people do care. It just isn’t enough.

So with the sunset of Witch Wells, I am sunsetting my software career for now and will be looking for something, anything local, just to pay some bills, calm our nerves and hopefully find some happiness again. I am tired, broke, stressed out and burned out. I will be back when I can breathe again with my finances.

If you can spare some changes to help with gas, propane, internet I would be so ever grateful.

So long for now.

~ Scarlett

https://gofund.me/1346869d

15 November, 2023 04:53PM

hackergotchi for Tails

Tails

Tails 5.19.1

This release is an emergency release to fix an important security vulnerability in Tor.

Changes and updates

  • Update the Tor client to 0.4.8.9, which fixes the TROVE-2023-006 vulnerability.

    The details of TROVE-2023-006 haven't been disclosed by the Tor Project to leave time for users to upgrade before revealing more. We only know that the Tor Project describes TROVE-2023-006 as a "remote triggerable assert on onion services".

    Our team thinks that this vulnerability could affect Tails users who are creating onion services from their Tails, for example when sharing files or publishing a website using OnionShare.

    This vulnerability might allow an attacker who already knows your OnionShare address to make your Tor client crash. A powerful attacker might be able to further exploit this crash to reveal your IP address.

    This analysis is only a hypothesis because our team doesn't have access to more details about this vulnerability. Still, we are releasing this emergency release as a precaution.

    OnionShare is the only application included in Tails that creates onion services. You are not affected by this vulnerability if you don't use OnionShare in Tails, only use Tails to connect to onion services, and don't create onion services using Additional Software.

    More details about TROVE-2023-006 will be available on the Tor issue #40883 sometime after the release.

Fixed problems

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 5.19.1

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 5.0 or later to 5.19.1.

    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.19.1 directly:

15 November, 2023 02:00PM

hackergotchi for Maemo developers

Maemo developers

stb_image_resize2.h – performance

Recently there was a large rework of the STB single-file image_resize library (STBIR), bumping it to 2.0. While v1 was really slow and only usable if you just needed to get some code running quickly, the 2.0 rewrite claims to be more mindful of performance by using SIMD. So let's put it to the test.

As references, I chose the moderately optimized C-only implementation in Ogre3D and the highly optimized SIMD implementation in OpenCV.

Below you find the time to scale a 1024x1024 px byte image to 512x512 px. All libraries were set to linear interpolation. The time is the accumulated time for 200 runs.

                 RGB       RGBA
Ogre3D 14.1.2    660 ms    668 ms
STBIR 2.0        1632 ms   690 ms
OpenCV 4.8       245 ms    254 ms

For the RGBA test, STBIR was set to the STBIR_4CHANNEL pixel layout. All libraries were compiled with -O2 -msse. Additionally, OpenCV could dispatch AVX2 code. Enabling AVX2 with STBIR actually decreased performance.
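For reference, a build along these lines might look as follows; bench.cpp and the OpenCV pkg-config module name are my assumptions rather than details from the post, and Ogre linkage is omitted:

# -O2 -msse as used in the benchmark; OpenCV linked via pkg-config
g++ -O2 -msse bench.cpp -o bench $(pkg-config --cflags --libs opencv4)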

Note that while STBIR has no performance advantage over a C only implementation for the simple resizing case, it offers some neat features if you want to handle SRGB data or non-premultiplied alpha.


15 November, 2023 01:50PM by Pavel Rojtberg (pavel@rojtberg.net)

hackergotchi for VyOS

VyOS

VyOS LTS Support Policy

Hello, Community!

Now that the 1.3.4 LTS release is out and the upcoming 1.4.0 release is on its path to the first early production access release, we'd like to clarify the support policy for our LTS releases to help our support customers plan their updates. In short, the three latest minor versions in each LTS release line are supported unconditionally, but if your version is older, we will likely ask you to upgrade before we can take further steps to help you with your issues.

15 November, 2023 05:20AM by Yuriy Andamasov (yuriy@sentrium.io)

hackergotchi for Qubes

Qubes

QSB-097: "Reptar" Intel redundant prefix vulnerability

We have published Qubes Security Bulletin 097: “Reptar” Intel redundant prefix vulnerability. The text of this QSB and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this QSB, please see the end of this announcement.

Qubes Security Bulletin 097


             ---===[ Qubes Security Bulletin 097 ]===---

                              2023-11-14

            "Reptar" Intel redundant prefix vulnerability
                   (CVE-2023-23583, INTEL-SA-00950)

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

On 2023-11-14, Intel published INTEL-SA-00950, "2023.4 IPU Out-of-Band
(OOB) - Intel® Processor Advisory" [3] accompanied by advisory guidance
[4] that states:

| Under certain microarchitectural conditions, Intel has identified
| cases where execution of an instruction (REP MOVSB) encoded with a
| redundant REX prefix may result in unpredictable system behavior
| resulting in a system crash/hang, or, in some limited scenarios, may
| allow escalation of privilege (EoP) from CPL3 to CPL0.

This vulnerability has been assigned CVE-2023-23583. [5]

Impact
-------

On affected systems, a qube running in PV mode can attempt to exploit
this vulnerability in order to escalate its privileges to those of dom0.
In the default Qubes OS configuration, the stubdomains for sys-net and
sys-usb run in PV mode. (Dom0 also runs in PV mode, but it is fully
trusted.)

In addition, any qube can attempt to exploit this vulnerability in order
to crash the system, resulting in a denial of service (DoS).

Tavis Ormandy's write-up [6] suggests that disabling hyper-threading
(which Qubes OS does by default) might reduce the impact to that of a
denial-of-service attack, but we cannot completely rule out the
possibility of privilege escalation even with hyper-threading disabled.

Affected systems
-----------------

Only systems with Intel processors are affected, specifically:

- 10th generation Core and newer processors
- Certain server processors

According to Intel, some recent processor families already have
mitigations. For details, see the tables of affected products in
INTEL-SA-00950. [3]

Patching
---------

The following packages contain security updates that address the
vulnerabilities described in this bulletin:

  For Qubes 4.1, in dom0:
  - microcode_ctl, version 2.1-56.qubes1

  For Qubes 4.2, in dom0:
  - microcode_ctl, version 2.1-56.qubes1

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
initramfs binaries.

Credits
--------

See the Intel security advisory. [3]

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00950.html
[4] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/redundant-prefix-issue.html
[5] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-23583
[6] https://lock.cmpxchg8b.com/reptar.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-097-2023.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmVUDxMACgkQ1lWk8hgw
4Grl2RAAizymbIDjwL+Al/FkxW6DqJvygG/7aborDLcSgDtE6X4jC2b/drCkI++m
gAnY8xr7D4l+g5/c+G0va8iH6Q3wKm9r9IUyXBB/SlbdziSR2hrgGOPZ7V0mg2zU
60G2lZhwFV0hqEiFLA9TDfD4sL61sP52Jtilvg0n1JjOp2JqQXn0m61+T5m6TdU1
c3N/ajN3H09Fi3x0HNKs2569uK8kI5IQDHG8WQStD2Sr8XnK6M/KOj9u+UxNoYvv
6j7eXlUCjkBWx7yzP1/uP1OuIG589tzRUTAwm+JK0kpYu0hzGgBji+C8vVG4BzFk
+aQJ08UFOO3DCmBhg/swjDoMBmXeEG4ld0uhDrkzwdOP4Wf+mCWGThn8GFNhcLpt
s42vP/KCFIs4xFTcfhjYhpBU3Eym/8b/+64BQoUFS7Pj0kjBdlPADwEkw8cU/Isn
CjescoYD1Irxh9+hm+SRDQq4cJv+h6zQQNznIN1tHiyr+oIeqXzKxV3Q/zxUXmc3
VlbjR5vIPGG7iUjKIPfeZ9fDIVPtt1PiHoQkUX1hNw5+zJ+QUTVHtoUehWif4gL7
RGn1l3IAhLkYrhdrz8iY+YEMe+0/XjkpcpsHoIyhPwhUH4OMBiPkt0zG8LKmnjm0
oDVFYd5d+Dv/kA9A1wjtS9B2r+ydxR5voV+1ke//Fe/JHorDfkE=
=jmq4
-----END PGP SIGNATURE-----

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-097-2023.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmVUvlYACgkQSsGN4REu
FJDMDhAArxzeyHKiTkT/pFcgdgcPxzBahlEZtRH6dmvqs7TdiAh+99RnUpzL8sJH
FZnUg0DYXNTvuhCQ0mJtEb1wzpg4z65FuyJ1NXLzv9qRtmaQeEh7kKX96z6p1Ybt
A2o64GmwX1RFL5tpEhwnCgZ9OTlo2Y2eHq/ra8Y+LBTsFAFN5mhVj7+ElvXnVIQ9
uLUyaH6p+aUPyyoI8zYRYt8fPSJuA+fhYFk97AYYL2LA9ZTTD7QirvUgfeJBQ3PR
XozmcEpKPJb0TpDsOB11muE0C0H9Wdz/artWgqtojqFQ0hiJIEPoKQq9pBqYUJo6
33qVQFvX5pmj2DDx8FEgtt5UuJ+AEtlI3Rh8mSkk49tqwAh1Tg4M2vaECog2HHPO
nqp8jeYulAJn64VMlk4lO3vNMdeWuY3yfJHPszKLvIh2v0IbdmN2r4rooz96vSQ0
FG6MMcdCAr7AVzYGaEoQ6a2LZtaIwvN6DFc99ry04wukWikNfCxXGi1i2F045oFq
lgfB6N0ZdUaLPoghJrhuCQbBBIyYjzBiK0L5P565jhI45xHf9sgYbWtHUAeZEXh0
jsSqgNYWerwO7dz2UKZaDaJCf0KaAan+HEWdcsmAPBKwFZw5yL19Ot6AZCPsFdCc
lY+yvMSxpFZkVZlX31QEuCw/ICuubk92JqTJMw44EmenLqImCmI=
=LP5Z
-----END PGP SIGNATURE-----

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-097-2023.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

Why should I care about QSBs?

QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.

What are the PGP signatures that accompany QSBs?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.

Why should I care whether a QSB is authentic?

A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a QSB?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

    The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (QSB-097), the commands are:

$ gpg --verify qsb-097-2023.txt.sig.marmarek qsb-097-2023.txt
$ gpg --verify qsb-097-2023.txt.sig.simon qsb-097-2023.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-097 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
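
If you go that route, the workflow looks roughly like this (a minimal sketch; nano is just an example editor, and the filenames are your choice as long as the same names are passed to gpg --verify):

$ nano qsb-097-2023.txt                # paste the QSB text, then save
$ nano qsb-097-2023.txt.sig.marmarek   # paste the first signature block, then save
$ nano qsb-097-2023.txt.sig.simon      # paste the second signature block, then save
$ gpg --verify qsb-097-2023.txt.sig.marmarek qsb-097-2023.txt
$ gpg --verify qsb-097-2023.txt.sig.simon qsb-097-2023.txt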

15 November, 2023 12:00AM

November 14, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: LXQt 1.4, Arriving at a Lubuntu Backports PPA Near You

The Lubuntu Team is happy to announce that the Lubuntu Backports PPA with LXQt 1.4 is now available for general use. You can find details on enabling it below. What is the Lubuntu Backports PPA? Our Backports PPA is modeled after Kubuntu's. It exists to provide the latest LXQt desktop stack on top of a […]

14 November, 2023 10:20PM

hackergotchi for Purism PureOS

Purism PureOS

No Terms of Use Required!

Think back to your last cellphone setup. One of the first things you do is accept a bunch of End User License Agreements. It begs the question, why do you need terms of use for a device you purchased? Shouldn’t you own and control what is done with your device, not the device manufacturer and […]

The post No Terms of Use Required! appeared first on Purism.

14 November, 2023 04:52PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Bringing automation to telco edge clouds at scale

Canonical and Spectro Cloud have collaborated to develop an effective telco edge cloud solution, Cloud Native Execution Platform (CNEP). CNEP is built with Canonical’s open source infrastructure solutions and Spectro Cloud’s Palette containers-as-a-service (CaaS) platform. This technology stack empowers operators to benefit from the cost optimisation and agility improvements delivered by edge clouds in a highly secure and performant way.


Through a single pane of glass provided by Spectro Cloud Palette, operators can deploy, configure and manage all their telco edge clouds centrally, taking full advantage of Canonical’s infrastructure technology. The joint solution brings automation to deployment and maintenance operations at scale and enables fully cloud-native telco edge clouds.

Telco edge clouds

With the softwarisation of network services and the adoption of cloud computing in the telco sector, the architecture of mobile networks has evolved significantly. Modern telecom networks are no longer run by all-in-one systems deployed at a central location. Instead, operators can scale their systems and offer their services closer to users, thanks to highly scalable, distributed and cloud-native architectures.

Telco operators increasingly deploy cloud computing systems at the edge of their networks, often referred to as edge clouds. According to the IDC spending guide forecast published in February 2023, service providers will invest more than $44 billion in enabling edge offerings in 2023. This trend has emerged due to the change in infrastructure architecture and the evolution of mobile networking software, which is now based on components that run in containers as microservices.

Edge computing is predicted to grow even further, as the technology has brought efficiency, flexibility and scalability to the deployment and operation of telecom systems. STL Partners’ revenue forecast predicts $445bn in global demand for edge computing services in 2030.

Five key requirements for edge cloud success in telco 

To unlock the benefits of cloud computing, operators need an effective infrastructure stack to host cloud-native software on edge clouds. Telco deployments are highly demanding, and so a suitable infrastructure stack should satisfy these five key requirements: 

Autonomous operations

It is critical to minimise operational maintenance for edge clouds. These clouds are large in number, and it is costly to maintain systems manually, especially when they are deployed close to radio equipment where it is impractical for administrators to visit deployment sites physically. The solution is to ensure that edge clouds can be operated in an autonomous manner.

Secure

Telco networks are part of our critical infrastructure, carrying sensitive user data. Systems must comply with all necessary security standards and have hardening measures to safeguard user information.

Minimal but variable in size

A minimal footprint is one of the defining characteristics of an edge cloud. A few server hardware nodes may be all that is needed to set up a small cloud that would run a number of cell sites. That being said, there is no one-size-fits-all solution – requirements may change based on what an operator intends to run on its edge network. Therefore, infrastructure must be able to scale as and when needed.

Energy efficient

A telco operator typically runs a large number of sites for its radio networks. Even a 2% reduction in energy consumption translates to significant cost savings. This means that the ideal edge cloud solution must be optimised at every layer of its stack and have features that support running and operating only what is needed with no extras. It should also support advanced hardware and software features to reduce power consumption.

Highly performant

Telco networks must deliver user data quickly and reliably – service quality and reliability depend on it. Solutions at the telco edge must support the latest technology and enhanced features that enable faster delivery of information at every layer of the hardware and software stack.

Challenges

Edge clouds need a software stack that is built with multiple virtualisation technologies, which makes it challenging to integrate and set up a fully functional system. Addressing the five requirements mentioned above with modern open source cloud technologies is a complex task. Despite the clear benefits those technologies bring, there are still gaps to fill. Canonical and Spectro Cloud worked together to fill these gaps and make these open source technologies easier to use and telco-grade.

Maintaining updates and upgrades in a cloud system is of paramount importance for smooth system operation while ensuring system integrity and security. However, a typical distributed telecom system deployment has many edge sites each running a virtualisation infrastructure. Furthermore, both the virtualisation software and the application workloads that run on a cloud environment have a large set of dependencies. Given this scale and complexity, it is simply not feasible to manually perform updates and upgrades to maintain these systems.

Besides updates and upgrades, operational procedures such as deployment, scaling and runtime maintenance are highly repetitive across all telco edge cloud sites. Without a scalable system, it is not possible to operate telco edge infrastructure in a cost-efficient way.

Automating telco edge clouds at scale

Cloud Native Execution Platform (CNEP), the joint solution from Canonical and Spectro Cloud, addresses the five key requirements for successful edge clouds, helping operators deploy and maintain distributed telco cloud infrastructure. It offers a software stack that is efficient, secure, performant and modular.

The technology stack

The solution stack is tailored to the needs of telco edge clouds, from bare metal to containers. It consists of Canonical’s Metal-as-a-Service (MAAS) and MicroK8s, which together deliver the bare metal performance and orchestration required by the telecom sector while enabling the flexibility and agility of cloud native environments. Integrated with Spectro Cloud’s Palette, the solution automates deployment of Canonical’s cloud native edge stack at scale across multiple edge sites.


Cloud Native Execution Platform (CNEP)

Platform features

The resulting solution, named Cloud Native Execution Platform (CNEP), simplifies onboarding, deployment and management of MicroK8s clusters. MicroK8s is a lightweight, zero-ops, purely upstream, CNCF-certified Kubernetes distribution by Canonical, with high availability, automatic updates and streamlined upgrades. It is the container orchestrator in CNEP, tailored for telco edge clouds, with optimised performance, scalability, reliability, power efficiency and security.
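
To give a sense of how lightweight this is, a single MicroK8s node can be stood up with a few commands (a generic sketch of standard MicroK8s usage, not a CNEP-specific procedure):

sudo snap install microk8s --classic   # install the Kubernetes distribution as a snap
microk8s status --wait-ready           # block until the node reports ready
microk8s enable dns                    # add-ons are opt-in, keeping the footprint minimal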

CNEP offers an array of features that make it ideally suited to telco use cases.

Multi-site automation

CNEP provides multi-site control, observability, governance and orchestration with zero-downtime upgrades. Through Spectro Cloud Palette, operators can seamlessly deploy, configure and manage all their telco edge clouds from a central location.


Palette not only manages bare metal automation and provisioning with MAAS but also achieves deployment and management of MicroK8s clusters, all through Cluster API (CAPI). It gives operators rich and fine-grained control over their Day 2 operations, such as patching and configuration changes. The platform also provides full observability and role based access control (RBAC) capabilities.

Repeatable deployments

In CNEP, operators can achieve repeatable and reliable MicroK8s cluster deployments with automation at scale using Palette across multiple geographical sites. With Palette, CNEP achieves decentralised policy enforcement and self-healing for autonomy and resilience at scale. This provides operators with a consistent end-to-end declarative management experience.


Self-healing by Palette in CNEP is achieved by continuously monitoring the state of the deployed MicroK8s cluster at each site and comparing it against the desired cluster state. Any deviation between the two states is addressed by bringing the cluster to the desired state based on policies.

Cloud native, reliable and software defined

CNEP is cloud native and reliable for containerised workloads. MicroK8s supports Cluster API to meet the complex needs of highly distributed edge node onboarding, secure deployment and substrate provisioning. It also supports all popular container networking interfaces (CNIs), including Cilium, Calico and Flannel, as well as Kube-OVN as a CNI for software defined networking.

For management and control of object, block and file storage, MicroK8s integrates with Canonical Charmed Ceph, which is a flexible software-defined storage controller solution. CNEP provides support for these CNIs and Charmed Ceph out of the box.

Automated hardware at scale

Bare metal hardware provisioning with MAAS enables operators to automate their edge hardware infrastructure, and gain visibility and control over their hardware resources. This provides agility in system deployment with full automation in configuration and operating system deployment. 

MAAS supports CAPI to enable hardware automation operations while deploying and managing MicroK8s clusters. With Palette, CNEP achieves bare metal automation at scale across multiple edge cloud sites through MAAS CAPI.
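
For illustration, day-to-day MAAS usage from the command line looks like the following (a sketch of the standard MAAS client; the "admin" profile name is an assumption and presumes a prior "maas login", and $SYSTEM_ID is a placeholder for a machine’s identifier):

sudo snap install maas                            # install MAAS
maas admin machines read | jq -r '.[].hostname'   # list machines known to MAAS
maas admin machine deploy $SYSTEM_ID              # provision an OS onto a specific node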

Secure and compliant

Ubuntu Pro provides security compliance, hardening and auditing, as well as support, for the edge cloud infrastructure as a whole and for the cloud native telco workloads running in containers. It provides security patches, hardening profiles, standards compliance and automated CVE patches for an extensive set of open source packages (over 23,000). CNEP supports multiple security standards; for instance, both Ubuntu Pro and Palette have conformance to FIPS 140-2.
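
On an individual node, these services are toggled through the standard Ubuntu Pro client (a sketch; the token is a placeholder for your subscription token, and FIPS is available on LTS releases and requires a reboot):

sudo pro attach <your-token>   # attach the machine to an Ubuntu Pro subscription
sudo pro enable esm-apps       # expanded security maintenance for universe packages
sudo pro enable fips           # FIPS 140-2 certified cryptographic modules
pro status                     # confirm which services are active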


As CNEP’s container orchestrator, MicroK8s security is mission-critical, and our solution ensures that it is safeguarded. In addition to the security features of Ubuntu Pro, MicroK8s runs in a snap, which is a confined execution environment, effectively isolating it from changes in the host system and other software running on the host. This provides a sandbox environment and protects the container orchestration environment from external threats.

The attack surface is reduced as much as possible to minimise entry points to the platform and protect it from malicious attempts. This is achieved by the opinionated design of MicroK8s, chiselled container images and Ubuntu Core.

MicroK8s has a minimal footprint that includes all necessary components but nothing extra. It is easily extensible with its modular structure as needed. Similarly, chiselled container images include only the packages needed to execute your business applications, without any additional operating system packages or libraries. In constrained environments, Ubuntu has a minimal flavour – Ubuntu Core. This provides operators with an immutable operational environment where the system runs on containerised snaps. 

Besides the security features provided by Canonical’s telco edge cloud stack at each telco site, Spectro Cloud Palette brings additional security capabilities to CNEP. This includes native security scanning for the full deployment stack, conformance scans, and penetration testing. Palette provides further patching and monitoring capabilities, along with role based access control offered as part of CNEP.

Performant

CNEP is highly performant across the telco infrastructure stack.

At the container orchestration level, MicroK8s supports the latest enhanced platform features that streamline packet delivery between containerised applications and external services. It supports technologies such as GPU acceleration and CPU-pinning.

At the operating system level, Ubuntu Pro brings real-time compute capabilities that meet the stringent requirements of delay-sensitive telco applications and the networking stack. This enables low latency and ultra-reliable communications, which means applications can communicate with users and devices with the fastest possible performance at the OS level.

CNEP runs on bare metal hardware, which makes it ideal for efficiency at the telco edge. Automatic updates provided by Ubuntu Pro’s kernel Livepatch service give telco workloads and the networking stack an uninterrupted environment.
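
Both kernel-level capabilities are managed through the same Ubuntu Pro client (a sketch of standard pro usage; each service is enabled per node, and the real-time kernel requires a reboot into the new kernel):

sudo pro enable realtime-kernel   # real-time compute for delay-sensitive workloads
sudo pro enable livepatch         # apply kernel security fixes without rebooting
canonical-livepatch status        # verify that livepatching is active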

Cost-efficient

CNEP is designed to be efficient with minimal energy consumption at the telco edge. 

MicroK8s is modular and extensible as necessary; it comes with a sensible set of default modules in place. This enables MicroK8s to make the best possible use of system resources.

Ubuntu Core has the same properties. It is minimal, with services running as snaps, providing a small footprint that consumes far fewer resources without sacrificing performance.

MAAS enables significant cost reductions in two ways thanks to its hardware automation capabilities. On one hand, MAAS automates OS provisioning and software deployment on bare metal hardware, reducing operational costs and human error. On the other hand, system administrators can optimise hardware utilisation based on workload conditions managed by MAAS.

Those automation features are augmented by the multi-site automation capabilities brought by Palette. CNEP achieves cost savings in terms of simplified deployment and management of the edge infrastructure, as engineers no longer need to physically visit deployment sites.

Summary

We are proud to be working alongside Spectro Cloud to introduce CNEP to the market. Powered by Canonical’s industry-leading open source infrastructure solutions, and with automation provided by Palette, CNEP can seamlessly scale across multi-site distributed infrastructure. It is ideal for cloud native telco workloads, edge computing business applications, and mobile networking stacks such as Open RAN CU/DU/RU and the distributed 5G user plane. The solution is secure by design thanks to Ubuntu Pro, and highly efficient with support for the real-time kernel and other enhanced platform features.

Get in touch 


Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at ubuntu.com/telco or get in touch.

Learn more

Reducing latency at telco edge clouds with Ubuntu real-time kernel

Safeguarding your telco infrastructure with Ubuntu Pro

How to build carrier-grade infrastructure using enterprise open source solutions

On-demand webinar: Kubernetes on bare metal: ready for prime time!

14 November, 2023 03:01PM

Ubuntu Blog: Netplan brings consistent network configuration across Desktop, Server, Cloud and IoT


We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool for configuring Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. Ubuntu 23.10 closes this disparity in how the network stack is controlled on different Ubuntu platforms by integrating NetworkManager with the underlying Netplan stack.

Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.

Netplan Desktop integration

On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core and is now enabled by default on Ubuntu 23.10 Desktop.

Migration of existing connection profiles

On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/. 

The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:

Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
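
Once migrated, the merged configuration can be inspected through Netplan’s CLI. The shape of the output below is illustrative only: the exact keys depend on the profile, and the NM-<uuid> netdef name follows Netplan’s naming convention for connections owned by NetworkManager.

$ sudo netplan get
network:
  version: 2
  wifis:
    NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed:
      renderer: NetworkManager
      ...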

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.

The future of Netplan

Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.

Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, which will come with certain guarantees on API and ABI stability so that other distributions and third-party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi, India, to evaluate possible synergies.

Conclusion

Netplan can be used transparently to control a workstation’s network configuration and plays hand-in-hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring using common graphical interfaces and provides a “single source of truth” for network administrators, enabling configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the Access Desktop NetworkManager settings through Netplan tutorial.


If you want to learn more, follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

14 November, 2023 01:12PM

hackergotchi for Purism PureOS

Purism PureOS

Intel AX200 Wi-Fi/Bluetooth Shipping for New Orders

New orders of Librem 14 and Librem Mini v2 are now shipping with Intel AX200 Wi-Fi and Bluetooth cards, replacing the Qualcomm Atheros AR9xxx series. Like the Librem 11’s integrated AX201, the device firmware is provided by the PureBoot Firmware Blob Jail. We made this decision with care and to promote customer control over their […]

The post Intel AX200 Wi-Fi/Bluetooth Shipping for New Orders appeared first on Purism.

14 November, 2023 01:00PM by Jonathon Hall

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Join Canonical at Open Source Experience Paris 2023

Meet Canonical in Paris at OSXP 2023

Date: 6-7 December, 2023

Location: Palais des congrès – Paris, France

Booth: Booth 26

Canonical is excited to attend Open Source Experience (OSXP) 2023, the annual event dedicated to the open source ecosystem – something that speaks directly to our hearts. 

This year the conference has six themes, three of which we will cover in talks across both days of the event. Our experts will be there to address enterprise-grade open source solutions spanning:

  • Technologies (Languages, tools, Cloud, DevOps, Infrastructure, Cyber, Web)
  • AI, Machine Learning, Data
  • Embedded, IoT, Open Hardware & Industry


What to expect from Canonical at OSXP 2023

96% of new software projects rely on open source components.*

Open source is on the rise in the modern enterprise landscape, and as the publisher of Ubuntu, Canonical is at the forefront of this transformation, leading the way in open source security, support and services. 

At OSXP, our team will be sharing their open source expertise across AI, machine learning, cloud and IoT, with demos ranging from a conversational assistant reminiscent of ChatGPT to a wide range of use cases for Ubuntu Core in the realm of industry & IoT.

Join our talks

We’ll be presenting three talks over the course of the conference, each in a different format. 

[Session] How to build LLMs with open source

Join our very own Rob Gibbon, who will present an informative session on automating Large Language Models and developing chat-based and multimodal model assistants in “How to build LLMs with open source”. 

Expect some insightful demonstrations using open source projects such as Spark, Kubeflow, MLFlow and OpenSearch, as well as LLM-specific open models and tools like Falcon, TRL, OpenAssistant and PEFT.

[Panel] Security on Embedded Systems

Our IoT expert and field engineer, Jean-Charles Verdié, will participate in a panel discussion around security on embedded systems. 

Having created Ubuntu Core as the operating system optimised for IoT and Edge, we know the value of a secure, application-centric, tamper-resistant and hardened environment. So join us to learn more about securing your own IoT ecosystem.

[Workshop] Linux, Openstack, Kubernetes & Apps: How to secure open source 

The biggest question around open source is always its security. What if we told you that you can have an enterprise-grade, secure full stack without needing to stay up at night worrying about security breaches?

In this workshop, Canonical’s Reg Deraed, Continental Europe Sales Director, will take you through modernising your cloud infrastructure with open source and ensuring its security through various scenarios.

If you’re interested in attending OSXP, create your free badge here and join us at booth 26, or book a meeting with our experts.



*96% of new software projects rely on open source components – source: https://www.synopsys.com/software-integrity/resources/analyst-reports/open-source-security-risk-analysis.html

14 November, 2023 09:32AM

hackergotchi for Qubes

Qubes

XSAs released on 2023-11-14

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected by at least one of these XSAs.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • XSA-446
    • See QSB-096 (published the same day) for details.

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-445
    • Qubes OS uses only “basic” quarantine mode.

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

14 November, 2023 12:00AM

QSB-096: BTC/SRSO fixes not fully effective (XSA-446)

We have published Qubes Security Bulletin 096: BTC/SRSO fixes not fully effective (XSA-446). The text of this QSB and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this QSB, please see the end of this announcement.

Qubes Security Bulletin 096


             ---===[ Qubes Security Bulletin 096 ]===---

                              2023-11-14

             BTC/SRSO fixes not fully effective (XSA-446)

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

On 2023-11-14, the Xen Project published XSA-446, "x86: BTC/SRSO fixes
not fully effective" [3]:

| The fixes for XSA-422 (Branch Type Confusion) and XSA-434 (Speculative
| Return Stack Overflow) are not IRQ-safe.  It was believed that the
| mitigations always operated in contexts with IRQs disabled.
|
| However, the original XSA-254 fix for Meltdown (XPTI) deliberately
| left interrupts enabled on two entry paths; one unconditionally, and
| one conditionally on whether XPTI was active.
|
| As BTC/SRSO and Meltdown affect different CPU vendors, the mitigations
| are not active together by default.  Therefore, there is a race
| condition whereby a malicious PV guest can bypass BTC/SRSO protections
| and launch a BTC/SRSO attack against Xen.

Impact
-------

The impact is the same as it was in QSB-086 [4]:

| On Qubes OS installations with affected CPUs, a VM running in PV mode
| may be capable of inferring the memory contents of other running VMs,
| including dom0. In the default Qubes OS configuration, only the
| stubdomains for HVMs are in a position to exploit this vulnerability
| in order to attack other VMs. (Dom0 also runs in PV mode, but it is
| fully trusted.)

Affected systems
-----------------

Only x86 AMD and Hygon systems are vulnerable.

Patching
---------

The following packages contain security updates that address the
vulnerabilities described in this bulletin:

  For Qubes 4.1, in dom0:
  - Xen packages, version 4.14.6-4

  For Qubes 4.2, in dom0:
  - Xen packages, version 4.17.2-5

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.

Credits
--------

See the original Xen Security Advisory.

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-446.html
[4] https://www.qubes-os.org/news/2022/11/08/qsb-086/

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-096-2023.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmVTb1UACgkQ1lWk8hgw
4Gq4Ww//UOcTUI/oCJx62nakaxcXZDhYsTBpuQasjp2WFuo7Yz3mYXgAp8EkSvMZ
IdMfeMsbcF2D/UNZ7XxZ+F7yUVeffyqDpTPuBoFehlHokI8Vt0MOLEbipxd4DaoI
LB2lFR9o3OYw0TQvUEmqJH/qkvTZMzeGe7l+OAVcwkjSwNzCvObj3wtR50QbMjWk
Fd11X7YfPRjkeDNCYJ3sFMg30AgGUnp/Ed+CkMFMpfGNPZk2bzNyjY16ALscgVwq
61/rfdP+WgrAy4JqdQXngk1i1s7EKRhzibB7/jqlhQ4Vn6FCtTvd260hTjHNaVRd
75OxTaQGkFCtBxToLK3c56umk10Iyn7ckffSDhFcEaJviXsjO7CnGVrlnJDvHazx
Vs1S5G+3hiLo205XSozrmR8bmNopEuOVa4zAshCh6v/Gl4Zmn8Za5dkS6xEBr0jL
TWEi8/G9ZD5g3Iz+4GL9T4n9aTCAC92hcp7qVNytRO5RD8HE3B2JDOQvJ5bNpZQB
a2H5tIWlilgfY2HVndfezafF4D6CV3F5SZ9ZpRlLGw6soYfIFSaj2pmhTlqPHjPA
8Z6RoQ4PJ+ErWL7vRjmyz1Jp5x2lhUxis94+p1of4NkipK6/jVRMXxIe4bQ2r96o
aWcShpnLI3wW5WeMwQf5uNAmYmwxmqhZYQA6LK4EJg6NN5P2Xz8=
=nzwT
-----END PGP SIGNATURE-----

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-096-2023.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmVTNYAACgkQSsGN4REu
FJBLqw/8Cq16ZBx7TJnlF4c1NnqJ3EBUf9VNl8/poZEvUKTVV/68UjJa8uEvD7n+
PHgNko7yrNXIRPAS7zLs4w0gtd8vN7COQetARS8QX2YSLoNb+564ElNm8VXxOdKx
eA6xzNUww6javbyoSCPqwMGia2FkAjcjxY5L4CFCjXx+5uS+Qom9+MaTVskroHAZ
dXE37TILGDsTTBQZnX/phPxUZTQK7gIbKwFlrOSmj0QVslHF2/dRNpM3wmEQqhzj
g/vOuA+TO86mW+rrYOHCIJFarAkrOtJQ7RDDNUP2P9hgOxg8qiCchdEMJPiC+uMW
xpZ+XSzjlJSXVlSdg/4Q0ZQ2SxK1he+mTkRBxgjxYfam1JAiTWLDVMxWjH7zvyeH
Ymk92gIfJRcTrdELCY7K6KCBk+y7dIfm47UmyeSmGtRSuhqFvJUvE9GZl2BPUd7f
cejkJCegJ6lxAWxQmzQ0lOq9j0PaqqbxOcIyq+DBaMfDWExKeOvmzajHFKWPUBqs
TpYGSqvVzhkstgqid5t0vR8WbH1GpHRSUo6ytEREkIJ6KShFGeknju60EiRNDEcA
SvFIp01pjzO/q3px3ywNheZ7zNP/B3wr/P2e7U1mxg0sBy4QMQxqsUtscsJZMpZe
hl1XBjqYLpeppAq9kohZUV1H++q1WMxUJ72PXEiX2DRT8RxLl5w=
=ANw6
-----END PGP SIGNATURE-----

Source: https://github.com/QubesOS/qubes-secpack/blob/main/QSBs/qsb-096-2023.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

The remaining questions and answers (“Why should I care about QSBs?” through “How do I verify the PGP signatures on a QSB?”), including the step-by-step signature-verification instructions, are identical to those reproduced in the QSB-097 announcement above.

For this announcement (QSB-096), the commands are:

$ gpg --verify qsb-096-2023.txt.sig.marmarek qsb-096-2023.txt
$ gpg --verify qsb-096-2023.txt.sig.simon qsb-096-2023.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-096 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

14 November, 2023 12:00AM

November 13, 2023

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 813

Welcome to the Ubuntu Weekly Newsletter, Issue 813 for the week of November 5 – 11, 2023. The full version of this issue is available here.

In this issue we cover:

  • Responses Needed: Flavor Participation for 24.04 LTS
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu India Local Community (Ubuntu-In)
  • LoCo Events
  • Working with the new Mir “graphics platform” APIs
  • Rockcraft 1.0.0
  • Debconf 23 Video: Adulting
  • Ubuntu 23.10 InstallFest @University of Macedonia
  • Ubuntu Summit 2023 was a success
  • qqc2-breeze5-style 6 Alpha
  • Oxygen Icons 6 Alpha Released
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • In Other News
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for 20.04, 22.04, 23.04 and 23.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


13 November, 2023 08:58PM by guiverc

hackergotchi for SparkyLinux

SparkyLinux

Sparky Backlight 0.2

sparky backlight 0.2.0

Sparky Backlight has been upgraded to v0.2.0.

What is Sparky Backlight?

Sparky Backlight is a small tool which lets you change the desktop brightness via the panel’s tray icon. It is targeted at minimal window managers such as JWM or Openbox, but can be used on any desktop.

Changes
– now uses xrandr instead of xbacklight (see the sketch after this list)
– autodetects and uses your primary monitor
– lets you dim your monitor by -20%, -40%, -60% or -80%
– should work on any desktop environment or window manager with a panel tray running.
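
For the curious, the xrandr-based approach boils down to a call like the following (a sketch, not the tool’s actual code; note that xrandr adjusts brightness in software via gamma):

PRIMARY=$(xrandr | awk '/ connected primary /{print $1}')   # autodetect the primary monitor
xrandr --output "$PRIMARY" --brightness 0.8                 # dim by 20%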

Installation/upgrade (Sparky all):

sudo apt update
sudo apt install sparky-backlight

It launches automatically after your next login, or can be started manually via the command:
sparky-backlight

License: GNU GPL
Web: https://github.com/sparkylinux/sparky-backlight

 

13 November, 2023 03:40PM by pavroo