January 16, 2025

hackergotchi for Deepin

Deepin

(Chinese) Wine Development Series: How to Debug Wine

Sorry, this entry is only available in Chinese.

16 January, 2025 02:14AM by aida

January 15, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: A comprehensive guide to NIS2 Compliance: Part 1 – Understanding NIS2 and its scope

The EU NIS2 directive, which calls for strengthening cybersecurity across the European Union, is now active in all member states. Join me for this three-part blog series in which I’ll explain what NIS2 is, help you determine whether it applies to your company, and show how you can become NIS2 compliant.

In this first part, I’ll introduce NIS2, explain how it differs from its predecessor NIS, and cover its applicability, so that you can conclude whether it is relevant for your company.

Intro to NIS2

The EU Directive 2022/2555, or Network and Information Systems Directive (commonly known as NIS2 or EU NIS2 from here onwards), is a piece of EU legislation that applies to all European Union Member States, with the goal of achieving a high common level of cybersecurity. It updates the previous Network and Information Systems Directive (NIS or NIS1) from 2016 and mandates member states to adopt and rigorously enforce stricter cybersecurity requirements for entities providing critical services in the EU.

Unless your company is considered a small or micro entity (i.e. fewer than 50 employees and no more than 10 million euros in annual revenue) and does not operate in critical sectors (see table below), this article and the rest of the series is for you.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/f4ce/NIS-table.png" width="720" /> </noscript>

Table 1: A list of sectors under the scope of NIS2

EU NIS2 is a very broad and complex regulation, so in this post we’ll explore the specific applicability and requirements of NIS2 for organizations in more detail. 

Is it applicable to you? 

Generally speaking, EU NIS2 applies to all medium and large public and private entities that operate in critical sectors and provide their services or carry out activities in the EU market. Even if you don’t have an EU location, you are in scope if any of your customers are in the EU.

The EU NIS2 scope is covered in Annex I and Annex II of the Directive. Annex I lists the sectors of high criticality, and Annex II covers other sectors deemed critical (which would bring your company into scope as well). The table presented in the previous section (Table 1) gives you the list of sectors, but you must also combine it with the size capping table below (Table 2) to get a full picture of the applicability:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/3a02/NIS2-table-II.png" width="720" /> </noscript>

Table 2: The size classifications and capping of NIS2

*defined per the SME Recommendation for the EU

The EU NIS2 scoping puzzle can be generally solved using the two tables provided, but there are some considerations to be made:

  • Micro and small entities are not in scope regardless of their sector, with the exception of Qualified Trust Service Providers, TLD Name registries and DNS service providers. 
  • If you are already bound by another EU directive or a sector-specific directive/regulation, that legislation takes precedence (the lex specialis principle; for example, if you’re in scope for DORA, DORA takes precedence over NIS2).
  • What applies to you is always the member state’s national legislation (the transposition of the Directive) rather than the Directive itself.
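
For readers who think in code, the scoping rules above can be condensed into a toy decision function. This is an illustrative sketch of the logic described in this post, not legal advice; the function name, parameters, and simplified thresholds are mine, not the Directive’s.

```python
def nis2_in_scope(employees: int, revenue_m_eur: float,
                  in_critical_sector: bool,
                  special_category: bool = False) -> bool:
    """Toy sketch of the NIS2 scoping logic described above.

    special_category stands in for the exceptions that are in scope
    regardless of size (e.g. qualified trust service providers, TLD
    name registries, DNS service providers). Illustrative only.
    """
    is_small_or_micro = employees < 50 and revenue_m_eur <= 10
    if is_small_or_micro and not special_category:
        return False  # micro/small entities are generally out of scope
    return in_critical_sector or special_category

# A 200-person energy company operating in the EU is in scope:
print(nis2_in_scope(200, 50, in_critical_sector=True))   # True
# A 20-person bakery is not:
print(nis2_in_scope(20, 2, in_critical_sector=False))    # False
```

Real scoping decisions also depend on the member state transposition and the lex specialis exceptions noted above, so treat this only as a first-pass filter.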

A note about Essential and Important entities

Entities in scope of EU NIS2 can be further separated as Essential and Important entities depending on sector criticality and size. The requirements are the same for both types, with the main difference between Essential and Important entities being the level of supervision by Authorities. Essential entities are under proactive supervision, while Important entities are under reactive supervision (e.g. only after an incident happens).

What are the differences between NIS and NIS2? 

Technology and the digital market have evolved since the first EU NIS Directive was issued in 2016. NIS2 therefore builds upon its predecessor to address these changes and an evolved threat landscape. It also introduces several changes and improvements, such as:

  • Broader scope (7 sectors in NIS1 vs. 16 sectors in NIS2)
  • Additional obligations (minimum set of requirements)
  • Stricter requirements (e.g. smaller window for incident reporting)
  • Personal liability for the management body 
  • Increase in administrative fines

When does it start to apply?

The EU NIS2 Directive entered into force on January 16, 2023. EU Member States had until October 17, 2024 to transpose the regulation into national laws and start applying such laws as of October 18, 2024.

That concludes the first post of this series. I hope it helped you solve the scoping puzzle and conclude whether NIS2 is applicable to you. Stay tuned for the second post of the series, where I’ll break down the requirements and show how you can translate them into actions and controls in your company that will facilitate your journey towards compliance.

How Canonical can help you with NIS2 cybersecurity compliance

Canonical is committed to helping organizations become EU NIS2 compliant by delivering trusted open source that enables them to put security at the heart of their stack. Through Ubuntu Pro, our comprehensive security and support subscription, organizations can receive up to 12 years of expanded security maintenance for over 36,000 packages, wherever they use Ubuntu in their stack. Ubuntu Pro also includes patching automation and compliance auditing tools like Landscape and Livepatch, as well as access to compliance and hardening features.

Learn more about Ubuntu Pro by visiting our dedicated page, or get in touch with our team for a conversation about how we can help you meet your needs.

Further resources about EU regulations and Compliance

Thank you for reading! Below you will find more resources on EU Regulations and how to achieve security and compliance using an infrastructure hardening approach.

15 January, 2025 08:42PM

hackergotchi for GreenboneOS

GreenboneOS

Greenbone Expands Detection Coverage for Huawei Linux Distributions

We’re thrilled to announce significant enhancements to the Notus Scanner in both our Greenbone Enterprise and Community Edition vulnerability scanners. As a core component of the OpenVAS scanner, Notus Scanner is Greenbone’s module for performing Local Security Checks (LSC) and supports our industry-leading detection capabilities by detecting vulnerable software packages and libraries directly on endpoints. This vital tool has been integral in safeguarding software used on a wide array of popular Linux distributions, including Debian, Ubuntu, openSUSE, Red Hat Enterprise Linux (RHEL) and Fedora, among others.

Expanding Horizons with Huawei

In our efforts to expand our service scope and enhance security, Notus Scanner is now extending its capabilities to include Linux distributions maintained by Huawei. This update will introduce vulnerable package detection for Huawei’s Versatile Routing Platform (VRP) products, EulerOS, EulerOS Virtual, Huawei Cloud EulerOS (HCE) and openEuler. These additions are aimed at fortifying security measures for all organizations that implement Huawei technologies, thereby strengthening Greenbone’s protective reach.

For users of our Greenbone Enterprise appliances, we are pleased to introduce the inclusion of Huawei Cloud EulerOS (HCE) in our vulnerability feed, ensuring that enterprises utilizing cloud infrastructure can benefit from our expanded detection capabilities.

Community Edition Detection just Got Stronger

The Greenbone Community Edition now boasts enhanced detection capabilities with the integration of Huawei’s EulerOS, EulerOS Virtual and openEuler into the vulnerability feed. This update widens the scope of our free and open source security coverage, and ensures that a more diverse set of software products can be secured by our Community Edition users. This enhancement reaffirms our commitment to providing comprehensive security solutions and democratizing cybersecurity capabilities.

High Performance and Reliability with Rust

Earlier in 2024, we released a new, high-performance Rust-based version of the Notus Scanner. This advancement underscores our commitment to providing not only extensive coverage but also efficient and reliable scanning capabilities. Rust’s safety features significantly reduce the risk of common bugs, enhancing stability and performance in our scanning processes.

Local Security Checks Are Essential

Unlike network vulnerability tests that scan a host’s network attack surface (such as active services accessible via network interfaces), Local Security Checks examine the host’s attack surface at the software level. By scanning the software installed on the host directly for known vulnerabilities, they provide a thorough assessment of potential security risks.
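
At its core, a local security check pairs an inventory of installed packages with an advisory feed and flags any version older than the fixed release. The sketch below illustrates that idea with simplified dotted version numbers; real scanners like Notus use each distribution’s own version-comparison rules and advisory formats, and the function name here is mine.

```python
def local_security_check(installed: dict, advisories: dict) -> list:
    """Flag installed packages older than the advisory's fixed version.

    Versions here are plain dotted numbers for illustration only; real
    LSC implementations follow the distribution's comparison rules.
    """
    def key(version):
        return tuple(int(part) for part in version.split("."))

    findings = []
    for pkg, fixed in advisories.items():
        current = installed.get(pkg)
        if current is not None and key(current) < key(fixed):
            findings.append((pkg, current, fixed))
    return findings

# Example: an advisory says rsync is fixed in 3.4.0
print(local_security_check({"rsync": "3.2.7", "bash": "5.2.21"},
                           {"rsync": "3.4.0"}))
# -> [('rsync', '3.2.7', '3.4.0')]
```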

Summary

We at Greenbone are excited about the addition of updates for Huawei-maintained Linux distributions, and confident that they will provide our users with even better tools to protect their digital environments. The expansion of Notus Scanner’s capabilities is more than just a technical enhancement – it’s a commitment to our users’ security. By broadening our scanning spectrum to include more Linux distributions, we support more products and ensure better cybersecurity for all.

For more information on how these updates can benefit your organization, or to learn more about our full suite of security solutions, please contact our sales team directly, get a 14-day free trial of our entry-level Enterprise product, Greenbone Basic, or jump right in and order Greenbone Basic today. Let us help you strengthen your defenses and stay ahead of potential threats!

15 January, 2025 08:00AM by Greenbone AG

hackergotchi for Deepin

Deepin

RSYNC Vulnerability Announcement (Upgrade Patch Pushed)

At 02:25 Beijing time on January 15, 2025, security researcher Nick Tait reported six security vulnerabilities in rsync on the oss-security mailing list. The most severe of them allows attackers to execute arbitrary code on the server simply by having anonymous read access to the rsync server (such as a public mirror).

Vulnerability details:

  • CVE-2024-12084 (CVSS: 9.8): A heap buffer overflow in rsync due to improper handling of checksum lengths. When MAX_DIGEST_LEN exceeds the fixed SUM_LENGTH (16 bytes), attackers can perform out-of-bounds writes in the sum2 buffer. Affected versions: 3.2.7 to 3.4.0.
  • CVE-2024-12085 (CVSS: 7.5): In ...Read more

15 January, 2025 06:16AM by aida

January 14, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Rsync remote code execution and related vulnerability fixes available

Security researchers at Google (Pedro Gallegos, Simon Scannell, and Jasiel Spelman) discovered vulnerabilities in the rsync server and rsync client. The rsync server vulnerabilities (CVE-2024-12084 and CVE-2024-12085) ultimately allow remote code execution (RCE). The rsync client vulnerabilities allow a malicious server to read arbitrary files (CVE-2024-12086), create unsafe symlinks (CVE-2024-12087) and overwrite arbitrary files in certain circumstances (CVE-2024-12088).

During the coordinated vulnerability response of the above issues, a sixth vulnerability (CVE-2024-12747) which affects how the rsync server handles symlinks was reported by Aleksei Gorban.

Canonical’s security team has released updates of the rsync packages for all supported Ubuntu releases. The updates remediate CVE-2024-12084, CVE-2024-12085, CVE-2024-12086, CVE-2024-12087, CVE-2024-12088, and CVE-2024-12747. Information on the affected versions can be found in the CVE pages linked above.

How the exploits work

Google researchers discovered that the rsync server is vulnerable to a heap buffer overflow (CVE-2024-12084) and an information leak of uninitialized stack data (CVE-2024-12085). By combining the two vulnerabilities, a malicious client with anonymous read-access can defeat ASLR (address space layout randomization) and remotely execute arbitrary code on the rsync server machine. These vulnerabilities were introduced in rsync v3.2.7, so Ubuntu 20.04 LTS and earlier releases are not vulnerable to this attack chain.

Three additional vulnerabilities affect the rsync client. CVE-2024-12086 is a path traversal vulnerability which allows a malicious server to read any file the client process can access. CVE-2024-12087 allows a malicious server to bypass --safe-links and create unsafe symbolic links. CVE-2024-12088 is another path traversal vulnerability which allows a malicious server to overwrite arbitrary files on the client’s machine under certain circumstances.
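
The path-traversal flaws boil down to trusting file paths supplied by the remote server. A generic defensive pattern, sketched below in Python rather than rsync’s C, is to resolve every destination path and verify it stays inside the sync root. The function name is mine; this illustrates the class of check, not rsync’s actual fix.

```python
import os

def safe_destination(root: str, relpath: str) -> str:
    """Resolve relpath under root and reject anything that escapes it.

    A generic guard against path traversal -- illustrative sketch only,
    not rsync's actual patch.
    """
    resolved_root = os.path.realpath(root)
    dest = os.path.realpath(os.path.join(resolved_root, relpath))
    # commonpath collapses to a shorter prefix if dest escaped the root
    if os.path.commonpath([resolved_root, dest]) != resolved_root:
        raise ValueError(f"path escapes sync root: {relpath!r}")
    return dest

print(safe_destination("/tmp/sync", "docs/report.txt"))  # stays inside root
# safe_destination("/tmp/sync", "../../etc/passwd") would raise ValueError
```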

Aleksei Gorban discovered an additional vulnerability in the rsync server (CVE-2024-12747). In this case, rsync improperly handles symlinks during a race condition and can be used to leak sensitive information to a remote attacker.

Affected releases

Release            | Package Name | Fixed Version
Trusty (14.04 LTS) | rsync        | 3.1.0-2ubuntu0.4+esm1
Xenial (16.04 LTS) | rsync        | 3.1.1-3ubuntu1.3+esm3
Bionic (18.04 LTS) | rsync        | 3.1.2-2.1ubuntu1.6+esm1
Focal (20.04 LTS)  | rsync        | 3.1.3-8ubuntu0.8
Jammy (22.04 LTS)  | rsync        | 3.2.7-0ubuntu0.22.04.3
Noble (24.04 LTS)  | rsync        | 3.2.7-1ubuntu1.1
Oracular (24.10)   | rsync        | fix not available

How to check if you are impacted

On your system, run the following command and compare the listed version to the table above.

dpkg -l rsync
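
If you want to script that comparison, the sketch below gives a rough ordering of Debian-style version strings (digit runs compared numerically, everything else lexically). It is an approximation I wrote for illustration; for authoritative results use `dpkg --compare-versions`. The fixed version shown is Jammy’s, taken from the table above.

```python
import re

def deb_key(version: str):
    """Crude approximation of Debian version ordering: split the string
    into digit and non-digit runs, comparing digit runs numerically."""
    return [int(tok) if tok.isdigit() else tok
            for tok in re.findall(r"\d+|\D+", version)]

def is_patched(installed: str, fixed: str) -> bool:
    return deb_key(installed) >= deb_key(fixed)

# Jammy's fixed version from the table above:
print(is_patched("3.2.7-0ubuntu0.22.04.3", "3.2.7-0ubuntu0.22.04.3"))  # True
print(is_patched("3.2.7-0ubuntu0.22.04.1", "3.2.7-0ubuntu0.22.04.3"))  # False
```

Note that this sketch does not handle epochs or the `~` pre-release marker that real dpkg ordering supports, so only trust it for versions of similar shape.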

How to address

We recommend you upgrade all packages:

sudo apt update && sudo apt upgrade

If this is not possible, the affected component can be targeted:

sudo apt update && sudo apt install --only-upgrade rsync

The unattended-upgrades feature is enabled by default for Ubuntu 16.04 LTS onwards. This service applies new security updates every 24 hours automatically. In other words, if you have this enabled, the patches above will be automatically applied within 24 hours of being available.

Acknowledgements

Many thanks to Pedro Gallegos, Simon Scannell, and Jasiel Spelman at Google for researching and reporting these vulnerabilities, to Aleksei Gorban for their research, to Andrew Tridgell and Wayne Davison from rsync for creating security patches, and to CERT/CC’s VINCE for vulnerability coordination.

References

https://www.openwall.com/lists/oss-security/2025/01/14/3
https://www.kb.cert.org/vuls/id/952657
https://www.mail-archive.com/rsync-announce@lists.samba.org/msg00114.html

14 January, 2025 06:41PM

hackergotchi for Volumio

Volumio

The Palani! First Community Awards winner

Congratulations to the first-edition Winner of Community Awards!​

Palani by Abeta Concept

The Palani is a high-resolution DAC streamer with dual touchscreens that embodies elegance and innovation.

Honoured as Volumio’s inaugural Community Award winner, it captivates with its exceptional casing, commitment to quality, and analogue-style vu-meters.

Its ability to operate on battery power truly enhances its appeal: running from the battery acts as a passive power supply, delivering clean power and cutting out any mains feedback.

The Palani handles sound with a silken touch, ensuring an audiophile experience.

The variety of controls (touchscreen, app, AirPlay, multiroom and others) adds the cherry on top, making it a truly polished product!

Discover the inspiring features of the Palani through this video from its creator or on their website:

The post The Palani! First Community Awards winner appeared first on Volumio.

14 January, 2025 10:56AM by Volumio (leo@volumio.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Your data applications, contained and maintained

Introducing trusted open source database containers 

<noscript> <img alt="" height="1440" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_2560,h_1440/https://ubuntu.com/wp-content/uploads/034c/Your-data-applications-no-button.png" width="2560" /> </noscript>

It’s time to stop proclaiming that “cloud native is the future”. Kubernetes has just celebrated its 10th anniversary, and 76% of respondents to the latest CNCF Annual Survey reported that they have adopted cloud native technologies, like containers, for much or all of their production development and deployment. Cloud native isn’t the future – it’s here and now.

Data-intensive workloads are no exception. On the contrary, The Voice of Kubernetes Experts Report 2024 found that 97% of organizations are running data workloads on cloud native platforms, with 72% of databases and 67% of analytics services being run on Kubernetes. 

Database containers are driving major improvements in scalability, flexibility, operational simplicity, and cost. But managing such stateful solutions on containers, often built using multiple open-source components, is also causing no small number of headaches for site reliability engineers, platform engineers, and CISOs alike. Alongside considerable complexity, containers can also introduce security and compliance risks due to uncertain image provenance, large attack surfaces, and lack of timely CVE fixes – particularly when developers build them themselves using the latest versions of open-source components.

In this blog, we’ll explain Canonical’s answer to the data container dilemma. In short, we’ve created a portfolio of securely designed, minimal, and fully maintained data application container images that enable organizations to enjoy the full benefits of cloud native architecture without compromising security or compounding operational complexity.

Canonical’s database containers

At Canonical, we know a thing or two about maintaining open source software – it’s what we’ve been doing for over 20 years. And that’s not just Ubuntu, we also maintain more than 36,000 additional packages from across the wider open source ecosystem. Now, we’re extending that same industry-leading expertise to data application containers.

So what does this mean in practice? It means that we’ve built enterprise-grade container images, designed from the ground up with security in mind following industry best practices. It means that we continuously monitor and rapidly address CVEs affecting the containers, with fixes for critical vulnerabilities available within 24 hours on average. And it means that we maintain and support each container image for up to 12 years with Ubuntu Pro.

Our database containers are fully OCI-compliant, and can run on any OCI-compliant platform, including Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and Red Hat OpenShift. What’s more, they can run on any operating system. 

Our goal is to give organizations a single source for trusted, securely designed, and maintained open source containers that they can confidently deploy in production. You know where your images come from, you know that they are optimally and consistently packaged, and you know that they will receive regular updates and CVE fixes. 

Supply chain security has never been more important – it’s at the heart of Europe’s new Cyber Resilience Act (CRA), and other similar regulations are likely to follow. Our secure-by-design containers enable you to meet the requirements of these standards head-on.

We provide two varieties of container to meet the needs of different users. On one hand, we have standard OCI containers that include everything you need to develop, debug and run your applications with your preferred databases. On the other hand, we deliver ultra-small containers with a minimal attack surface – we call those “chiseled” containers.

Minimal containers, minimal attack surface

In the world of containers, size matters. The larger a container image is, the larger its attack surface, and the more susceptible it is to vulnerabilities. With that in mind, we’ve created truly minimal database container images called chiseled containers.

Building on the concept of distroless containers, chiseled containers deliver only the application and its runtime dependencies, with no other operating system-level packages, utilities, or libraries. They are rootless, and include no package manager or shell. This results in a minimal footprint that trims up to 80% of a traditional container’s attack surface. Unlike typical distroless containers, however, they maintain strong compatibility with Ubuntu-based workflows and tools, which makes them perfect for enterprises already using Ubuntu, while still being able to run on any OS.

Let’s take Valkey as an example. Whereas a full-blown Valkey container is approximately 320MB, chiseled Valkey is just 26.7MB.

The drastically reduced size of chiseled containers – which inherently reduces the number of potential vulnerabilities and attack vectors – makes them ideal for production. At the same time, the cut-down nature of the images makes them lighter, faster to build in CI pipelines, and in many cases more performant.

Everything you need in one container

An entirely stripped down container is great, but alone may not be sufficient for the most scalable use cases. Some organizations need a more comprehensive solution with all the bells and whistles – tools, libraries, configuration options, lifecycle management, and plugins. For these scenarios, we integrate charms with our containers to augment the images with the benefits of software operators.

Charms are complete solutions that integrate with the containers to provide configuration management, monitoring, backup, high availability, and automation tools, along with many of the most popular plugins where appropriate. In other words, you get a full solution composed of a robust set of containers and everything you need to run and operate your database.

Custom database containers for your use case

In the cloud-native era, enterprises often need the freedom to build their own containers, tailored to their unique requirements. One-size-fits-all database container configurations won’t always address the diverse needs of every organization. Generic container images are designed for broad applicability, but they may lack specific libraries or components crucial for your workloads. By composing custom containers, enterprises can ensure that their solutions are optimized for their use cases, and compliant with their internal policies.

However, maintaining the security, consistency, and stability of custom-built images is a highly challenging and time-consuming undertaking. This is where our Container Build Service comes in.

With Container Build Service, our team will custom build minimal and optimized containers for any data solution you need – and we will maintain those containers for you for up to 12 years, with the same rigor we apply to securing Ubuntu and the other data application containers described above. Whatever your specific requirements or use cases, Container Build Service ensures that you can benefit from expertly built and security maintained containers.

Get in touch to discuss your custom container requirements.

Learn more about Canonical’s data solutions.

14 January, 2025 10:40AM

Ubuntu Blog: How to build your first model using DSS

Introduction

GenAI is everywhere, and it’s changing how we approach technology. If you’ve ever wanted to dive into the world of large language models (LLMs) but felt intimidated, there’s good news! Hugging Face recently launched a self-paced course that’s perfect for beginners and more experienced enthusiasts alike. It’s hands-on, approachable, and designed to work on standard hardware thanks to the small footprint of the models.

As soon as I heard the news, I decided to give it a go using Canonical’s Data Science Stack (DSS).

In this blog, I’ll guide you through setting up DSS and running the first notebook of Hugging Face’s course. This notebook focuses on supervised fine-tuning, a methodology to adapt pre-trained language models to specific tasks or domains. By the end of this post, you’ll see just how simple and accessible GenAI can be – a perfect new skill to kick off the New Year.

Setting up your environment

Before we dive into the course, let’s set up the foundation. DSS makes configuring your data science environment straightforward and quick. Installing DSS and its dependencies will take you two minutes by following this guide (just install it, do not launch a notebook just yet). Go ahead, I’ll wait for you here!

NOTE: DSS supports both Intel and NVIDIA GPUs, so you don’t need to worry about compatibility. If you do have a GPU, read the guide to enable GPU support.

.

.

.

Ok! Now that you have DSS ready on your machine, let’s go ahead and create a new notebook:

dss create hf-smol-course --image=jupyter/minimal-notebook

This command will create a new notebook named hf-smol-course. I am using a minimal image from the Jupyter registry so that we can start from a bare environment and install the Smol Course dependencies without any conflict. 

Now log into your notebook by navigating to the URL shown by dss list.

Once into the notebook, open a terminal and clone the Hugging Face Smol Course repository.

NOTE: It’s important to store your work inside the shared folder, so that it will persist across reboots.

cd shared
git clone https://github.com/huggingface/smol-course
cd smol-course
pip install -r requirements.txt
pip install ipywidgets

Installing the requirements might take a little bit of time due to the various CUDA dependencies.

NOTE: the ipywidgets package is necessary to log into Hugging Face from your notebook.

Refresh your browser to make sure that the widgets library is properly loaded.

Notice how quick and easy it was to set up a GPU-enabled Python environment from scratch. Typically, you would have gone through many more steps: installing Python, creating virtual environments, juggling CUDA drivers, and then maybe ending up with a working environment.

Get your Hugging Face token

Before getting started, make sure you have a Hugging Face token ready to use. If you don’t, head over to huggingface.co, click on your profile image, then click Access Tokens. Create a new write token for your notebooks. You will use this token at the beginning of the course notebooks to log into the HF API.
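
If you prefer not to paste the token into a cell, you can read it from an environment variable instead. This is a small convenience sketch of my own, not part of the course (the HF_TOKEN variable name and the get_hf_token helper are assumptions):

```python
import os

def get_hf_token() -> str:
    """Return the Hugging Face token from the environment.

    Keeping the token in an environment variable avoids hard-coding
    a secret into notebook cells that might end up committed to git.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; create a write token under "
            "huggingface.co -> Access Tokens and export it first."
        )
    return token

# Inside the notebook you would then pass it to the hub login helper:
# from huggingface_hub import login
# login(token=get_hf_token())
```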

This is a self-paced course, and at this point you have your environment set up and can continue exploring all the notebooks. Let’s dive into one of them, just for fun.

Exploring Supervised Fine-Tuning

Let’s get our hands dirty!

Enter the 1_instruction_tuning folder and take a look at the README, which explains the concepts of chat templates and supervised fine-tuning. It is good background in case you are not familiar with these techniques (do go over the chat template notebook first if you are not familiar with chat templates).
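
To make the chat-template idea concrete, here is a deliberately simplified sketch of what one does. The ChatML-style role markers below are an assumption for illustration only; in the course, formatting is handled by the tokenizer's apply_chat_template method, and the exact markers depend on the model:

```python
def apply_chat_template(messages: list[dict]) -> str:
    """Toy stand-in for tokenizer.apply_chat_template: wrap each message
    in role markers so the model can tell the speakers apart."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    return "\n".join(parts)

chat = [{"role": "user", "content": "Write a haiku about programming"}]
print(apply_chat_template(chat))
```
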
Open the sft_finetuning_example notebook and run through its first few cells. After a few imports, the notebook tests the base model before fine-tuning, to show you how useless a base model actually is:

<noscript> <img alt="" height="946" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_1600,h_946/https://ubuntu.com/wp-content/uploads/c688/screenshot-haiku.png" width="1600" /> </noscript>

The base model’s output is nonsensical; we need to fine-tune it. The notebook then walks you through dataset preparation and training.
Unless you have a beefy GPU, I suggest lowering the training parameters so you can experiment quickly. You can always go back later and set higher training parameters to compare results.

In my case I am setting max_steps=300 and eval_steps=30. Training the model took me about 20 minutes:
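
You can turn that observation into a rough budget for your own runs. The helper below is my own back-of-the-envelope sketch, not part of the course; the ~4 seconds per step is simply what a 300-step, 20-minute run implies and will differ on your hardware:

```python
def estimate_training_minutes(max_steps: int, seconds_per_step: float) -> float:
    """Rough training-time estimate: total steps times per-step cost."""
    return max_steps * seconds_per_step / 60.0

# 300 steps at ~4 s/step matches the roughly 20 minutes observed above:
print(estimate_training_minutes(300, 4.0))  # 20.0
```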

<noscript> <img alt="" height="723" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_840,h_723/https://ubuntu.com/wp-content/uploads/f080/Screenshot-2025-01-09-at-17.32.18.png" width="840" /> </noscript>

Uploading the model is optional. You can complete the next step of validating the fine-tuned model on the same prompt by adding the following snippet after the TODO comment:

model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=f"./{finetune_name}"
).to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print("After training:")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

My output was the following:

<noscript> <img alt="" height="995" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_972,h_995/https://ubuntu.com/wp-content/uploads/b578/Screenshot-2025-01-09-at-17.32.34.png" width="972" /> </noscript>

The model started to make some sense, but it obviously needs more training. What was your result?

Why DSS Makes It Easier

As I went through the notebook, one thing stood out: DSS removes all the usual setup headaches. With pre-installed libraries, GPU support, and a seamless interface, I didn’t have to troubleshoot driver issues or spend hours configuring dependencies. Instead, I could focus entirely on learning and experimenting.

Also, whenever you are done with your experimentation, you can easily clean up by simply removing the notebook with

dss remove <notebook name>

without leaving any trace behind.

This experience was a reminder of why tools like DSS are game-changers. They lower the barriers to entry, allowing more people to explore fields like data science and GenAI without getting bogged down by technical challenges.

Next Steps

So for this new year, give yourself the gift of learning. With DSS and Hugging Face’s course, you’re just a few steps away from mastering the basics of GenAI. Who knows? This might be the start of your journey into a fascinating new field.

I will write soon about other aspects of DSS, such as how you can leverage MLFlow to take your machine learning experiments to the next level. For now, enjoy the beginning of 2025 and have fun learning more about LLMs and GenAI!

Explore more about Data Science Stack: https://ubuntu.com/ai/data-science

14 January, 2025 08:29AM

hackergotchi for GreenboneOS

GreenboneOS

December 2024 Threat Report: Sunsetting a Record Year for IT Risk

In 2024, geopolitical instability, marked by conflicts in Ukraine and the Middle East, emphasized the need for stronger cybersecurity in both the public and private sectors. China targeted U.S. defense, utilities, internet providers and transportation, while Russia launched coordinated cyberattacks on U.S. and European nations, seeking to influence public opinion and create discord among Western allies over the Ukrainian war. As 2024 ends, we can look back at a hectic year for the cybersecurity landscape.

2024 marked another record-setting year for CVE (Common Vulnerabilities and Exposures) disclosures. Even if many are so-called “AI slop” reports [1][2], the sheer volume of published vulnerabilities creates an ever larger haystack, and as IT security teams search it for high-risk needles, the chance of oversight grows. 2024 was also a record year for ransomware payouts, in both volume and size, and for Denial of Service (DoS) attacks.

It also saw the NIST NVD outage, which affected many organizations around the world, including security providers. Greenbone’s CVE scanner relies on CPE (Common Platform Enumeration) matching and was therefore affected by the NIST NVD outage. However, Greenbone’s primary scanning engine, OpenVAS Scanner, is unaffected: OpenVAS interacts directly with services and applications, allowing Greenbone’s engineers to build reliable vulnerability tests from the details in initial CVE reports.

In 2025, fortune will favor organizations that are prepared. Attackers are weaponizing cyber-intelligence faster; average time-to-exploit (TTE) is mere days, even hours. The rise of AI will create new challenges for cybersecurity. Alongside these advancements, traditional threats remain critical for cloud security and software supply chains. Security analysts predict that fundamental networking devices such as VPN gateways, firewalls and other edge devices will continue to be a hot target in 2025.

In this edition of our monthly Threat Report, we review the most pressing vulnerabilities and active exploitation campaigns that emerged in December 2024.

Mitel MiCollab: Zero-Day to Actively Exploited in a Flash

Once vulnerabilities are published, attackers are jumping on them with increased speed. Some vulnerabilities have public proof of concept (PoC) exploit code within hours, leaving defenders with minimal reaction time. In early December, researchers at GreyNoise observed exploitation of Mitel MiCollab the same day that PoC code was published. Mitel MiCollab combines voice, video, messaging, presence and conferencing into one platform. The new vulnerabilities have drawn alerts from the Belgian national Center for Cybersecurity, the Australian Signals Directorate (ASD) and the UK’s National Health Service (NHS) in addition to the American CISA (Cybersecurity and Infrastructure Security Agency). Patching the recent vulnerabilities in MiCollab is considered urgent.

Here are details about the new actively exploited CVEs in Mitel MiCollab:

  • CVE-2024-41713 (CVSS 7.8 High): A path traversal vulnerability in the NuPoint Unified Messaging (NPM) component of Mitel MiCollab allows unauthenticated path traversal by leveraging the “../” technique in HTTP requests. Exploitation can expose highly sensitive files.
  • CVE-2024-35286 (CVSS 10 Critical): A SQL injection vulnerability has been identified in the NPM component of Mitel MiCollab which could allow a malicious actor to conduct a SQL injection attack.
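
As a generic illustration of this bug class (this is not Mitel’s code): naive substring filters for traversal sequences are easy to bypass, which is why robust handlers normalize the requested path and verify it stays under the intended base directory. A hypothetical sketch:

```python
import posixpath

BASE = "/var/www/files"

def naive_is_safe(requested: str) -> bool:
    # Naive filter: reject requests containing a literal "../" sequence.
    # URL-encoded or otherwise obfuscated sequences slip straight through.
    return "../" not in requested

def resolved_path(requested: str) -> str:
    # Robust check: join, normalize, then verify the result is still
    # inside the base directory before touching the filesystem.
    full = posixpath.normpath(posixpath.join(BASE, requested))
    if full != BASE and not full.startswith(BASE + "/"):
        raise ValueError("path traversal attempt")
    return full

print(naive_is_safe("..%2f..%2fetc/passwd"))  # True: the encoded form passes the naive filter
print(resolved_path("reports/q4.txt"))        # /var/www/files/reports/q4.txt
```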

Since mid-2022, CISA has tracked three additional actively exploited CVEs in Mitel products which are known to be leveraged in ransomware attacks. Greenbone is able to detect endpoints vulnerable to these high severity CVEs with active checks [4][5].

Array Networks SSL VPNs Exploited by Ransomware

CVE-2023-28461 (CVSS 9.8 Critical) is a Remote Code Execution (RCE) vulnerability in Array Networks Array AG Series and vxAG SSL VPN appliances. The devices, touted by the vendor as a preventative measure against ransomware, are now being actively exploited in recent ransomware attacks. Array Networks themselves were breached by the Dark Angels ransomware gang earlier this year [1][2].

According to recent reports, Array Networks holds a significant market share in the Application Delivery Controller (ADC) market. According to IDC’s WW Quarterly Ethernet Switch Tracker, they are the market leader in India, with a market share of 34.2%. Array Networks has released patches for affected products running ArrayOS AG 9.4.0.481 and earlier versions. The Greenbone Enterprise Feed has included a detection test for CVE-2023-28461 since it was disclosed in late March 2023.

CVE-2024-11667 in Zyxel Firewalls

CVE-2024-11667 (CVSS 9.8 Critical), a directory traversal vulnerability in the web management interface of Zyxel firewall appliances, is being actively exploited in ongoing ransomware attacks. It could allow an attacker to download or upload files via a maliciously crafted URL. Zyxel Communications is a Taiwanese company specializing in designing and manufacturing networking devices for businesses, service providers and consumers. Reports put Zyxel’s market share at roughly 4.2% of the ICT industry, with a diverse global footprint including large Fortune 500 companies.

A defense in depth approach to cybersecurity is especially important in cases such as this. When attackers compromise a networking device such as a firewall, typically they are not immediately granted access to highly sensitive data. However, initial access allows attackers to monitor network traffic and enumerate the victim’s network in search of high value targets.

Zyxel advises updating your device to the latest firmware, temporarily disabling remote access if updates cannot be applied immediately and applying their best practices for securing distributed networks. CVE-2024-11667 affects Zyxel ATP series firmware versions V5.00 through V5.38, USG FLEX series firmware versions V5.00 through V5.38, USG FLEX 50(W) series firmware versions V5.10 through V5.38 and USG20(W)-VPN series firmware versions V5.10 through V5.38. Greenbone can detect the vulnerability CVE-2024-11667 across all affected products.
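
For triage, the affected ranges above can be checked mechanically. The helper below is a hypothetical sketch based only on the version ranges quoted in the advisory (the series names and parsing are my assumptions), not one of Greenbone’s detection tests:

```python
def parse_version(v: str) -> tuple:
    # "V5.38" -> (5, 38)
    return tuple(int(x) for x in v.lstrip("Vv").split("."))

# Affected ranges quoted in the advisory above: (series, first, last)
AFFECTED = [
    ("ATP",            "V5.00", "V5.38"),
    ("USG FLEX",       "V5.00", "V5.38"),
    ("USG FLEX 50(W)", "V5.10", "V5.38"),
    ("USG20(W)-VPN",   "V5.10", "V5.38"),
]

def is_affected(series: str, firmware: str) -> bool:
    for name, lo, hi in AFFECTED:
        if name == series:
            return parse_version(lo) <= parse_version(firmware) <= parse_version(hi)
    return False

print(is_affected("ATP", "V5.20"))  # True
print(is_affected("ATP", "V5.39"))  # False
```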

Critical Flaws in Apache Struts 2

CVE-2024-53677 (CVSS 9.8 Critical), an unrestricted file upload [CWE-434] flaw affecting Apache Struts 2, allows attackers to upload executable files into web-root directories. If a web shell is uploaded, the flaw may lead to unauthorized Remote Code Execution. Apache Struts is an open-source Java-based web application framework widely used by the public and private sectors, including government agencies, financial institutions and other large organizations [1]. Proof-of-concept (PoC) exploit code is publicly available, and CVE-2024-53677 is being actively exploited, increasing its risk.

The vulnerability was originally tracked as CVE-2023-50164, published in December 2023 [2][3]. However, similarly to a recent flaw in VMware vCenter, the original patch was ineffective, resulting in the re-emergence of the vulnerability. CVE-2024-53677 affects the FileUploadInterceptor component; applications not using this module are unaffected. Users should update their Struts 2 instance to version 6.4.0 or higher and migrate to the new file upload mechanism. Other new critical CVEs were also published in popular open-source software (OSS) from Apache.
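
The defensive idea behind fixing CWE-434 can be sketched generically (this is not the Struts code or its new upload mechanism): validate the submitted file name server-side against an extension allow-list and strip any path components before writing. A hypothetical example:

```python
import os.path

# Assumption for illustration: the application only expects documents and images.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}

def safe_upload_name(filename: str) -> str:
    """Reject executable uploads and path tricks before saving a file."""
    # Strip any directory components the client may have smuggled in.
    name = os.path.basename(filename.replace("\\", "/"))
    ext = os.path.splitext(name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type {ext!r} not allowed")
    return name

print(safe_upload_name("report.pdf"))  # report.pdf
# safe_upload_name("../../webapps/ROOT/shell.jsp") raises ValueError
```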

The Apache Software Foundation (ASF) follows a structured process across its projects that encourages private reporting and releasing patches prior to public disclosure, so patches are available for all CVEs mentioned above. Greenbone is able to detect systems vulnerable to CVE-2024-53677 and other recently disclosed vulnerabilities in ASF products.

Palo Alto’s Secure DNS Actively Exploited for DoS

CVE-2024-3393 (CVSS 8.7 High) is a DoS (Denial of Service) vulnerability in the DNS Security feature of PAN-OS. The flaw allows an unauthenticated attacker to reboot PA-Series firewalls, VM-Series firewalls, CN-Series firewalls and Prisma Access devices via malicious packets sent through the data plane. By repeatedly triggering this condition, attackers can cause the firewall to enter maintenance mode. CISA has identified CVE-2024-3393 as actively exploited; it joins five other actively exploited vulnerabilities in Palo Alto products over just the past two months.

According to the advisory posted by Palo Alto, only devices with a DNS Security License or Advanced DNS Security License and logging enabled are affected, which suggests that primarily top-tier enterprise customers are exposed. Greenbone is able to detect the presence of devices affected by CVE-2024-3393 with a version detection test.

Microsoft Security in 2024: Who Left the Windows Open?

While it would be unfair to single out Microsoft for providing vulnerable software in 2024, the Redmond BigTech certainly didn’t beat security expectations. A total of 1,119 CVEs were disclosed in Microsoft products in 2024; 53 achieved critical severity (CVSS > 9.0), 43 were added to CISA’s Known Exploited Vulnerabilities (KEV) catalog, and at least four were known vectors for ransomware attacks. Although the comparison is rough, the Linux kernel saw more (3,148) new CVEs but only three were rated critical severity and only three were added to CISA KEV. Here are the details of the new actively exploited CVEs in Microsoft Windows:

  • CVE-2024-35250 (CVSS 7.8 High): A privilege escalation flaw allowing an attacker with local access to a system to gain system-level privileges. The vulnerability was discovered in April 2024, and PoC exploit code appeared online in October.
  • CVE-2024-49138 (CVSS 7.8 High): A heap-based buffer overflow [CWE-122] privilege escalation vulnerability; this time in the Microsoft Windows Common Log File System (CLFS) driver. Although no publicly available exploit exists, security researchers have evidence that this vulnerability can be exploited by crafting a malicious CLFS log to execute privileged commands at the system privilege level.

Detection and mitigation of these new Windows CVEs is critical since they are actively under attack. Both were patched in Microsoft’s December patch release. Greenbone is able to detect CVE-2024-35250 and CVE-2024-49138 as well as all other Microsoft vulnerabilities published as CVEs.

Summary

2024 highlighted the continuously challenging cybersecurity landscape with record-setting vulnerability disclosures, ransomware payouts, DoS attacks and an alarming rise in active exploitations. The rapid weaponization of vulnerabilities emphasizes the need for a continuous vulnerability management strategy and a defense-in-depth approach.

December saw new critical flaws in Mitel, Apache and Microsoft products. More network products, Array Networks VPNs and Zyxel firewalls, are now being exploited by ransomware threat actors, underscoring the urgency of proactive patching and robust detection measures. As we enter 2025, fortune will favor those who are prepared; organizations must stay vigilant to mitigate risks in an increasingly hostile cyber landscape.

14 January, 2025 08:12AM by Joseph Lee

hackergotchi for ZEVENET

ZEVENET

Hardware Load Balancer vs Software Load Balancer: Which is Right for You?

Load balancers play a critical role in ensuring seamless network traffic distribution, improving the reliability and performance of digital infrastructures. But how do you choose between a hardware load balancer and a software load balancer? In this article, we’ll break down the key differences, benefits, and use cases to help you decide which solution fits your needs.

Hardware Load Balancer

What Is a Hardware Load Balancer?

A hardware load balancer is a physical device designed specifically to manage network traffic. It acts as an intermediary between client requests and backend servers, efficiently distributing workloads to optimize performance and reliability.
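
At its core, “distributing workloads” is a scheduling decision. The toy sketch below shows the simplest policy, round-robin; real appliances layer health checks, session persistence and weighting on top (the names and addresses here are illustrative only):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin scheduler: hand out backends in rotation."""

    def __init__(self, backends: list[str]):
        if not backends:
            raise ValueError("need at least one backend")
        self._cycle = cycle(backends)

    def next_backend(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_backend() for _ in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```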

Key Features

  • Optimized Performance: Purpose-built for high-speed processing, it excels in managing large volumes of traffic.
  • Reliability: Robust hardware ensures uptime and durability in mission-critical environments.

Common Use Cases

Hardware load balancers are often deployed in enterprises where performance, security, and reliability are paramount. Examples include:

  • E-commerce platforms: Ensuring a smooth shopping experience for high numbers of simultaneous users.
  • Banking and financial applications: Maintaining stable and secure operations for sensitive data.
  • Critical services: Supporting healthcare or government platforms where downtime is unacceptable.

When to Choose a Hardware Load Balancer

Consider a hardware load balancer if your organization handles sensitive data, requires high performance, or cannot tolerate service disruptions. These devices provide unmatched stability, especially in large-scale environments.

Software Load Balancer

What Is a Software Load Balancer?

A software load balancer is a flexible application that runs on standard hardware to manage network traffic. Unlike its hardware counterpart, it relies on virtualization and cloud-based technologies to provide load balancing capabilities.

Key Features

  • Flexibility: Easily adapts to varying traffic volumes and integrates seamlessly with virtualized and cloud environments.
  • Cost-Effectiveness: Operates on existing hardware, significantly reducing upfront investment.

Common Use Cases

Software load balancers are ideal for businesses that prioritize scalability and cost savings. Examples include:

  • Startups and growing businesses: Supporting dynamic traffic patterns without major infrastructure investments.
  • Development and testing environments: Enabling agile development processes.
  • Organizations with fluctuating workloads: Scaling up or down effortlessly to meet changing demands.

When to Choose a Software Load Balancer

If your organization needs an adaptable, cost-effective solution that can grow with you, a software load balancer is the ideal choice. Its simplicity and flexibility make it perfect for modern, evolving infrastructures.

Hardware vs. Software: A Comparison

Feature      | Hardware Load Balancer                     | Software Load Balancer
-------------|--------------------------------------------|------------------------------------------
Performance  | Optimized for high traffic volumes.        | Scalable to adapt to traffic spikes.
Cost         | Higher upfront investment.                 | Lower initial cost.
Flexibility  | Fixed capabilities.                        | Highly adaptable to changes.
Maintenance  | Requires physical management.              | Easier to update and scale.
Use Cases    | Critical applications with stable traffic. | Dynamic, growing, or cloud-based setups.

Choosing the right load balancer depends on your organization’s specific needs. For static, high-performance environments, hardware may be the best fit. For adaptable and cost-conscious solutions, software load balancers are ideal.

SKUDONET Load Balancer Solutions

At SKUDONET, we understand that every business has unique needs. That’s why we offer a comprehensive suite of load balancing solutions designed for reliability, scalability, and advanced security.

Our Solutions

SKUDONET combines the flexibility of open-source technology with enterprise-grade features to deliver unparalleled performance, whether you choose a hardware or software platform.

Selecting the right load balancer comes down to understanding your specific requirements. Whether you prioritize the robust stability of hardware or the adaptability of software, SKUDONET has a solution tailored to your needs.

Ready to optimize your infrastructure? Try SKUDONET Enterprise Edition for free with our 30-day fully functional trial and experience the difference firsthand.

TRY SKUDONET ENTERPRISE EDITION

14 January, 2025 07:42AM by Nieves Álvarez

hackergotchi for Deepin

Deepin

How to install graphics card drivers in deepin 23

1. Determine whether a 32-bit graphics card driver is needed

Because some game installers, launchers, game executables and other components mix 32-bit and 64-bit programs, it is recommended to install both the 64-bit and 32-bit graphics card drivers at the same time. In the following, lines beginning with $ are to be executed in the terminal.

1.1 Check the executable file

Use the file command to check the exe file, for example:

$ file installer.exe

If the following content appears, it means that a 32-bit driver needs to be installed:

PE32 executable (GUI) Intel 80386

If the following content appears, it means that a …Read more

14 January, 2025 03:28AM by aida

January 13, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 874

Welcome to the Ubuntu Weekly Newsletter, Issue 874 for the week of January 5 – 11, 2025. The full version of this issue is available here.

In this issue we cover:

  • First Plucky Puffin test rebuild
  • Phased updater and weekends
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Rocks Public Journal; 2025-01-10
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • LoCo Events
  • Ubuntu Studio: 24.04 LTS Backports Megathread
  • Ubuntu Studio: Support and Help Updates
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Cristovao Cordeiro (cjdc) – Rocks
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


13 January, 2025 09:42PM

hackergotchi for GreenboneOS

GreenboneOS

Patch Now! Cleo Products Actively Exploited in Ransomware Attacks

An actively exploited RCE (Remote Code Execution) vulnerability that executes with system privileges and requires no user interaction is as bad as it gets from a technical standpoint. When that CVE impacts software widely used by Fortune 500 companies, it is a ticking time bomb. And when advanced persistent threat actors jump on such a software vulnerability, remediation needs to become an emergency response effort. Most recently, CVE-2024-50623 (also now tracked as CVE-2024-55956), affecting more than 4,200 users of Cleo’s MFT (Managed File Transfer) software, met all these prerequisites for disaster. It has been implicated in active ransomware campaigns affecting several Fortune 500 companies, taking center stage in cybersecurity news.

In this cybersecurity alert, we provide a timeline of events related to CVE-2024-50623 and CVE-2024-55956 and associated ransomware campaigns. Even if you are not using an affected product, this will give you valuable insight into the vulnerability lifecycle and the risks of third-party software supply chains. 

CVE-2024-50623 and CVE-2024-55956: a Timeline of Events

The vulnerability lifecycle is complex. You can review our previous article about next-gen vulnerability management for an in depth explanation on how this process happens. In this report, we will provide a timeline for the disclosure and resolution of CVE-2024-50623 and subsequently CVE-2024-55956 as a failed patch attempt from the software vendor Cleo was uncovered and exploited by ransomware operators. Our Greenbone Enterprise Feed includes detection modules for both CVEs [1][2], allowing organizations to identify vulnerable systems and apply emergency remediation. Here is a timeline of events so far:

  • October 28, 2024: CVE-2024-50623 (CVSS 10 Critical) affecting several Cleo MFT products was published by the vendor and a patched version 5.8.0.21 was released.
  • November 2024: CVE-2024-50623 was exploited for data exfiltration impacting at least 10 organizations globally including Blue Yonder, a supply chain management service used by Fortune 500 companies.
  • December 3, 2024: Security researchers at Huntress identified active exploitation of CVE-2024-50623 capable of bypassing the original patch (version 5.8.0.21).
  • December 8, 2024: Huntress observed a significant uptick in the rate of exploitation. This could be explained by the exploit code being sold in a Malware as a Service cyber crime business model or simply that the attackers had finished reconnaissance and launched a widespread campaign for maximum impact.
  • December 9, 2024: Active exploitation and proof-of-concept (PoC) exploit code was reported to the software vendor Cleo.
  • December 10, 2024: Cleo released a statement acknowledging the exploitability of their products despite security patches and issued additional mitigation guidance.
  • December 11, 2024: watchTowr Labs released a detailed technical report describing how CVE-2024-50623 allows RCE via Arbitrary File Write [CWE-434]. Cleo updated their mitigation guidance and released a subsequent patch (version 5.8.0.24).
  • December 13, 2024: A new name, CVE-2024-55956 (CVSS 10 Critical), was issued for tracking this ongoing vulnerability, and CISA added the flaw to its Known Exploited Vulnerabilities (KEV) catalog, flagged for use in ransomware attacks.

Cleo Products Leveraged in Ransomware Attacks

The risk to global business posed by CVE-2024-50623 and CVE-2024-55956 is high. These two CVEs potentially impact more than 4,200 customers of Cleo LexiCom, a desktop-based client for communication with major trading networks, Cleo VLTrader, a server-level solution tailored for mid-enterprise organizations, and Cleo Harmony for large enterprises.

The CVEs have been used as initial access vectors in a recent ransomware campaign. The Termite ransomware operation [1][2] has been implicated in the exploitation of Blue Yonder, a Panasonic subsidiary, in November 2024. Blue Yonder is a supply chain management platform used by large tech companies including Microsoft, Lenovo, and Western Digital, and roughly 3,000 other global enterprises across many industries; Bayer, DHL, and 7-Eleven to name a few. Downtime of Blue Yonder’s hosted service caused payroll disruptions for Starbucks. The Clop ransomware group has also claimed responsibility for recent successful ransomware attacks.

In the second stage of some breaches, attackers conducted Active Directory domain enumeration [DS0026], installed web-shells [T1505.003] for persistence [TA0003], and attempted to exfiltrate data [TA0010] from the victim’s network after gaining initial access via RCE. An in-depth technical description of the Termite ransomware’s architecture is also available.

Mitigating CVE-2024-50623 and CVE-2024-55956

Instances of Cleo products version 5.8.0.21 are still vulnerable to cyber attacks. The most recent patch, version 5.8.0.24 is required to mitigate exploitation. All users are urged to apply updates with urgency. Additional mitigation and best practices include disabling the autorun functionality in Cleo products, removing access from the Internet or using firewall rules to restrict access to only authorized IP addresses, and blocking the IP addresses of endpoints implicated in the attacks.

Summary

Cleo Harmony, VLTrader, and LexiCom prior to version 5.8.0.24 are under active exploitation due to critical RCE vulnerabilities (CVE-2024-50623 and CVE-2024-55956). These flaws have been the entry point for successful ransomware attacks against at least 10 organizations and impacting Fortune 500 companies. Greenbone provides detection for affected products and affected users are urged to apply patches and implement mitigation strategies, as attackers will certainly continue to leverage these exploits.

13 January, 2025 11:11AM by Joseph Lee

January 11, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Support and Help Updates

We have always strived to give our users the best support options. After asking the community via a thread on Ubuntu Discourse and being given positive feedback, we have decided to move our primary support channel from Ask Ubuntu to Ubuntu Discourse.

Ask Ubuntu, which was run outside of Ubuntu Governance, was a great idea in its time, but as time has gone on, it has become difficult for the moderators to moderate as its host, StackExchange, has made questionable decisions, including shutting down OpenSSO, which effectively disabled many accounts that were exclusively linked to Launchpad, without recovery. StackExchange has been uncooperative in re-enabling this link to Launchpad, leaving many users, who had earned higher privileges through their participation, having to start over.

Additionally, as stated long before, the Ubuntu Forums section for Ubuntu Studio has long been dead. The Ubuntu Forums, which were officially under Ubuntu Governance, also found themselves in a position where their forum software could not be upgraded any further. As a result, on Thursday, January 9, 2025, they officially shut down. Over the two months prior, support had already been transitioning to Ubuntu Discourse with much success.

As such, with the community feedback, Ubuntu Studio’s primary support will be changing to Ubuntu Discourse. The support links will be changing over in the menu for all supported versions of Ubuntu Studio (as of this writing, 22.04 LTS, 24.04 LTS, and 24.10), and the Ask Ubuntu section on the website will change to Ubuntu Discourse.

Special Non-Support/Help Community Section

A new icon appearing in the Ubuntu Studio Information menu is “Connect with Community”. This will take you to the special Ubuntu Studio section of the Ubuntu Discourse where, while support and help questions aren’t allowed, other discussions are. This is also where you will find future release notes along with the newest LTS Backports Megathread for any application backport requests you may have.

Overall, this will be a great place to connect with other members of the community and interact with developers.


Small update on 22.04 LTS to 24.04 LTS upgrades

It has been confirmed that a “quirk” needs to be added to ubuntu-release-upgrader that forces an installation of pipewire-audio during the upgrade calculation. A member of the team that maintains the upgrader has taken this on and is working on a fix. Please stay tuned for further updates.

11 January, 2025 08:02PM

January 09, 2025

Nobuto Murata: How to prevent TrackPoint or touchpad events from waking up ThinkPad T14 Gen 5 AMD from suspend

TL;DR

Try the following lines in your custom udev rules, e.g.
/etc/udev/rules.d/99-local-disable-wakeup-events.rules

KERNEL=="i2c-ELAN0676:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"
KERNEL=="PNP0C0E:00", SUBSYSTEM=="acpi", DRIVERS=="button", ATTRS{path}=="\_SB_.SLPB", ATTR{power/wakeup}="disabled"

The motivation

Whenever something touches the red cap, the system wakes up from suspend/s2idle.

I’ve used a ThinkPad T14 Gen 3 AMD for 2 years, and I recently purchased a T14 Gen 5 AMD. The previous system, the Gen 3, annoyed me so much because the laptop randomly woke up from suspend on its own, even inside a backpack, heated up the confined air in it, and drained the battery pretty fast as a consequence. Basically, it was too sensitive to any events. For example, the system woke up from suspend whenever a USB Type-C cable was plugged in as a power source, or whenever something touched the TrackPoint, even if the display on the closed lid made slight contact with the red cap. It was uncontrollable.

I was hoping that Gen 5 would make a difference, and it did for the power-source event. However, frequent wakeups due to TrackPoint events remained the same, so I started to dig in.

Disabling touchpad as a wakeup source on T14 Gen 5 AMD

Disabling touchpad events as a wakeup source is straightforward. The touchpad device, ELAN0676:00 04F3:3195 Touchpad, can be found in the udev device tree as follows.

$ udevadm info --tree
...

 └─input/input12
   ┆ P: /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
   ┆ M: input12
   ┆ R: 12
   ┆ U: input
   ┆ E: DEVPATH=/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
   ┆ E: SUBSYSTEM=input
   ┆ E: PRODUCT=18/4f3/3195/100
   ┆ E: NAME="ELAN0676:00 04F3:3195 Touchpad"
   ┆ E: PHYS="i2c-ELAN0676:00"

And you can then get all attributes, including those of parent devices, like the following.

$ udevadm info --attribute-walk -p /devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12
...

  looking at device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/0018:04F3:3195.0001/input/input12':
    KERNEL=="input12"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="ELAN0676:00 04F3:3195 Touchpad"
    ATTR{phys}=="i2c-ELAN0676:00"

...

  looking at parent device '/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00':
    KERNELS=="i2c-ELAN0676:00"
    SUBSYSTEMS=="i2c"
    DRIVERS=="i2c_hid_acpi"
    ATTRS{name}=="ELAN0676:00"
    ...
    ATTRS{power/wakeup}=="enabled"

The line I’m looking for is ATTRS{power/wakeup}=="enabled". By using the identifiers of the parent device that has ATTRS{power/wakeup}, I can make sure that /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup is always disabled with the custom udev rule as follows.

KERNEL=="i2c-ELAN0676:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"
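
Once the rule file is in place under /etc/udev/rules.d/, it can be applied without rebooting by reloading the udev rules and re-triggering devices; the sysfs path below is the Gen 5 touchpad path from the attribute walk above:

```shell
# Reload udev rules and re-apply them to existing devices
sudo udevadm control --reload
sudo udevadm trigger

# Verify that the attribute is now disabled
cat /sys/devices/platform/AMDI0010:01/i2c-1/i2c-ELAN0676:00/power/wakeup
```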

Disabling TrackPoint as a wakeup source on T14 Gen 5 AMD

I’ve seen a pattern already as above so I should be able to apply the same method. The TrackPoint device, TPPS/2 Elan TrackPoint, can be found in the udev device tree.

$ udevadm info --tree
...

 └─input/input5
   ┆ P: /devices/platform/i8042/serio1/input/input5
   ┆ M: input5
   ┆ R: 5
   ┆ U: input
   ┆ E: DEVPATH=/devices/platform/i8042/serio1/input/input5
   ┆ E: SUBSYSTEM=input
   ┆ E: PRODUCT=11/2/a/63
   ┆ E: NAME="TPPS/2 Elan TrackPoint"
   ┆ E: PHYS="isa0060/serio1/input0"

And the information of parent devices too.

$ udevadm info --attribute-walk -p /devices/platform/i8042/serio1/input/input5
...

  looking at device '/devices/platform/i8042/serio1/input/input5':
    KERNEL=="input5"
    SUBSYSTEM=="input"
    DRIVER==""
    ...
    ATTR{name}=="TPPS/2 Elan TrackPoint"
    ATTR{phys}=="isa0060/serio1/input0"

...

  looking at parent device '/devices/platform/i8042/serio1':
    KERNELS=="serio1"
    SUBSYSTEMS=="serio"
    DRIVERS=="psmouse"
    ATTRS{bind_mode}=="auto"
    ATTRS{description}=="i8042 AUX port"
    ATTRS{drvctl}=="(not readable)"
    ATTRS{firmware_id}=="PNP: LEN0321 PNP0f13"
    ...
    ATTRS{power/wakeup}=="disabled"

I hit the wall here. ATTRS{power/wakeup}=="disabled" for the i8042 AUX port is already there, but the TrackPoint still wakes up the system from suspend. I had to bisect the remaining wakeup sources.

The list of the remaining wakeup sources

$ cat /proc/acpi/wakeup
Device	S-state	  Status   Sysfs node
GPP0	  S0	*disabled
GPP2	  S3	*disabled
GPP5	  S0	*enabled   pci:0000:00:02.1
GPP6	  S4	*enabled   pci:0000:00:02.2
GP11	  S4	*enabled   pci:0000:00:03.1
SWUS	  S4	*disabled
GP12	  S4	*enabled   pci:0000:00:04.1
SWUS	  S4	*disabled
XHC0	  S3	*enabled   pci:0000:c4:00.3
XHC1	  S4	*enabled   pci:0000:c4:00.4
XHC2	  S4	*disabled  pci:0000:c6:00.0
NHI0	  S3	*enabled   pci:0000:c6:00.5
XHC3	  S3	*enabled   pci:0000:c6:00.3
NHI1	  S4	*enabled   pci:0000:c6:00.6
XHC4	  S3	*enabled   pci:0000:c6:00.4
LID	  S4	*enabled   platform:PNP0C0D:00
SLPB	  S3	*enabled   platform:PNP0C0E:00
 Wakeup sources:
 │  [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:001/wakeup66]: enabled
 │  [/sys/devices/platform/USBC000:00/power_supply/ucsi-source-psy-USBC000:002/wakeup67]: enabled
 │ ACPI Battery [PNP0C0A:00]: enabled
 │ ACPI Lid Switch [PNP0C0D:00]: enabled
 │ ACPI Power Button [PNP0C0C:00]: enabled
 │ ACPI Sleep Button [PNP0C0E:00]: enabled
 │ AT Translated Set 2 keyboard [serio0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] Multimedia controller [0000:c4:00.5]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:03.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:04.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c4:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.5]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:c6:00.6]: enabled
 │ Mobile Broadband host interface [mhi0]: enabled
 │ Plug-n-play Real Time Clock [00:01]: enabled
 │ Real Time Clock alarm timer [rtc0]: enabled
 │ Thunderbolt domain [domain0]: enabled
 │ Thunderbolt domain [domain1]: enabled
 │ USB4 host controller [0-0]: enabled
 └─USB4 host controller [1-0]: enabled
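
While bisecting sources like these, a small helper that turns the device table at the top of /proc/acpi/wakeup into structured data can make it easier to track what is still enabled. A minimal sketch in Python, with sample rows taken from the listing above:

```python
def parse_acpi_wakeup(text):
    """Parse the device table of /proc/acpi/wakeup into a list of dicts."""
    devices = []
    for line in text.splitlines():
        fields = line.split()
        # Data rows look like: NAME  S-state  *enabled|*disabled  [sysfs node]
        if len(fields) >= 3 and fields[2] in ("*enabled", "*disabled"):
            devices.append({
                "device": fields[0],
                "s_state": fields[1],
                "enabled": fields[2] == "*enabled",
                "sysfs": fields[3] if len(fields) > 3 else None,
            })
    return devices

sample = """\
Device\tS-state\t  Status   Sysfs node
GPP0\t  S0\t*disabled
GPP5\t  S0\t*enabled   pci:0000:00:02.1
SLPB\t  S3\t*enabled   platform:PNP0C0E:00
"""

enabled = [d["device"] for d in parse_acpi_wakeup(sample) if d["enabled"]]
print(enabled)  # prints: ['GPP5', 'SLPB']
```

On a live Linux system, the same function can be fed the contents of open("/proc/acpi/wakeup").read().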

Somehow, disabling SLPB “ACPI Sleep Button” stopped undesired wakeups by the TrackPoint.

  looking at parent device '/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00':
    KERNELS=="PNP0C0E:00"
    SUBSYSTEMS=="acpi"
    DRIVERS=="button"
    ATTRS{hid}=="PNP0C0E"
    ATTRS{path}=="\_SB_.SLPB"
    ...
    ATTRS{power/wakeup}=="enabled"

The final udev rule is the following. It also disables wakeup events from the keyboard as a side effect, but opening the lid or pressing the power button can still wake up the system so it works for me.

KERNEL=="PNP0C0E:00", SUBSYSTEM=="acpi", DRIVERS=="button", ATTRS{path}=="\_SB_.SLPB", ATTR{power/wakeup}="disabled"

In the case of ThinkPad T14 Gen 3 AMD

After solving the headache of frequent wakeups on the T14 Gen 5 AMD, I was curious whether I could apply the same fix to the Gen 3 AMD retrospectively. The Gen 3 has the following wakeup sources active out of the box.

 Wakeup sources:
 │ ACPI Battery [PNP0C0A:00]: enabled
 │ ACPI Lid Switch [PNP0C0D:00]: enabled
 │ ACPI Power Button [LNXPWRBN:00]: enabled
 │ ACPI Power Button [PNP0C0C:00]: enabled
 │ ACPI Sleep Button [PNP0C0E:00]: enabled
 │ AT Translated Set 2 keyboard [serio0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] ISA bridge [0000:00:14.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.1]: enabled
 │ Advanced Micro Devices, Inc. [AMD] PCI bridge [0000:00:02.2]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:04:00.4]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.0]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.3]: enabled
 │ Advanced Micro Devices, Inc. [AMD] USB controller [0000:05:00.4]: enabled
 │ ELAN0678:00 04F3:3195 Mouse [i2c-ELAN0678:00]: enabled
 │ Mobile Broadband host interface [mhi0]: enabled
 │ Plug-n-play Real Time Clock [00:01]: enabled
 └─Real Time Clock alarm timer [rtc0]: enabled

Disabling the touchpad event was straightforward. The only difference from Gen 5 was the ID of the device.

KERNEL=="i2c-ELAN0678:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="disabled"

When it comes to the TrackPoint or power-source events, nothing was able to stop them from waking up the system, even after disabling all wakeup sources. Then I came across a hidden gem named amd_s2idle.py. This “S0i3/s2idle analysis script for AMD systems” is full of s2idle domain knowledge: where to look in /proc or /sys, how to enable debugging, and which parts of the logs are important.

By running the script, I got the following output around the unexpected wakeup.

$ sudo python3 ./amd_s2idle.py --debug-ec --duration 30
Debugging script for s2idle on AMD systems
💻 LENOVO 21CF21CFT1 (ThinkPad T14 Gen 3) running BIOS 1.56 (R23ET80W (1.56 )) released 10/28/2024 and EC 1.32
🐧 Ubuntu 24.04.1 LTS
🐧 Kernel 6.11.0-12-generic
🔋 Battery BAT0 (Sunwoda ) is operating at 90.91% of design
Checking prerequisites for s2idle
✅ Logs are provided via systemd
✅ AMD Ryzen 7 PRO 6850U with Radeon Graphics (family 19 model 44)
...

Suspending system in 0:00:02
Suspending system in 0:00:01

Started at 2025-01-04 00:46:53.063495 (cycle finish expected @ 2025-01-04 00:47:27.063532)
Collecting data in 0:00:02
Collecting data in 0:00:01

Results from last s2idle cycle
💤 Suspend count: 1
💤 Hardware sleep cycle count: 1
○ GPIOs active: ['0']
🥱 Wakeup triggered from IRQ 9: ACPI SCI
🥱 Wakeup triggered from IRQ 7: GPIO Controller
🥱 Woke up from IRQ 7: GPIO Controller
❌ Userspace suspended for 0:00:14.031448 (< minimum expected 0:00:27)
💤 In a hardware sleep state for 0:00:10.566894 (75.31%)
🔋 Battery BAT0 lost 10000 µWh (0.02%) [Average rate 2.57W]
Explanations for your system
🚦 Userspace wasn't asleep at least 0:00:30
        The system was programmed to sleep for 0:00:30, but woke up prematurely.
        This typically happens when the system was woken up from a non-timer based source.

        If you didn't intentionally wake it up, then there may be a kernel or firmware bug

I compared all the logs generated between the events of power button, power source, TrackPoint, and touchpad. But except for the touchpad event, everything else was coming from GPIO pin #0, and there was no further information on how to distinguish those wakeup triggers. I ended up with the drastic approach of completely ignoring wakeup triggers from GPIO pin #0 with the following kernel option.

gpiolib_acpi.ignore_wake=AMDI0030:00@0
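
To make this persistent across reboots on Ubuntu, the option can be appended to the kernel command line through GRUB. A sketch assuming the stock /etc/default/grub layout (the existing quiet splash values are kept):

```shell
# In /etc/default/grub, append the option to the default command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash gpiolib_acpi.ignore_wake=AMDI0030:00@0"

# Then regenerate the GRUB configuration and reboot:
sudo update-grub
```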

And I get the line on each boot.

kernel: amd_gpio AMDI0030:00: Ignoring wakeup on pin 0

That comes with obvious downsides. The system no longer wakes up spuriously, which is good. However, nothing can wake it up after it enters suspend: opening the lid, pressing the power button, or pressing any key is simply ignored, since all of those go through GPIO pin #0. In the end, I had to explicitly re-enable the touchpad as a wakeup source so the system can be woken by tapping the touchpad. It’s far from ideal, but the touchpad is less sensitive than the TrackPoint, so I will keep it that way.

KERNEL=="i2c-ELAN0678:00", SUBSYSTEM=="i2c", DRIVERS=="i2c_hid_acpi", ATTR{power/wakeup}="enabled"

I guess the limitation comes from the firmware more or less, but at the same time I don’t expect fixes for a few-year-old model.


09 January, 2025 02:50PM

Scarlett Gately Moore: KDE: Snaps 24.12.1 Release, Kubuntu Plasma 5.27.12 Call for testers

I have released more core24 snaps to --edge for your testing pleasure. If you find any bugs please report them at bugs.kde.org and assign them to me. Thanks!

Kdenlive, our amazing video editor!

Haruna is a video player that also supports YouTube!

Kdevelop is our feature-rich development IDE.

KDE applications 24.12.1 release https://kde.org/announcements/gear/24.12.1/

New qt6 ports

  • lokalize
  • isoimagewriter
  • parley
  • kteatime
  • ghostwriter
  • ktorrent
  • kanagram
  • marble

Kubuntu:

We have Plasma 5.27.12 Bugfix release in staging https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/staging-plasma for noble updates, please test! Do NOT do this on a production system. Thanks!

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

09 January, 2025 01:24PM

Ubuntu Blog: How we used Flask and 12-factor charms to simplify Canonical.com development

Our latest Canonical website rebrand did not just bring the new Vanilla-based frontend, it also saw the introduction of Python Flask as the preferred web framework. Flask provides a good foundation for building new applications using the 12-factor methodology, effectively increasing our web team’s speed and efficiency in adding new features.

Because our web team works on both the website and the product’s UI, we decided to take operational improvements one step further by providing a way to stand up a fully integrated and observable Kubernetes environment for their Flask apps in a few simple commands. Our solution, called 12-factor charms, provides an easy-to-use abstraction layer over existing Canonical products and is aimed at application developers who create applications based on the 12-factor methodology.

As part of this blog post we will introduce 12-factor charms in the context of Flask; however, the same also applies to 12-factor applications that are built using the following frameworks:

Let’s explore how you can use 12-factor charms to get a fully integrated and observable Kubernetes environment for Flask apps in a few simple commands.

The foundations: juju, charms and rocks

The 12-factor charm solution uses and combines capabilities in the following Canonical products:

  • Juju is an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale, on any infrastructure using charms.
  • A charm is an operator – business logic encapsulated in reusable software packages that automate every aspect of an application’s life.
  • Charmcraft is a CLI tool that makes it easy and quick to initialise, package, and publish Kubernetes and machine charms.
  • Rockcraft is a tool to create rocks – a new generation of secure, stable and OCI-compliant container images, based on Ubuntu.

More specifically a Rockcraft framework (conceptually similar to a snap extension) is initially used to facilitate the creation of a well structured, minimal and hardened container image, called a rock. A Charmcraft profile can then be leveraged to add a software operator (charm) around the aforementioned container image.

Encapsulating the original Flask application in a charm allows it to benefit from the entire charm ecosystem, meaning that the app can be connected to a database, e.g. an HA Postgres, observed through a Grafana based observability stack, get ingress and much more.

Creating a complete development environment in a few commands

Rockcraft and Charmcraft now natively support Flask. Production ready OCI images for Flask applications can be created using Rockcraft with 3 easy commands that need to be run in the root directory of the Flask application:

sudo snap install rockcraft --classic
rockcraft init --profile flask-framework
rockcraft pack

The full getting started tutorial for creating an OCI image for a Flask application takes you from a plain Ubuntu installation to a production ready OCI image for your Flask application. Using this tooling, the web development team was able to streamline the creation of their OCI images, speeding up their development and reducing maintenance effort.
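
For context, rockcraft init --profile flask-framework generates a rockcraft.yaml that you then fill in. A minimal sketch might look like the following; the name, version, summary, and description are illustrative, and the flask-framework extension also expects a requirements.txt listing Flask in the project root:

```yaml
# rockcraft.yaml -- illustrative values
name: my-flask-app
base: ubuntu@22.04          # build on top of an Ubuntu 22.04 base
version: "0.1"
summary: An example Flask rock
description: |
  A minimal OCI image for a Flask application, built with the
  flask-framework Rockcraft extension.
platforms:
  amd64:

extensions:
  - flask-framework
```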

Charmcraft now also natively supports Flask. The web development team is using it to create charms that automate every aspect of their Flask application’s life, including integrating with a database, preparing the tables in the database, integrating with observability and exposing the application using ingress. From the root directory of the Flask application, the Flask application charm can be created using 4 easy commands:

mkdir charm && cd charm
sudo snap install charmcraft --classic
charmcraft init --profile flask-framework
charmcraft pack

The full getting started tutorial for creating a charm for a Flask application takes you from a plain Ubuntu installation to deploying the Flask application on Kubernetes, exposing it using ingress and integrating it with a database.
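
Once packed, the charm can be deployed to a Kubernetes-backed Juju model and related to a database. The sketch below is illustrative rather than taken from the tutorial: the model name, charm filename, application name, and image reference are placeholders, and it assumes a Juju 3.x client (where juju integrate replaced juju relate):

```shell
# Deploy the packed charm together with its OCI image (names are placeholders)
juju add-model flask-dev
juju deploy ./my-flask-app_ubuntu-22.04-amd64.charm my-flask-app \
  --resource flask-app-image=ghcr.io/example/my-flask-app:0.1

# Deploy PostgreSQL on Kubernetes and relate it to the Flask app
juju deploy postgresql-k8s --trust
juju integrate my-flask-app postgresql-k8s
```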

Learn more about Juju, Rockcraft and 12-factor charms

09 January, 2025 07:21AM

hackergotchi for Tails

Tails

Tails 6.11

Critical security fixes

The vulnerabilities described below were identified during an external security audit by Radically Open Security and disclosed responsibly to our team. We are not aware of these vulnerabilities having been used against Tails users so far.

These vulnerabilities can only be exploited by a powerful attacker who has already exploited another vulnerability to take control of an application in Tails.

If you want to be extra careful and have used Tails a lot since January 9 without upgrading, we recommend that you do a manual upgrade instead of an automatic upgrade.

  • Prevent an attacker from installing malicious software permanently. (#20701)

    In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit a vulnerability in Tails Upgrader to install a malicious upgrade and permanently take control of your Tails.

    Doing a manual upgrade would erase such malicious software.

  • Prevent an attacker from monitoring online activity. (#20709 and #20702)

    In Tails 6.10 or earlier, an attacker who has already taken control of an application in Tails could then exploit vulnerabilities in other applications that might lead to deanonymization or the monitoring of browsing activity:

    • In Onion Circuits, to get information about Tor circuits and close them.
    • In Unsafe Browser, to connect to the Internet without going through Tor.
    • In Tor Browser, to monitor your browsing activity.
    • In Tor Connection, to reconfigure or block your connection to the Tor network.
  • Prevent an attacker from changing the Persistent Storage settings. (#20710)

New features

Detection of partitioning errors

Sometimes, the partitions on a Tails USB stick get corrupted. This creates errors with the Persistent Storage or during upgrades. Partitions can get corrupted because of broken or counterfeit hardware, software errors, or physically removing the USB stick while Tails is running.

Tails now warns about such partitioning errors earlier. For example, if partitioning errors are detected when there is no Persistent Storage, Tails recommends that you reinstall or use a new USB stick.

Warning in the Welcome Screen: Errors were detected in the partitioning of your Tails USB stick.

Changes and updates

  • Update Tor Browser to 14.0.4.

  • Update Thunderbird to 128.5.0esr.

  • Remove support for hardware wallets in Electrum. Trezor wallets stopped working in Debian 12 (Bookworm), and so in Tails 6.0 or later.

  • Disable GNOME Text Editor from reopening on the last file. (#20704)

  • Add a link to the Tor Connection assistant from the menu of the Tor status icon on the desktop.

  • Make it easier for our team to find useful information in WhisperBack reports.

For more details, read our changelog.

Get Tails 6.11

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 6.0 or later to 6.11.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 6.11 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 6.11 directly:

09 January, 2025 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E331 António Aragão E Os Astrólogos Gastrónomos

To start off the year, we were visited by António Aragão, organizer of the Encontros de Comunidades de Tecnologias Livres (ECTL) and distinguished parrot trainer. We also got to marvel at the wit and reach of the 2024 predictions from the house wizard: Fernão Vaz, Alfageme de Santarém, eater of sirloin-steak-with-truffles dinners, which will be paid for by Diogo and André Paula.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

09 January, 2025 12:00AM

January 08, 2025

hackergotchi for ARMBIAN

ARMBIAN

Happy New Year 2025! Armbian Weekly Highlights

Happy New Year from the Armbian Team!

As we step into this exciting new year, we at Armbian would like to extend our warmest wishes for a Happy New Year to our amazing community! Thank you for your continued support and contributions that make Armbian what it is today. Together, we look forward to achieving great things and exploring new possibilities in Linux and open-source innovation.


Armbian Updates Summary Week 1 (v25.2.0-trunk.274): What’s Changed in the Past Week

Armbian continues to push the boundaries of Linux performance and hardware compatibility with its rolling release updates. The latest changes in version v25.2.0-trunk.274 bring exciting enhancements in camera module support, board features, and software tools, all while maintaining a focus on stability and innovation.

Key Highlights

Camera Module Support

Armbian now supports the Raspberry Pi Camera Modules on certain Rockchip hardware, expanding its capabilities for embedded systems and IoT applications:

  • OV5647 (Camera Module 1): Enables integration with various Raspberry Pi projects and beyond.
  • IMX219 (Camera Module 2): Advanced camera support for high-resolution capture in compact setups.

Board Enhancements

  • NanoPi M6: SPI NOR flash overlay added, providing improved boot options and storage flexibility for this versatile board.
  • OnePlus Kebab: Type-C support introduced, enhancing connectivity for modern peripherals and high-speed data transfers.
  • FriendlyElec CM3588: HDMI RX configuration added, optimizing display performance and making it easier to connect external monitors.

Software Enhancements

  • Sandboxed and Containerized SSH Server: A cutting-edge security measure, this new feature isolates SSH server operations to enhance safety and protect against vulnerabilities.
  • Wireguard VPN Server: Enables quick and easy VPN configuration, offering secure remote access to your systems.
  • Grafana Monitoring Dashboards: Provides detailed insights into performance and resource usage, allowing users to visualize and monitor their setups effectively.
  • OctoPrint: Streamlines 3D printing management, offering a robust web-based interface for monitoring and controlling 3D printers.

These updates reflect Armbian’s dedication to delivering user-friendly, powerful tools for a seamless experience.

U-Boot Improvements

Refinements in U-Boot build processes ensure better stability and clearer configurations for developers:

  • Addressed whitespace and newlines in UBOOT_TARGET_MAP.
  • Introduced clean builds for each U-Boot target, ensuring consistent and error-free deployment.

Kernel Updates

  • Switched the full kernel config to defconfig format
  • Trusted Firmware: Updated Arm Trusted Firmware to version 2.12, improving security and compatibility for Rockchip64 platforms.

Contributors

Armbian’s progress is made possible by the contributions of its dedicated developers. A big thank you to @rpardini, @bmx666, @amazingfate, @igorpecovnik, @efectn, and @timsurber for their hard work!

Looking Ahead

The Armbian team is excited about the year ahead, aiming to introduce even more innovative features and improvements. With the support of a growing community, Armbian will continue to lead in providing a robust Linux experience for single-board computers. Together, let’s achieve new milestones and explore endless possibilities!

Stay updated by visiting the Armbian releases page.

 

The post Happy New Year 2025! Armbian Weekly Highlights first appeared on Armbian.

08 January, 2025 10:21PM by Didier Joomun

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Web Engineering: Hack Week 2024

At Canonical, the work of our teams is strongly embedded in the open source principles and philosophy. We believe open source software will become the most prevalent method of software development and delivery in the future. Being open source is about more than making the source of your software available; it’s also about contributing to other open source projects and helping to build communities. With this in mind, the web engineering team recently had the privilege of dedicating a week solely to contributing to external open source projects. We do this every year.

Anyone who has tried to contribute to open-source projects will know that the journey is full of challenges, from finding an issue with enough context to understand the expected change, to getting the software running locally to test changes and much more. We would like to draw more attention to these issues and find sound practices to iron these out at an industry level. This is an ambitious goal, so first, we need to walk the walk.

Here is a list of contributions performed by the team during the week:

Total contributions: 40

TheAlgorithms/Python #12369
nurikk/zigbee2mqtt-frontend #2230
redphx/poc-tuya-ble-fingerbot #10
DefinitelyTyped/DefinitelyTyped #71144
react-bootstrap #6842
react-restart/ui #110
react-restart/ui #111
react-bootstrap #6836
npm-check-updates #1476
grafana #96270
mattermost/mattermost #29227
jasonacox/tinytuya #558
mattermost #29229
adlerweb/J7-C_UC96_BLE_Logger #1
mattermost #29233
react-bootstrap #6855
react-bootstrap #6856
react-bootstrap #6857
the-djmaze/snappymail #1843
mattermost/docs #7591
mattermost #19810
python/mypy #18152
react-bootstrap #6859
arc53/DocsGPT #1435
wooorm/franc #121
recharts/recharts #5244
pypi/inspector #177
mattermost/mattermost #29251
danilowoz/create-content-loader #326
fastapi/sqlmodel #1211
mattermost/mattermost #29260
mattermost/mattermost #29269
mattermost/mattermost #29272
mattermost/mattermost #29276
Leaflet/Leaflet #9528
abhishektripathi66/DSA #40
orval #1705
react-bootstrap #6862
lxd #14035
canonical/canonical.com #1426

The goal of the initiative is to better understand what makes an open source project accessible to external contributors. It is a great opportunity to experience projects big and small, with different cultural personalities, to collect tips and tricks on how to run good projects, and to apply them to our own. It is also a source of pride for the team, encouraging us to keep helping these projects going forward.

Open source encourages compatibility with standards, making it harder to lock users in. This is why we love the freedom open source offers. Open source software allows for sharing knowledge, gaining knowledge, and practising. It promotes transparency in data collection and software systems. Freedom, therefore, is the gift that keeps on giving.

Please take a look at our open-source projects and reach out to us via the issues if anything is unclear.

08 January, 2025 01:50PM

hackergotchi for Deepin

Deepin

deepin Community Monthly Report | The 14th DDUC Successfully Held in Wuhan, Multiple New Developments in Community SIGs

Overview of deepin community data for December

deepin Community products

1. deepin 23 Product Updates and User Feedback

deepin 23 Update: In December 2024, deepin 23 received three beta updates. The main focus this month was fixing security vulnerabilities to continuously improve system security.

deepin Home: In December 2024, deepin Home received a total of 176 bug reports and feature requests from users. Among them were 128 bug reports, 4 of which have been fixed, with 13 confirmed issues pending resolution; there were 48 feature requests, 8 of which have been completed, and 13 have been ...Read more

08 January, 2025 02:50AM by aida

January 07, 2025

hackergotchi for Volumio

Volumio

Introducing CORRD: evolving Volumio into your personal music journey, everywhere

Volumio started because I had one goal in mind: to enjoy my music library with the highest quality possible. Back then, the world was a simpler place. We just listened to files, and streaming was not a thing.

Over the years developing Volumio, and with the help of our beloved community, something new emerged. Most people have more than one streaming account, so depending on what you want to listen to, you have to open more than one app. This creates a fragmented experience.

One thing you seem to love about Volumio, besides the obsessive attention to sound quality upon which we built it, is having your entire library condensed in one place. I think we did a great job of integrating such a fragmented digital music landscape into one coherent experience.

Now, I’ve started to feel that we’ve achieved as much as we can with Volumio’s current technical architecture. As with all good things, an overhaul is needed.

Thus came the vision to turn Volumio into the ultimate music experience.

Why limit Volumio enjoyment to just when you’re at home? I wanted something you can use wherever you are, that adapts to what you’re doing. Something that merges the convenience of recommendation algorithms and radio with the freedom and customizability that is at the core of what Volumio believes in.

That’s what led to the creation of CORRD, a bold step toward making music more accessible and meaningful, wherever you are.

Welcome to CORRD, which is:

  • Unified Music Hub: CORRD brings together all your streaming services, music libraries, and discovery tools into one seamless app.
  • Personalized Music Assistant: Acts as your personal assistant, crafting a musical flow that matches what you’re doing, while helping you discover fresh and interesting music.
  • Versatile and Future-Ready: Designed to adapt to your lifestyle, seamlessly integrating with smart TVs, cars, voice assistants, and more.

CORRD is both an evolution of Volumio and a fresh way to connect with music. It’s a mobile-first, platform-agnostic music aggregator, player, and personal DJ, designed to unify streaming services under one app. By letting you tweak recommendations and discover endless new tracks, CORRD offers control, convenience, and boundless discovery on smart TVs, in cars, through voice assistants, or wherever else music might find you.

We’re starting with something radical because the music industry needs it. Algorithms today often feel like locked boxes, churning out repetitive suggestions. CORRD changes that by letting you personalize and shape your experience, blending the best parts of Volumio with a modern, adaptable foundation. And this is only the beginning—more Volumio features will follow, guided by our loyal community’s feedback.

What you see now is a Minimum Viable Product (MVP), and we need your insights to shape its future.

We’re launching with a private beta for the Volumio community first, because your passion has always inspired our innovation.

If you believe in a better way to enjoy and discover music, and want to help shape it, this is what you have to do to participate:

  • Step 1: Head to CORRD’s website to enroll in the private beta test, download the app and start your journey.
  • Step 2: Join our Discord community to share your feedback and help us shape the future of music.

With your feedback, we’ll refine CORRD and continue evolving the Volumio ecosystem, redefining music listening for everyone, everywhere.

The post Introducing CORRD: evolving Volumio into your personal music journey, everywhere appeared first on Volumio.

07 January, 2025 08:51AM by voladmin

hackergotchi for Deepin

Deepin

deepin Bi-Weekly Technical Report: deepin 25 Focuses on Bug Fixes, Launches x86 Device Compatibility...

The ninth issue of the deepin Biweekly Technical Report has been released; in it we briefly list the progress of the various deepin teams over the past two weeks and outline the general plan for the next two weeks!   DDE (deepin Desktop Environment) Requirements development for deepin 25 has been completed, and the current phase is focused on bug fixes for deepin 23 and deepin 25. Progress: Fine-tune the visual effects of DTK's menu and scrollbar components. Continuously fix bugs in DDE components such as the taskbar and notifications based on community feedback. Adjust the technical preview interface to add a restart prompt ...Read more

07 January, 2025 06:16AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 873

Welcome to the Ubuntu Weekly Newsletter, Issue 873 for the week of December 29, 2024 – January 4, 2025. The full version of this issue is available here.

In this issue we cover:

  • Remembering and thanking Steve Langasek
  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • UbuCon Asia 2026 – Call for bids!
  • LoCo Events
  • Ubuntu Studio: Upgrades from 22.04 LTS to 24.04 LTS are NOT WORKING
  • Other Community News
  • In the Blogosphere
  • Other Articles of Interest
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


07 January, 2025 12:43AM

January 06, 2025

The Fridge: Remembering and thanking Steve Langasek

The following is a post from Mark Shuttleworth on the Ubuntu Discourse instance. For more information, you can find the full post here.


Steve Langasek is one of my heroes in open source and in life. He epitomises everything that is great about the movement – depth of technical insight, generosity of spirit, rigour in design, care for those who put their trust in us, curiosity about the future and a willingness to do hard work in the face of uncertainty to bring that future into being.

Our biggest choices are all about where we spend our time, who we spend it with, and whose lives we hope to improve in the process. Steve shone with a clarity of purpose that motivated many others to build the very best open source platforms they could dream about. He beautifully merged friendships and professional relationships, commercial and community goals, leadership and service. He touched thousands of people’s lives directly, and his work improves the lives of millions.

In recent years he battled illness with stoicism, humour and science. Through it all he remained active and engaged in our community and our exploration of the future. Even in terribly difficult moments I saw grace, precision and care in his actions and his priorities.

Steve passed away at the dawn of 2025. His time was short but remarkable. He will forever remain an inspiration. Judging by the outpouring of feelings this week, he is equally missed and mourned by colleagues and friends across the open source landscape, in particular in Ubuntu and Debian where he was a great mind, mentor and conscience.

It has been a singular honour to share these years and dreams. Thank you Steve. I will not forget, nor waver.

06 January, 2025 05:42PM

hackergotchi for Deepin

Deepin

(中文) How to Debug Issues Using Wine Logs

Sorry, this entry is only available in 中文.

06 January, 2025 03:05AM by aida

January 05, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Incus in 2024 and beyond!

A lot has happened in 2024 for the Incus project, so I thought it’d be interesting to see where we started, what we did and where we ended up after that very busy year, then look forward to what’s next in 2025!

Where we started

We began 2024 right on the heels of the Incus 0.4 release at the end of December 2023.

This is notable as Incus 0.4 was the last Incus release that could directly import changes from the LXD project due to Canonical’s decision to re-license LXD as AGPLv3.

This means that effectively everything that made it into Incus in 2024 originated directly from the Incus community. There is one small exception: LXD 5.0 LTS still saw some activity, and as that branch is still under Apache 2.0, we were able to import a few commits (83 to be exact) from it.

What we did

Some numbers

  • Releases
    • 12 feature releases (monthly cadence)
    • 1 LTS release (6.0.0)
    • 3 LTS bugfix releases (6.0.1, 6.0.2, 6.0.3)
  • Changes
    • 2317 commits
    • 751 pull requests
    • 124 individual contributors
  • 110394 people tried Incus online
  • 7375 posts were published on our forum
  • 396 GitHub issues were closed
  • 4700 hours of Incus videos were watched on YouTube
  • 670194 container and VM images downloaded

Our first LTS release

Incus 6.0 LTS was released at the beginning of April, alongside LXC and LXCFS 6.0 LTS, all of which get 5 years of security support.

That was a huge milestone for Incus as it now allowed production users who don’t feel like going through an update cycle every month to switch over to Incus 6.0 LTS and have a stable production release for the years to come.

It also provides a much easier packaging target for Linux distributions as the monthly releases can be tricky to follow, especially when they introduce new dependencies.

Today, Incus 6.0 LTS represents around 50% of the Incus user base.

Notable feature additions

It’s difficult to come up with a list of the most notable new features because so much happened all over the place and deciding what’s notable ends up being very personal and subjective, depending on one’s usage of Incus, but here are a few!

  • Application container support (OCI), gives us the ability to natively run Docker containers on Incus
  • Clustered LVM storage backend, adds support for iSCSI/NVMEoTCP/FC storage in clusters
  • Network integrations (OVN inter-connect), allows for cross-cluster overlay networking
  • Automatic cluster re-balancing, simplifies operation of large clusters

Performance improvements

As more and more users run very large Incus systems, a number of performance issues were noticed and have been fixed.

An early one was related to how Incus handled OVN. The old implementation relied on the OVN command line tools to drive OVN database changes. This is incredibly inefficient as each call to those tools would require new TLS handshakes with all database servers, tracking down the leader, fetching a new copy of the database, performing a trivial operation and exiting. The new implementation uses a native database client directly in Incus which maintains a constant connection with the database, gets notified of changes and can instantly perform any needed configuration changes.
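
The cost difference can be caricatured with a toy model (plain Python, deliberately not Incus's actual Go code or the OVSDB protocol): count how many expensive connection setups each approach performs for the same batch of changes.

```python
class OvnDatabase:
    """Toy stand-in for an OVN database server; counts expensive setup work."""
    def __init__(self):
        self.handshakes = 0

    def connect(self):
        # Stands in for: TLS handshakes, leader lookup, database snapshot.
        self.handshakes += 1
        return self

    def apply(self, change):
        return f"applied {change}"


db = OvnDatabase()

# Old approach: shell out to the OVN CLI tools per change,
# paying the full connection cost every single time.
def cli_style(changes):
    return [db.connect().apply(c) for c in changes]

# New approach: one long-lived native client connection for all changes.
def native_client_style(changes):
    conn = db.connect()
    return [conn.apply(c) for c in changes]


cli_style(["a", "b", "c"])
before = db.handshakes                  # 3 handshakes for 3 changes
native_client_style(["a", "b", "c"])
assert db.handshakes == before + 1      # only 1 for the persistent client
```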

Then there were two or three different cases of database performance issues.
Two of them were caused by our auto-generated database helpers, which weren’t very smart about handling profiles, effectively causing performance to get exponentially worse as more profiles were present in the database. Addressing this resulted in dramatic performance improvements for users operating with hundreds or even thousands of profiles.
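
The profile problem reads like the classic N+1 query pattern; here is a toy sqlite3 sketch (invented schema, not Incus's real database) contrasting per-profile lookups with a single join:

```python
import sqlite3

# Toy schema standing in for instances and profiles.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE profiles (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE instances_profiles (instance_id INT, profile_id INT);
""")
db.executemany("INSERT INTO profiles VALUES (?, ?)",
               [(i, f"profile-{i}") for i in range(1000)])
db.executemany("INSERT INTO instances_profiles VALUES (?, ?)",
               [(1, i) for i in range(1000)])

# Slow pattern: one query per attached profile (N+1 queries total).
def profiles_slow(instance_id):
    ids = [r[0] for r in db.execute(
        "SELECT profile_id FROM instances_profiles WHERE instance_id = ?",
        (instance_id,))]
    return [db.execute("SELECT name FROM profiles WHERE id = ?",
                       (pid,)).fetchone()[0] for pid in ids]

# Fast pattern: a single joined query, however many profiles exist.
def profiles_fast(instance_id):
    return [r[0] for r in db.execute(
        "SELECT p.name FROM profiles p "
        "JOIN instances_profiles ip ON ip.profile_id = p.id "
        "WHERE ip.instance_id = ?", (instance_id,))]

assert sorted(profiles_slow(1)) == sorted(profiles_fast(1))
```

Both return the same names, but the slow variant issues 1001 queries where the fast one issues a single query.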

Another was related to loading instances on Incus startup, specifically loading the device definitions to check whether anything needed to be done on startup. This logic was always hitting configuration validation, which can be costly; in this case, so costly that Incus would fail to start up within the time allotted by the init system (10 minutes). After some fixes to that logic, the affected system, which was running over 2000 virtual machines (on a single server) at the time, can now process all running VMs in just 10-15s.

On top of those issues, special attention was also put into optimizing resource usage on large systems, especially systems with multiple NUMA nodes, supporting basic NUMA balancing of virtual machines as well as selecting the best GPU devices based on NUMA cost.

Distribution integration

Back at the beginning of 2024, Incus was only available through my own packages for Debian or Ubuntu, or through native packages on Gentoo and NixOS.

This has changed considerably through 2024 with Incus now being readily available on:

  • Alpine Linux
  • Arch Linux
  • Chimera Linux
  • Debian
  • Fedora
  • Gentoo
  • NixOS
  • openSUSE
  • Rocky Linux
  • Ubuntu
  • Void Linux

Additionally, it’s available as a Docker container that runs on most other platforms, as well as on macOS through Colima. The client tool itself is available everywhere that Go supports.

Deployment tooling

Terraform/OpenTofu provider

The Incus Terraform/OpenTofu provider has seen quite a lot of activity this year.

We’re slowly headed towards a 1.0 release for it, basically ensuring that it can drive every single Incus feature and that its resources are defined in a clear and consistent way.

There is only one issue left in the 1.0 release milestone, with an open pull request for it, so we are very close to where we want to be on feature coverage. With a few more bugfixes here and there, we should have that 1.0 release out in the coming weeks or month!

incus-deploy

incus-deploy was introduced in February and is basically a collection of Ansible playbooks and Terraform configurations that allows for easy deployment of Incus, whether standalone or clustered and whether for testing/development or production.

This is commonly used by the Incus team to quickly deploy test clusters, complete with Ceph, OVN, clustered LVM, … all in a very reproducible way.

Incus OS

While incus-deploy provides an automated way to deploy Incus on top of traditional Linux servers, Incus OS is working on providing a solution for those who don’t want to have to deal with maintaining traditional Linux servers.

This is a fully immutable OS image, kept as minimal as possible and solely focused on running Incus.

It heavily relies on systemd tooling to provide a secure environment, starting from SecureBoot signing, to having every step of the boot be TPM measured, to having storage encrypted using that TPM state and the entire read-only disk image being verified through dm-verity.

The end result is an extremely secure and locked down environment which is designed for just one thing, running Incus!

We’re getting close to having something ready for early adopters with automated builds and update logic now working, but it will be a few more weeks before it’s safe/useful to install on a server.

Where we ended up

Over that year, Incus really turned into a full-fledged Open Source project and community.

We have kept on with our release cadence, pushing out a new feature release every month while very actively backporting bugfixes and smaller improvements to our LTS release.

Distributions have done a great job at getting Incus packaged, making it natively available just about everywhere (we’re still waiting on solid EPEL packaging).

Our supporting projects like terraform-provider-incus, incus-deploy and incus-os are making it easier than ever to deploy and operate large scale Incus clusters as well as providing a simpler, more repeatable way of running Incus.

2024 was a very very good year for Incus!

What’s coming in 2025

Looking ahead, 2025 has the potential to be an even better year for us!

On the Incus front, there is no single huge feature to look forward to, just continual improvement, whether for containers, VMs, networking or clustering. We have a lot of small new features and polishing in mind which will help fill in some of the current gaps and provide a nice and consistent experience.

But it’s on the supporting projects that a lot of the potential now rests.

This will hopefully be the year of Incus OS, making installing Incus as easy as writing a file to a USB stick, booting a machine from it and accessing it over the network. Want to make a cluster? No problem: just boot a few more machines into Incus OS and join them together as a cluster!

But we’re also going to be expanding incus-deploy. It’s currently doing a good job at deploying Incus on Ubuntu servers with Ansible but we want to expand that to also cover Debian and some of the RHEL derivatives so we can cover the majority of our current production users with it. On top of that, we want to also have incus-deploy handle setting up the common support services used by Incus clusters, typically OpenFGA, Keycloak, Grafana, Prometheus and Loki.

We also want to improve our testing and development lab, add more systems, add the ability to test on more architectures and easily test more complex features, whether it’s 100Gb/s+ networking with full hardware offload or confidential computing features like AMD SEV.

Sovereign Tech Fund

Thankfully, a lot of that is going to be made a whole lot easier thanks to funding from the Sovereign Tech Fund, which is going to be supporting a variety of Incus-related projects, especially focusing on the kind of work that’s not particularly exciting but is very much critical to the proper running of a project like ours.

This includes a big refresh of our testing and development lab, work on our LTS releases, new security features through the stack, improved support for other Linux distributions and OSes across our projects and more!

I for one am very excited about 2025!

05 January, 2025 03:48AM

January 04, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky 7.6

The 6th update of Sparky 7 – 7.6 – is out. It is a quarterly updated point release of Sparky 7 “Orion Belt” of the stable line. Sparky 7 is based on and fully compatible with Debian 12 “Bookworm”. Due to a GPG key issue in the Sparky testing 2024.11 ISO images, an additional 2024.12 release was made in December, so the stable ISO images have been moved to January 2025. Changes: – all packages…

Source

04 January, 2025 06:30PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: KDE: Snap hotfixes and updates

Fixed okular pdf printing https://bugs.kde.org/show_bug.cgi?id=498065

Fixed kwave recording https://bugs.kde.org/show_bug.cgi?id=442085. Please run sudo snap connect kwave:audio-record :audio-record until auto-connect gets approved here: https://forum.snapcraft.io/t/kde-auto-connect-our-two-recording-apps/44419

New qt6 snaps in --edge until the 24.12.1 release

  • minuet
  • ksystemlog
  • kwordquiz
  • lokalize
  • ksirk
  • ksnakeduel
  • kturtle

I have begun the process of moving to core24, currently in --edge until the 24.12.1 release.

Some major improvements come with core24!

Tokodon is our wonderful Mastodon client

I hate asking but I am unemployable with this broken arm fiasco and 6 hours a day hospital runs for treatment. If you could spare anything it would be appreciated! https://gofund.me/573cc38e

04 January, 2025 01:36PM

January 03, 2025

Ubuntu Studio: Upgrades from 22.04 LTS to 24.04 LTS are NOT WORKING


The Ubuntu Studio team has investigated a conflict that causes upgrades from Ubuntu Studio 22.04 to 24.04 to fail. This has been confirmed to be reproducible.

We are currently following multiple bug reports (Launchpad bugs 2078639, 2078608, and 2079817) with most of them being duplicates of the first in that list.

If you have attempted to upgrade and ran into this problem, feel free to click on the first link in that list and click on “Does this bug affect you?”. Filing additional bug reports is unnecessary.

In most flavors of Ubuntu 24.04 LTS, the idea was to have PipeWire completely replace PulseAudio as the primary sound server, with PipeWire's installation forced. Ubuntu Studio, however, went with a different approach: PipeWire is the default, but it can be replaced by PulseAudio if the user wishes to switch back to the classic, albeit unsupported, setup. This meant PipeWire had to be a “soft” dependency rather than a “hard” one so that our metapackages could uninstall it without breaking the entire desktop metapackage.
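
In Debian packaging terms, the “soft” versus “hard” distinction maps to Recommends versus Depends: a Recommends is installed by default but can later be removed without apt wanting to remove the metapackage itself. A hypothetical debian/control stanza (invented names, not Ubuntu Studio's actual metapackage) illustrating the shape:

```
Package: example-studio-audio
Architecture: all
Depends: example-studio-core
Recommends: pipewire-audio
Description: example audio metapackage
 Pulling PipeWire in via Recommends lets a user swap it for
 PulseAudio without apt removing the whole metapackage; listing
 it in Depends would make removal impossible without breakage.
```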

However, this also made it so that the upgrade resolver (ubuntu-release-upgrader) would get confused when calculating how to perform the upgrade. This is where we are hitting the problem.

Currently, we are working with the Ubuntu Foundations Team at Canonical on how to have ubuntu-release-upgrader force an installation of PipeWire for Ubuntu Studio without Ubuntu Studio requiring a hard dependency on PipeWire.

Unfortunately, if we cannot resolve this issue, we may have to do one of two things:

  • Not support direct, in-place upgrades of Ubuntu Studio 22.04 LTS to 24.04 LTS and remove the upgrade notifier.
  • Create a hard dependency on PipeWire, effectively removing the ability to switch back to the classic PulseAudio/JACK bridged setup with Studio Controls.

We don’t want either of these solutions, which is why we are hoping we can find a solution with ubuntu-release-upgrader soon.

03 January, 2025 11:19PM

January 02, 2025

hackergotchi for GreenboneOS

GreenboneOS

Greenbone Audits CIS Google Chrome Benchmarks

Web browsers are a primary gateway to business, and consequently they are also a primary gateway for cyber attacks. Malware targeting browsers could gain direct unauthorized access to a target’s network and data, or socially engineer victims into providing sensitive information that gives the attacker unauthorized access, such as account credentials. In 2024, the major browsers (Chrome, Firefox and Safari) accounted for 59 Critical severity (CVSS3 ≥ 9) and 256 High severity (CVSS3 between 7.0 and 8.9) vulnerabilities. Ten CVEs (Common Vulnerabilities and Exposures) across the three browsers were added to the KEV (Known Exploited Vulnerabilities) catalog of CISA (Cybersecurity & Infrastructure Security Agency). Browser security should therefore be top-of-mind for security teams.

In light of this, we are proud to announce the addition of CIS Google Chrome Benchmark v3.0.0 Level 1 auditing to our list of compliance capabilities. This latest feature allows our Enterprise feed subscribers to verify their Google Chrome configurations against the industry-leading compliance framework of the Center for Internet Security (CIS). The new Google Chrome benchmark tests will sit among our other CIS controls in critical cybersecurity areas such as Apache, IIS, NGINX, MongoDB, Oracle, PostgreSQL, Windows and Linux [1] [2].

CIS Google Chrome Benchmark for Windows

The CIS Google Chrome Benchmark v3.0.0 Level 1 is now available in the Greenbone Enterprise Feed. It establishes a hardened configuration for the Chrome browser. For Windows, implementing the controls involves setting Windows registry keys to define Chrome’s security configuration. Continuous attestation is important because, if the configuration is modified at the user level, Chrome becomes more vulnerable to data leakage, social engineering attacks and other attack vectors.
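
As a concrete illustration, Chrome reads enterprise policy on Windows from the HKLM\SOFTWARE\Policies\Google\Chrome registry key. The policy names below are real Chrome policies that correspond to some of the controls discussed later, but the exact keys and values mandated by the benchmark should be taken from the CIS document itself, not from this sketch:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
; Enforce Safe Browsing (1 = standard protection)
"SafeBrowsingProtectionLevel"=dword:00000001
; Do not let users click through SSL/TLS certificate errors
"SSLErrorOverrideAllowed"=dword:00000000
; Keep Chrome's DNS interception checks enabled
"DNSInterceptionChecksEnabled"=dword:00000001
```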

Our Enterprise vulnerability feed uses compliance policies to run tests on target endpoints, verifying each requirement in the CIS benchmark through one or more dedicated vulnerability tests. These tests are grouped into scan configurations which can be used to create scan tasks that access groups of target systems to verify their security posture. When aligning with internal risk requirements or mandatory government policies, Greenbone has you covered.

The Importance of Browser Security

Much of the critical information flowing through the average organization is transmitted through the browser. The rise of a remote workforce and cloud-based web-applications means that web browsers are a primary interface for business activities. Not surprisingly, in the past few years, Internet browsers have been a hotbed for exploitation. National cybersecurity agencies such as Germany’s BSI [3] [4], CISA [5] [6], and the Canadian Centre for Cyber Security [7] have all released advisories for addressing the risks posed by Internet browsers.

Browsers can be exploited via technical vulnerabilities and misconfigurations that could lead to remote code execution, theft of sensitive data and account takeover, but they are also a conduit for social engineering attacks. Browser security must be addressed by implementing a hardened security profile, continuously attesting it, and regularly applying updates to combat any recently discovered vulnerabilities. Greenbone is able to detect known vulnerabilities for published CVEs in all major browsers, and now, with our latest CIS Google Chrome Benchmark certification, we can attest industry-standard browser compliance.

How Does the CIS Google Chrome Benchmark Improve Browser Security?

Every CIS Benchmark is developed through a consensus review process that involves a global community of subject matter experts from diverse fields such as consulting, software development, auditing, compliance, security research, operations, government, and legal. This collaborative process is meant to ensure that the benchmarks are practical and data-driven and reflect real-world expertise. As such, CIS Benchmarks serve as a vital part of a robust cybersecurity program.

In general, CIS Benchmarks focus on secure technical configuration settings and should be used alongside essential cyber hygiene practices, such as monitoring and promptly patching vulnerabilities in operating systems, applications and libraries.

The CIS Google Chrome Benchmark defines security controls such as:

  • No domains can bypass scanning for dangerous resources such as phishing content and malware.
  • Strict verification of SSL/TLS certificates issued by websites.
  • Reducing Chrome’s overall attack surface by ensuring the latest updates are automatically applied periodically.
  • Chrome is configured to detect DNS interception which could potentially allow DNS hijacking.
  • Chrome and extensions cannot interact with other third party software.
  • Websites and browser extensions cannot abuse connections with media, the local file system or external devices such as Bluetooth, USB or media casting devices.
  • Only extensions from the Google Chrome Web Store can be installed.
  • All processes forked from the main Chrome process are stopped once the Chrome application has been closed.
  • SafeSites content filtering blocks links to adult content from search results.
  • Prevent importing insecure data such as auto-fill form data, default homepage or other configuration settings.
  • Ensuring that critical warnings cannot be suppressed.

Greenbone Is a CIS Consortium Member

As a member of the CIS consortium, Greenbone continues to enhance its CIS Benchmark scan configurations. All our CIS Benchmarks policies are aligned with CIS hardening guidelines and certified by CIS, ensuring maximum security for system audits. Also, Greenbone has added a new compliance view to the Greenbone Security Assistant (GSA) web-interface, streamlining the process for organizations seeking to remove security gaps from their infrastructure to prevent security breaches.

Summary

CIS Controls are critical for safeguarding systems and data by providing clear, actionable guidance on secure configurations. The CIS Google Chrome Benchmark is especially vital at the enterprise level, where browsers handle many forms of sensitive data. It’s exciting to announce that Greenbone is expanding its industry-leading vulnerability detection capabilities with a new compliance scan: the CIS Google Chrome Benchmark v3.0.0 Level 1. With this certification, Greenbone continues to strengthen its position as a trusted ally in proactive cybersecurity. This latest feature reflects our dedication to advancing IT security and protecting against evolving cyber threats.

02 January, 2025 08:15AM by Greenbone AG

hackergotchi for Deepin

Deepin

2025 Happy New Year! deepin wishes you all the best!

Related reading: Click to support the deepin Community. Content source: deepin community; reprinted with attribution.

02 January, 2025 07:22AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Colin Watson: Free software activity in December 2024

Most of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via Liberapay (thanks!).

OpenSSH

I issued a bookworm update with a number of fixes that had accumulated over the last year, especially fixing GSS-API key exchange which was quite broken in bookworm.

base-passwd

A few months ago, the adduser maintainer started a discussion with me (as the base-passwd maintainer) and the shadow maintainer about bringing all three source packages under one team, since they often need to cooperate on things like user and group names. I agreed, but hadn’t got round to doing anything about it until recently. I’ve now officially moved it under team maintenance.

debconf

Gioele Barabucci has been working on eliminating duplicated code between debconf and cdebconf, ultimately with the goal of migrating to cdebconf (which I’m not sure I’m convinced of as a goal, but if we can make improvements to both packages as part of working towards it then there’s no harm in that). I finally got round to reviewing and merging confmodule changes in each of debconf and cdebconf. This caused an installer regression due to a weirdness in cdebconf-udeb’s packaging, which I fixed - sorry about that!

I’ve also been dealing with a few patch submissions that had been in my queue for a long time, but more on that next month if all goes well.

CI issues

I noticed and fixed a problem with Restrictions: needs-sudo in autopkgtest.
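
For context, needs-sudo is one of the restriction keywords a package declares in its debian/tests/control; as I understand the spec, the runner then executes the test as an ordinary user who is granted passwordless sudo. A hypothetical stanza (test name invented for illustration):

```
Tests: smoke-as-admin
Restrictions: needs-sudo
Depends: @
```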

I fixed broken aptly images in the Salsa CI pipeline.

Python team

Last month, I mentioned some progress on sorting out the multipart vs. python-multipart name conflict in Debian (#1085728), and said that I thought we’d be able to finish it soon. I was right! We got it all done this month:

The Python 3.13 transition continues, and last month we were able to add it to the supported Python versions in testing. (The next step will be to make it the default.) I fixed lots of problems in aid of this, including:

Sphinx 8.0 removed some old intersphinx_mapping syntax which turned out to still be in use by many packages in Debian. The fixes for this were individually trivial, but there were a lot of them:
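
The syntax in question lives in each project's conf.py. Below is a sketch of the kind of mechanical fix involved: wrapping old bare-string entries into the named (target, inventory) tuples that current Sphinx requires (check the Sphinx 8.0 changelog for the exact deprecation details):

```python
# Old style: a bare URL string instead of a (target, inventory) tuple.
intersphinx_mapping_old = {
    "python": "https://docs.python.org/3",
}

# Current style: every value is a (target URL, inventory) tuple,
# where None means "fetch objects.inv from the target URL".
intersphinx_mapping = {
    "python": ("https://docs.python.org/3", None),
}

# The trivial per-package fix: wrap any bare string into a tuple.
fixed = {name: (value, None) if isinstance(value, str) else value
         for name, value in intersphinx_mapping_old.items()}
assert fixed == intersphinx_mapping
```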

I found that twisted 24.11.0 broke tests in buildbot and wokkel, and fixed those.

I packaged python-flatdict, needed for a new upstream version of python-semantic-release.

I tracked down a test failure in vdirsyncer (which I’ve been using for some years, but had never previously needed to modify) and contributed a fix upstream.

I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools:

I fixed django-cte to remove a build-dependency on the obsolete python3-nose package.

I added Django 5.1 support to django-polymorphic. (There are a number of other packages that still need work here.)

I fixed various other build/test failures:

I upgraded these packages to new upstream versions:

  • aioftp
  • alot
  • astroid
  • buildbot
  • cloudpickle (fixing a Python 3.13 failure)
  • django-countries
  • django-sass-processor
  • djoser (fixing CVE-2024-21543)
  • ipython
  • jsonpickle
  • lazr.delegates
  • loguru (fixing a Python 3.13 failure)
  • netmiko
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pydoctor
  • pygresql
  • pylint (fixing Python 3.13 failures #1089758 and #1091029)
  • pypandoc (fixing a Python 3.12 warning)
  • python-aiohttp (fixing CVE-2024-52303 and CVE-2024-52304)
  • python-aiohttp-security
  • python-argcomplete
  • python-asyncssh
  • python-click
  • python-cytoolz
  • python-jira (fixing a Python 3.13 failure)
  • python-limits
  • python-line-profiler
  • python-mkdocs
  • python-model-bakery
  • python-pgspecial
  • python-pyramid (fixing CVE-2023-40587)
  • python-pythonjsonlogger
  • python-semantic-release
  • python-utils
  • python-venusian
  • pyupgrade
  • pyzmq
  • quart
  • six
  • sqlparse
  • twisted
  • vcr.py
  • vulture
  • yoyo
  • zope.configuration
  • zope.testrunner

I updated the team’s library style guide to remove material related to Python 2 and early versions of Python 3, which is no longer relevant to any current Python packaging work.

Other Python upstream work

I happened to notice a Twisted upstream issue requesting the removal of the deprecated twisted.internet.defer.returnValue, realized it was still used in many places in Debian, and went on a PR-filing spree informed by codesearch to try to reduce the future impact of such a change on Debian:
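
Some background on why returnValue existed at all: Python 2 generators could not return a value, so Twisted smuggled results out of @inlineCallbacks generators by raising a special exception; Python 3 generators return values natively, which is what made the helper redundant. A pure-Python sketch of the mechanism (the exception class here is invented for illustration; Twisted's internals differ):

```python
class _ReturnValue(Exception):
    """Stand-in for the private exception behind defer.returnValue()."""
    def __init__(self, value):
        self.value = value

def return_value(value):
    # Analogous to defer.returnValue(): abort the generator, carrying a result.
    raise _ReturnValue(value)

def old_style():
    yield "step"          # stand-in for "yield some_deferred"
    return_value(42)      # Python 2 era: no "return 42" inside a generator

def new_style():
    yield "step"
    return 42             # Python 3: generators return via StopIteration

def run(gen):
    """Drive a generator to completion and capture its return value."""
    try:
        while True:
            next(gen)
    except _ReturnValue as r:
        return r.value
    except StopIteration as stop:
        return stop.value

assert run(old_style()) == run(new_style()) == 42
```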

Other small fixes

Santiago Vila has been building the archive with make --shuffle (also see its author’s explanation). I fixed associated bugs in cccc (contributed upstream), groff, and spectemu.

I backported an upstream patch to putty to fix undefined behaviour that affected use of the “small keypad”.

I removed groff’s Recommends: libpaper1 (#1091375, #1091376), since it isn’t currently all that useful and was getting in the way of a transition to libpaper2. I filed an upstream bug suggesting better integration in this area.

02 January, 2025 12:16AM

January 01, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2024/12

The 12th monthly Sparky project and donation report of 2024: – Linux kernel updated up to 6.12.7, 6.6.68-LTS, 6.1.122-LTS & 5.15.175-LTS – updated ‘sparky-apt’ and ‘sparky-keyring’ packages on testing to fix an APT-related issue; you can fix it manually: https://sparkylinux.org/sparky-gpg-key-issue-on-testing/ – Sparky 2024.12 released, which provides the Sparky APT GPG key issue fix…

Source

01 January, 2025 10:40PM by pavroo

December 31, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: KDE: Application snaps 24.12.0 release and more

https://kde.org/announcements/gear/24.12.0

I hope everyone had a wonderful holiday! Your present from me is shiny new application snaps! There are several new qt6 ports in this release. Please visit https://snapcraft.io/store?q=kde

I have also fixed the Krita snap unable-to-open/save bug. Please test --edge!

I am continuing work on core24 support and hope to be done before next release.

I do look forward to 2025! Begone 2024!

If you can help with gas, I still have 3 weeks of treatments to go. Thank you for your continued support.

https://gofund.me/573cc38e

31 December, 2024 02:34PM

hackergotchi for Grml developers

Grml developers

Michael Prokop: My Reading Year 2024

Photo of the books presented here

My reading year 2024, averaging one book per week, was much like 2023. Here is my best-of list of the books I finished in 2024 (the ones I found particularly worth reading or want to recommend; the order matches the photo and does not imply any ranking):

My to-be-read pile for 2025 is already well stocked, but I am still happy to receive reading recommendations. I am equally happy to hear feedback if anyone has read a book because of this post.

31 December, 2024 12:30PM

hackergotchi for Deepin

Deepin

Essential Knowledge Before Starting Wine Development

When it comes to Wine, most seasoned Linux users have heard of it, but when asked to explain exactly what Wine is, many might not be able to articulate it clearly. This article will provide a simple introduction to how Wine works and how to start developing with Wine. So, this is for you if you fall into one of the following three categories of readers:

  • You want to participate in Wine development but don't know where to start.
  • You just want to have a general understanding of how Wine works.
  • You simply want to enjoy using the latest version of Wine.

Hopefully, after ...Read more

31 December, 2024 02:09AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Santiago Zarate: Quick howto for systemd-inhibit

Bit of the why

I often come across the need to keep my system from suspending, either indefinitely or until a process finishes. I can’t recall how I came across systemd-inhibit, but here’s my approach and a bit of motivation.

Motivation

I noticed that GNOME Settings comes with Rygel.

After some fiddling (not much, really), it now starts directly once I log in, and I will be using it instead of a fully fledged Plex or the like. I just want to stream some videos from time to time from my home PC to my iPad :D using VLC.

The Hack

systemd-inhibit --who=foursixnine --why="maybe there be dragons" --mode block \
    bash -c 'while systemctl --user is-active -q rygel.service; do sleep 1h; done'

One can also use waitpid and more.
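The core of the hack is a plain polling loop: the status command goes directly after `while`, so its exit status (not its output) controls the loop. A standalone sketch that runs anywhere, with a temporary marker file standing in for `systemctl --user is-active -q rygel.service`:

```shell
# Keep looping while a condition holds. Here, the existence of a marker
# file is the condition; in the hack above it is the rygel.service state.
marker=$(mktemp)
( sleep 2; rm -f "$marker" ) &     # something clears the condition later
while [ -e "$marker" ]; do sleep 1; done
echo "condition cleared"
```

The same shape works with any condition command: `pgrep` for a process, `test -e` for a lock file, and so on.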

Thank you for coming to my TED talk.

31 December, 2024 12:00AM

December 30, 2024

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 872

Welcome to the Ubuntu Weekly Newsletter, Issue 872 for the week of December 22 – 28, 2024. The full version of this issue is available here.

In this issue we cover:

  • New Ubuntu Membership Board 2024
  • Ubuntu Stats
  • Hot in Support
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • UbuCon Asia 2024
  • LoCo Events
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


30 December, 2024 09:02PM by guiverc

hackergotchi for VyOS

VyOS

VyOS Project December 2024 Update

Hello, Community!

The December update is here! The biggest highlight of this month is the 1.4.1 release, but there was lots of work in the rolling release as well, both from the maintainers team and from our contributor community. One of the biggest news items in the rolling release is that we are ready to update FRR — our routing protocol stack — to the latest 10.2 release. That will allow us to get rid of the legacy OpenNHRP daemon that we use for DMVPN and use FRR's built-in NHRP implementation, among other things.

Apart from that, there are many more bug fixes and improvements made in November and December, especially in QoS, IPoE server, and other areas — read on for details!

30 December, 2024 03:56PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Deepin

Deepin

(Chinese) Written for deepin's 20th Anniversary

Sorry, this entry is only available in Chinese.

30 December, 2024 02:32AM by aida

December 27, 2024

hackergotchi for Pardus

Pardus

In the Heart of Technology in Its New Year

We are celebrating Pardus's 19th birthday with great pride and excitement under the roof of the Heart of Technology, TÜBİTAK BİLGEM.

27 December, 2024 10:23AM by Hace İbrahim Özbal

December 26, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E330 Especial Comilanzenas

We welcomed a collection of special guests (Joana Simões, André Bação, Gato Galão Gagarine) in this episode open to the people, and we talked about everything, without a safety net and without taboos: the Empire did nothing wrong; we invited the Civil Government of Lisbon to receive foul-smelling socks; we questioned the validity of democratic polls in the world of geospatial software; we rambled on about home-automation options and free routers, Lunokhod, a planetarium, walkie-talkies and power stabilizers; and we showed off our Comilanzenas (*Christmas) purchases to make everyone back home jealous. In short: a feast fit for villeins!

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

26 December, 2024 12:00AM

December 24, 2024

hackergotchi for Deepin

Deepin

deepin Bi-Weekly Technical Report: DDE 7.0 to be Released with deepin 25 Preview

The eighth issue of the deepin Biweekly Technical Report has been released. We will briefly list the progress of the various deepin teams over the past two weeks and outline the general plan for the next two weeks!   DDE (deepin Desktop Environment): Requirements development for deepin 25 has been completed, and the current phase is focused on bug fixes for deepin 23 and deepin 25. Progress: further improve the adaptation support for Treeland; fix several Qt and DDE issues discovered during the Wayland adaptation process. Plans: prepare for the release of DDE 7.0 with the deepin 25 Preview.   System Development Progress ...Read more

24 December, 2024 01:59AM by aida

December 23, 2024

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 871

Welcome to the Ubuntu Weekly Newsletter, Issue 871 for the week of December 15 – 21, 2024. The full version of this issue is available here.

In this issue we cover:

  • Vote Extension: 2024 Ubuntu Technical Board
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #376
  • Rocks Public Journal; 2024-12-20
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • UbuCon Latin America, Barranquilla 2024!
  • LoCo Events
  • Advanced Intel® Battlemage GPU features now available for Ubuntu 24.10
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Updates and Security for Ubuntu 20.04, 22.04, 24.04, and 24.10
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić
  • Cristovao Cordeiro – cjdc
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


23 December, 2024 09:15PM by guiverc

hackergotchi for siduction

siduction

Release Notes for siduction 2024.1.0 »Shine on…«

The siduction team is pleased to introduce a new release. It has been some time since we dropped the last one. The reason for the delay is that we were waiting for KDE Plasma 6 to hit Debian Unstable. Although Plasma 6 was released at the end of February 2024, its release in Unstable was delayed by about four months due to the t64 transition in Debian, which you probably remember. But now it’s here, and Plasma 6.2 has been available in Unstable for about two weeks.

The username and password for the Live Session are siducer/live.

What to expect in siduction 2024.1.0?

First, a few words about the new artwork of siduction 2024.1.0 titled “Shine on…”. The famous album by Pink Floyd inspired this design. And if you now have an earworm… you’re welcome 🙂
The wallpaper Nexus by Krystian Zajdel serves as the basis. We have adapted it for siduction, paying tribute to the KDE developers, who have again put tremendous work into the project this year, making it the best desktop environment for us. If you agree, please consider donating or supporting KDE. The year-end donation marathon is currently still ongoing, where you can “adopt” an app.

siduction Editions

The flavors we offer for siduction 2024.1.0 include KDE Plasma 6.2.4.1, LXQt 2.1.0-1, Xfce 4.20, Xorg, and noX. GNOME, MATE, and Cinnamon have not made it again, as there is no maintainer for them within siduction. If you’re interested, please contact us. They may return one day or not. Of course, they are still installable from the repository.

KDE Plasma 6

Plasma 6 has now nearly fully arrived in Unstable and Testing and will be available for Debian 13 »Trixie«. Although Wayland is the default session type in Plasma 6, we have opted for X11 as the default, as Calamares currently does not apply the desired keyboard layout under Wayland. This could have severe consequences for encrypted installations. However, you can switch to Wayland in SDDM at any time. Existing users have likely already upgraded to Plasma 6, and now the current Plasma generation is also available for fresh installations.


Known bugs: Dolphin currently cannot connect via smb://. As a workaround, you can use sftp. The bug lies in the kio-extras package. So if you see an update for that package, and you rely on smb://, please check if it works.

Xfce 4.20

The newly released Xfce 4.20 from December 15 barely made it into Unstable and therefore into our new release. The new Xfce release focuses, among other things, on the initial, still experimental support for Wayland. Also, the Thunar file manager received significant improvements.


Known bugs: Unfortunately, the Wayland implementation is currently so experimental that the session doesn’t start. We have blocked Wayland for now. If this bug is fixed, we will re-enable it in a point release.

LXQt 2.1

LXQt is the lightweight sibling of KDE Plasma. The desktop, developed over ten years, offers a usable but still experimental Wayland session in version 2.1.0.

The released images of siduction 2024.1.0 are a snapshot of Debian Unstable, also known as Sid, from December 23, 2024. They are enriched with useful packages and scripts, a Calamares-based installer, and a customized version of the Linux kernel 6.12.4, while systemd is at version 257.1-3.

siduction-btrfs improvements

When using the Btrfs filesystem with siduction, we enable you to manage your snapshots with the SUSE-developed tool Snapper, which has its own chapter in the siduction manual under System Administration → Btrfs and Snapper.

ChangeLog of changes:

siduction-btrfs (0.2.0) unstable; urgency=medium

  • Added support for systemd-boot.
  • Removed installation dependencies for grub-common and grub-btrfs.
  • Both enable the complete switch from GRUB to systemd-boot.
  • Improved description of snapshots in Snapper.

siduction-btrfs (0.3.0-1) unstable; urgency=medium

  • Rewritten to use the Snapper plugin directory.
  • Added support for a boot partition when using GRUB.

info-md

  • Can be used on systems with MBR and GPT partition tables, with or without a separate /boot partition.
  • Since version 0.3.0, siduction-btrfs uses the Snapper plugin directory.

When using the GRUB boot manager
After a rollback, the file /boot/grub/grub.cfg is recreated in the rollback target using chroot and then GRUB is reinstalled from the rollback target. This allows the user to boot directly into the rollback target with a simple reboot. All other subvolumes, including the previously used one, are accessible via the siduction snapshots submenu.
If the file /boot/grub/grub.cfg is updated during software installation or upgrade, the grub-menu-title script will add the flavor and subvolume to the menu line of the default boot entry.

When using the systemd-boot boot manager
After a rollback, the rollback-sd-boot script creates the boot entries. It takes into account all the kernels in the new snapshot. The default boot entry is set to the default subvolume.
If a subvolume is deleted for which boot entries existed, those entries will be removed.

Snapper Snapshot Description
After an APT action, the snapshot-description script changes the description displayed by Snapper (apt) to a more meaningful text.

Non-free and Contrib:

The following non-free and contrib packages are installed by default:

Nonfree:

  • amd64-microcode – Processor microcode firmware for AMD CPUs
  • firmware-amd-graphics – Binary firmware for AMD/ATI graphics chips
  • firmware-atheros – Binary firmware for Atheros wireless cards
  • firmware-bnx2 – Binary firmware for Broadcom NetXtremeII
  • firmware-bnx2x – Binary firmware for Broadcom NetXtreme II 10Gb
  • firmware-brcm80211 – Binary firmware for Broadcom 802.11 wireless card
  • firmware-crystalhd – Crystal HD Video Decoder (firmware)
  • firmware-intelwimax – Binary firmware for Intel WiMAX Connection
  • firmware-iwlwifi – Binary firmware for Intel Wireless cards
  • firmware-libertas – Binary firmware for Marvell Libertas 8xxx wireless card
  • firmware-linux-nonfree – Binary firmware for various drivers in the Linux kernel
  • firmware-misc-nonfree – Binary firmware for various drivers in the Linux kernel
  • firmware-myricom – Binary firmware for Myri-10G Ethernet adapters
  • firmware-netxen – Binary firmware for QLogic Intelligent Ethernet (3000)
  • firmware-qlogic – Binary firmware for QLogic HBAs
  • firmware-realtek – Binary firmware for Realtek wired/wifi/BT adapters
  • firmware-ti-connectivity – Binary firmware for TI Connectivity wireless network
  • firmware-zd1211 – binary firmware for the zd1211rw wireless driver
  • firmware-sof-signed – Intel audio firmware
  • intel-microcode – Processor microcode firmware for Intel CPUs

Contrib:

  • b43-fwcutter – utility for extracting Broadcom 43xx firmware
  • firmware-b43-installer – firmware installer for the b43 driver
  • firmware-b43legacy-installer – firmware installer for the b43legacy driver
  • iucode-tool – Intel processor microcode

Removing Non-Free Content

Currently, the installer does not offer an option to deselect packages that do not comply with the DFSG, the Debian Free Software Guidelines. This means that non-free packages, such as proprietary firmware, are installed by default on the system. The command vrms will list these packages for you. You can manually uninstall unwanted packages or remove them all by entering apt purge $(vrms -s) before or after installation. Otherwise, our script remove-nonfree can do this for you later.

Installation Notes and Known Issues

If you want to reuse an existing home partition (or another data partition), do so after the installation, not in the Calamares installer.
On some Intel graphics processors on certain devices, the system may freeze shortly after booting into Live. To fix this, you need to set the kernel parameter intel_iommu=igfx_off before rebooting.
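To keep that parameter across reboots on an installed system, it can be added to the default kernel command line; a hypothetical sketch for a GRUB-based setup (after editing, run `update-grub` to regenerate the boot configuration):

```shell
# /etc/default/grub -- append intel_iommu=igfx_off to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=igfx_off"
```

For the live session itself, the parameter can usually be added by editing the boot entry in the boot menu before starting.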

Credits for siduction 2024.1

Core Team:

  • Torsten Wohlfarth (towo)
  • Hendrik Lehmbruch (hendrikL)
  • Ferdinand Thommes (devil)
  • Vinzenz Vietzke (vinzv)
  • Axel Konrad (akli)

Former contributors:

  • Alf Gaida (agaida) (eaten by the cat)
  • Axel Beu 2021†
  • Markus Meyer (coruja)

Code, Ideas, Testing:

  • der_bud
  • se7en
  • davydych
  • tuxnix
  • lupinix
  • tobilinuxer

Our thanks go to everyone involved and our loyal users!

We would like to thank all testers and everyone who has supported us over the years. This release is your achievement too. We also want to thank the KDE community, which provides an excellent desktop environment.
And now, enjoy siduction and the holidays ahead!

On behalf of the siduction team:
Ferdinand Thommes

23 December, 2024 09:05PM by Ferdinand Thommes

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What to know when procuring Linux laptops

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/11c9/procurement-blog-no-button.png" width="720" /> </noscript>

Technology procurement directly influences business success. The equipment you procure will determine how your teams deliver projects and contribute to your success. So what does being “well-equipped” look like in the world of Linux laptops? 

In this blog, we’ll lay down the best practices for procurement professionals who have been tasked with procuring Linux laptops. We’ll cover how you can ensure you get the most out of your hardware, meet your compliance goals and ensure long-term success.

Defining Linux laptops 

You’ve received your requirements, and your mission is to stay faithful to them. Whether you’re procuring Linux laptops for specialized use cases (like AI and graphics), or general desktop use, it’s important to define the term “Linux laptop”. 

Given that by design, Linux is hardware agnostic, you could describe nearly every laptop as a “Linux laptop.” All you have to do is install Linux. However, given the diversity of Linux distributions, and the different support models available from both software and hardware vendors, the task goes beyond just hardware. Procuring Linux laptops requires taking the whole picture into account – starting with balancing hardware and software.

Balancing software and hardware

Hardware and software are interdependent: you need to find the right combination to reach your security, stability and performance goals. You’ll likely find that the more specialized the use case, the more of a role hardware will play in your overall decision. That’s because specialized hardware is less abundant. Whilst Linux broadens your horizons, your choice of distribution and support model will likely need to accommodate stricter performance requirements than you would find with a general desktop. 

By choosing a Linux distribution that is proven to perform at a high-level across a range of different laptops, you can ensure that you retain a large degree of choice at the hardware level. That’s where certification comes in.

The value of certification 

Regardless of which Linux distribution you choose to use, you need to know that it works on the hardware in question, and can support your specific needs. This is where certification programs come in. Certification programs are when a publisher tests and optimizes their OS, in a laboratory setting, to ensure it can run smoothly on the hardware. This is especially important if your Linux laptops are for specialized use cases where there is no tolerance available for malfunctions. 

Consistent experience

It’s important to check how thorough an organization’s certification program is, and that they’re transparent about how they decide to award (or not) certification. For example, Ubuntu is certified for over 1,000 laptops, from consumer and corporate to prosumer and workstation devices. Canonical documents Ubuntu laptop compatibility and the thorough testing that each device receives in coverage guides, with certification being withheld in the case that a device does not meet the required standard. This ensures a consistent experience for all users. 

Continuous performance

Certification is not just about creating a consistent experience across different devices, but ensuring they continue to perform as required, through updates and patching. Taking Canonical as an example, all Ubuntu certified laptops receive support, through patching and maintenance, until the specific Ubuntu release reaches end of life. In addition, through direct partnerships with Dell, Lenovo and HP, Canonical works proactively to meet device-specific needs. We work closely to fix any issues during the certification process, ensuring that each device performs as expected.

Demonstrating compliance

What makes a compliant Linux laptop? This is decided by the compliance requirements of your organization and the legislation that governs where you operate. It suffices to say that it’s a non-starter if your laptop fails your compliance tests. 

Certification is an important part of compliance. By using an OS that is supported on your specific laptop model, you reduce the risk of unexpected behaviors or processes that may give rise to vulnerabilities or exploits. Taking Ubuntu as an example, certification includes the testing of in-built security features like secure boot, to ensure they function as intended. This enables you to demonstrate that your chosen hardware-software combination is supported and secure. 

In addition, Ubuntu long term support releases include security and patching for 5 years, with the option of extending this to 12 years with an Ubuntu Pro subscription and Legacy support add-on. This demonstrates the importance of selecting both the right distro and the right support model – it can make the difference between your laptop’s end of life and continued high performance. 

Beyond certification

Certification is an important part of the procurement picture, but it’s important to also consider what the OS brings to the table outside of certification. Beyond keeping your laptops up and running, you’ll need an OS that helps you achieve your goals at scale, across a fleet of laptops. This section will focus on manageability, and use Ubuntu to illustrate the key points you should consider.

Support for modern enterprise applications

Your Linux laptops have the ultimate goal of performing to the standards your end users expect. Beyond your OS running smoothly with your hardware, at the application level you should be on the lookout for a mature ecosystem of applications that can run natively on your OS. 

Linux offers the flexibility to onboard new apps via APIs; however, you should aim for this to be the exception, rather than the rule. It’s simply not scalable for your administrators to spend time on onboarding and maintaining the core applications for a fleet of laptops with diverse needs. Additionally, non-native applications may not deliver the performance your end users expect.

By selecting an OS like Ubuntu, you gain access to an extensive ecosystem of over 36,000 toolchains and applications that span from productivity to coding, graphics and AI. Backed by both a community of users who are passionate about contributing to Ubuntu, and Canonical’s long-term security maintenance and support, end users gain access to an ecosystem that runs natively and is stable. 

Compliance hardening tools 

Auditing, hardening and maintaining Linux systems in order to conform to standards like CIS or DISA-STIG is a time consuming, but essential process. Choosing a distro that incorporates tools for compliance and hardening will reduce both time and errors in the process. A distro that commits to these tools is likely to be a reliable long-term choice.

Taking Ubuntu as an example, Canonical tests its long-term support release against standards such as FIPS-140, NIST, DISA-STIG and Common Criteria, and offers automated hardening tools for these standards and others, through an Ubuntu Pro subscription.

IT management and governance

Going beyond individual laptops, your laptop fleet as a whole needs to be manageable from a governance perspective. Manually managing large fleets of laptops is inefficient, but also dangerous. A report by Verizon estimates that sysadmins are responsible for around 11% of data breaches, usually due to misconfigurations. Even with the most secure hardware in the world, without the right approach your Linux laptops will be vulnerable.

Your chosen OS must provide you with both visibility, in order to audit the current state of your devices, and manageability, allowing you to manage access and roll out updates at scale without large amounts of manual effort.

For example, Ubuntu supports identity management protocols such as Entra ID (for Microsoft) and AuthD, the open standard supported by the vast majority of enterprise and consumer identity providers. Ubuntu can also be integrated with your chosen device management platform, or you can use Canonical Landscape.

Minimal attack surface

The best distros will build on the hardware security built into your Linux laptops through regular firmware patching, and ensure that the software layer is secure by adopting a zero trust approach. Overall, your distro should actively work to reduce the attack surface of your Linux laptop, rather than increase it.

Taking Ubuntu as an example, you would encounter a set of pre-configurations designed to reduce the attack surface of your laptops to the bare minimum, by ensuring that any access to your Linux laptops is granted on a “need to know” basis, rather than by default. This includes automatic security patching, password hashing, no open ports (important for physical security) and restrictions on unprivileged users.  

Long-term support: where compliance and performance meet

Ultimately, your Linux laptop needs to last the distance, which means remaining supported and secure. If either of these two criteria stops being true, then you have usually reached end of life. When does this occur? 

You should aim for a laptop that can realistically outlast your desired lifespan in order to give yourself some breathing room. 

This is where the value of long term support comes into play. You should investigate the support offered by both your hardware vendor and your software provider, in order to calculate an accurate estimate. Ubuntu LTS releases are maintained for 5 years as standard, with the option of expanded security maintenance taking the total to up to 12 years. This includes security patching and maintenance for over 36,000 packages, wherever Ubuntu LTS is running – including on any certified devices.

Find out more about where to find the best Ubuntu laptops by visiting our certification page. 

Further reading 

23 December, 2024 05:35PM

December 22, 2024

hackergotchi for Elive

Elive

Elive 3.8.46 released

The Elive Team is pleased to announce the release of 3.8.46. Our Christmas gift: after years of waiting for the new Enlightenment desktop E26 in Elive, we have finally started the development, and this is the first version that includes it! The desktop is not yet fully ready; it still lacks some integrations with the OS and correct designs, and there are some instabilities, but you can already start playing with it! :) However, this should not be a concern, as E16 is still included as the stable desktop.

Check more in the Elive Linux website.

22 December, 2024 04:13PM by Thanatermesis

December 21, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: Thug Life

My current playlist is this diorama of Lulu the Piggy channeling Tupac Shakur in a toy vending machine in the basement of New World Mall in Flushing Chinatown.

21 December, 2024 11:06PM

December 20, 2024

hackergotchi for Grml developers

Grml developers

Michael Prokop: Grml 2024.12 – codename Adventgrenze

Picture with metrics of three user profiles on GitHub.com, with many contributions especially in the last quarter of the year

We did it again™! Just in time, we’re excited to announce the release of Grml stable version 2024.12, code-named ‘Adventgrenze’! (If you’re not familiar with Grml, it’s a Debian-based live system tailored for system administrators.)

This new release is built on Debian trixie, and for the first time, we’re introducing support for 64-bit ARM CPUs (arm64 architecture)!

I’m incredibly proud of the hard work that went into this release. A significant amount of behind-the-scenes effort went into reworking our infrastructure and redesigning the build process. Special thanks to Chris and Darsha – our Grml developer days in November and December were a blast!

For a detailed overview of the changes between releases 2024.02 and 2024.12, check out our official release announcement. And, as always, after a release comes the next one – exciting improvements are already in the works!

BTW: recently we also celebrated 20(!) years of Grml releases. If you’re a Grml and/or grml-zsh user, please join us in celebrating and send us a postcard!

20 December, 2024 06:05PM

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: LXC/LXCFS/Incus 6.0.3 LTS release

Introduction

The Linux Containers project maintains Long Term Support (LTS) releases for its core projects. Those come with 5 years of support from upstream with the first two years including bugfixes, minor improvements and security fixes and the remaining 3 years getting only security fixes.

This is now the third round of bugfix releases for LXC, LXCFS and Incus 6.0 LTS.

LXC

LXC is the oldest Linux Containers project and the basis for almost every other one of our projects. This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still actively used directly or indirectly on millions of systems.

Announcement: https://discuss.linuxcontainers.org/t/lxc-6-0-3-lts-has-been-released/22402

Highlights of this point release:

  • Added support for PuzzleFS images in lxc-oci
  • SIGHUP is now propagated through lxc.init
  • Reworked testsuite including support for 64-bit Arm

LXCFS

LXCFS is a FUSE filesystem used to workaround some shortcomings of the Linux kernel when it comes to reporting available system resources to processes running in containers. The project started in late 2014 and is still actively used by Incus today as well as by some Docker and Kubernetes users.

Announcement: https://discuss.linuxcontainers.org/t/lxcfs-6-0-3-lts-has-been-released/22401

Highlights of this point release:

  • Better detection of swap accounting support
  • Reworked testsuite including support for 64-bit Arm

Incus

Incus is our most actively developed project. This virtualization platform is just over a year old but has already seen over 3500 commits by over 120 individual contributors. Its first LTS release made it usable in production environments and significantly boosted its user base.

Announcement: https://discuss.linuxcontainers.org/t/incus-6-0-3-lts-has-been-released/22403

Highlights of this point release:

  • OS info for virtual machines (incus info)
  • Console history for virtual machines (incus console --show-log)
  • Ability to create clustered LVM pools directly through Incus
  • QCOW2 and VMDK support in incus-migrate
  • Configurable macvlan mode (bridge, vepa, passthru or private)
  • Load-balancer health information (incus network load-balancer info)
  • External interfaces in OVN networks (support for bridge.external_interfaces)
  • Parallel cluster evacuation/restore (on systems with large number of CPUs)
  • Introduction of incus webui as a quick way to access the web interface
  • Automatic cluster re-balancing
  • Partial instance/volume refresh (incus copy --refresh-exclude-older --refresh)
  • Configurable columns, formatting and refresh time in incus top
  • Support for DHCP ranges in OVN (ipv4.dhcp.ranges)
  • Support for changing the backing interface of a managed physical network
  • Extended QEMU scriptlet (additional functions)
  • New log file for QEMU QMP traffic (qemu.qmp.log)
  • New get_instances_count function available in placement scriptlet
  • Support for --format in incus admin sql
  • Storage live migration for virtual machines
  • New authorization scriptlet as an alternative to OpenFGA
  • API to retrieve console screenshots
  • Configurable initial owner for custom storage volumes (initial.uid, initial.gid, initial.mode)
  • Image alias reuse on import (incus image import --reuse --alias)
  • New incus-simplestreams prune command
  • Console access locking (incus console --force to override)

What’s next?

We’re expecting another LTS bugfix release for the 6.0 branches in the first quarter of 2025.
We’re also actively working on a new stable release (non-LTS) for LXCFS.
Incus will keep going with its usual monthly feature release cadence.

Thanks

This LTS release update was made possible thanks to funding provided by the Sovereign Tech Fund (now part of the Sovereign Tech Agency).

The Sovereign Tech Fund supports the development, improvement, and maintenance of open digital infrastructure. Its goal is to sustainably strengthen the open source ecosystem, focusing on security, resilience, technological diversity, and the people behind the code.

Find out more at: https://www.sovereign.tech

20 December, 2024 05:39PM

hackergotchi for VyOS

VyOS

VyOS 1.4.1 release

Hello, Community!

VyOS 1.4.1 release is now available to customers and community members with contributor subscriptions. Its source code is available as a tarball upon request to everyone who legitimately received a binary image from us. Fixes for CVE-2023-32728 (Zabbix agent SMART plugin RCE) and CVE-2024-6387 (regreSSHion) that were already available as hotfixes are integrated in the image, and there is a fix for a potential DoS in the HTTP API caused by a vulnerability in the python-multipart library (CVE-2024-53981). This release also includes multiple bug fixes and a few improvements, including support for Base64-encoded IPsec secrets, VXLAN VNI to VLAN range mappings, reject routes, and more — read on for details!

20 December, 2024 04:55PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2024.12 available

We are proud to announce our new stable release version 2024.12, code-named ‘Adventgrenze’!

This Grml release brings you fresh software packages from Debian trixie, enhanced hardware support and addresses known bugs from previous releases.

With version 2024.12 Grml, for the first time ever, supports 64-bit ARM CPUs (Architecture arm64)!

This milestone was made possible thanks to the financial support from netcup.

As previously announced, releases for 32-bit x86 PCs have been discontinued. With the grml32 flavor removed, we’ve introduced UEFI 32-bit boot support in our Grml amd64 flavor to ensure 64-bit PCs with 32-bit firmware can use this release.

20 December, 2024 10:42AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Building RAG with enterprise open source AI infrastructure

One of the most critical gaps in traditional Large Language Models (LLMs) is that they rely on static knowledge already contained within them. Basically, they might be very good at understanding and responding to prompts, but they often fall short in providing current or highly specific information. This is where Retrieval-Augmented Generation (RAG) comes in: RAG addresses these critical gaps in traditional LLMs by incorporating current and new information that serves as a reliable source of truth for these models.

In our previous blog in this series on understanding and deploying RAG, we walked you through the basics of what this technique is and how it enhances generative AI models by utilizing external knowledge sources such as documents and extensive databases. These external knowledge bases enhance machine learning models for enterprise applications by providing verifiable, up-to-date information that reduces errors, simplifies implementation, and lowers the cost of continuous retraining.

In this second blog of our four-part series on RAG, we will focus on creating a robust enterprise AI infrastructure for RAG systems using open source tooling for your Gen AI project.  This blog will discuss AI infrastructure considerations such as hardware, cloud services, and generative AI software. Additionally, it will highlight a few open source tools designed to accelerate the development of generative AI.

RAG AI infrastructure considerations

AI infrastructure encompasses the integrated hardware and software systems created to support AI and machine learning workloads to carry out complex analysis, predictions, and automation. The main challenge when introducing AI in any project is operating the underlying infrastructure stack that supports the models and applications. While similar to regular cloud infrastructures, machine learning tools require a tailored approach to operations to remain reliable and scalable, and the expertise needed for this approach is both difficult to find and expensive to hire. Neglecting proper operations can lead to significant issues for your company, models, and processes, which can seriously damage your image and reputation.

Building a generative AI project, such as a RAG system, requires multiple components and services. Additionally, it’s important to consider the cloud environment for deployment, as well as the choice of operating system and hardware. These factors are crucial for ensuring that the system is scalable, efficient, secure, and cost-effective. The illustration below maps out a full-stack infrastructure delivery of RAG and Gen AI systems:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-rt.googleusercontent.com/docsz/AD_4nXcd0nIhhKndN2IO3F8ARHQEBP77EpJMps4KiEVWgdwrbDcbGJuFNJxtcMOX8QMcFcJKJCdZvuGlQtkQMzw6lSY4M89Nd20_PGBAlukXMVJiB3Aicj9L4nIYDJyeoflF7qkTQIPjIg?key=bWiOXP50Qv6mRzi9JmPlbdNX" width="720" /> </noscript>

Let’s briefly examine each of these considerations and explore their pros and cons.

Hardware

The hardware on which your AI will be deployed is critical. Choosing the right compute options—whether CPUs or GPUs—depends on the specific demands and use cases of your AI workloads. Considerations such as throughput, latency, and the complexity of applications are important; for instance, if your AI requires massive parallel computation and scalable inference, GPUs may be necessary. Additionally, your chosen storage hardware is important, particularly regarding the read and write speeds needed for accessing data. Lastly, the network infrastructure should also be carefully considered, especially in terms of workload bandwidth and latency. For example, a low-latency, high-bandwidth network setup is essential for applications like chatbots or search engines.

Clouds

Cloud infrastructure provides the computing power, storage, and scalability needed to meet the demands of AI workloads. There are multiple types of cloud environments – including private, public, and bare-metal deployments – and each one has its pros and cons. For example, bare-metal infrastructure offers high computing performance and complete control over security. However, managing and scaling a bare-metal setup can be challenging. In comparison, public cloud deployments are currently very popular due to their accessibility, but these infrastructures are owned and managed by public cloud providers. Finally, private cloud environments provide enhanced control over data security and privacy compared to public clouds.

The good news is that you can blend these different cloud environments into a hybrid cloud setup relatively easily, combining the strengths of each while compensating for the weaknesses a single-environment setup may present.

Operating system 

The operating system (OS) plays a crucial role in managing AI workloads, serving as the foundational layer for overseeing hardware and software resources. There are several OS options suitable for running AI workloads, including Linux and enterprise systems like Windows.

Linux is the most widely used OS for AI applications due to its flexibility, stability, and extensive support for machine learning frameworks such as TensorFlow, PyTorch, and Hugging Face. Common distributions used for AI workloads include Ubuntu, Debian, Fedora, CentOS, and many more. Additionally, Linux environments provide excellent support for containerized setups such as Docker containers and CNCF-compliant platforms like Kubernetes.

Gen AI services

Generative AI projects, such as RAG, may involve multiple components, including a knowledge base, large language models, retrieval systems, generators, inferences, and more. Each of these services will be defined and discussed in greater detail in the upcoming section titled “Advanced RAG and Gen AI Reference Solutions with Open Source.”

While RAG services may offer different functionalities, it is essential to choose the components that best fit your specific use case. For example, in small-scale RAG deployments you might set aside fine-tuning and model repositories, as these are advanced components of Gen AI reference solutions. Additionally, it is crucial that all these components integrate smoothly and coherently to create a seamless workflow. This helps reduce latency and accommodates the required throughput for your project.

RAG reference solution

When a query is made in an AI chatbot, the RAG-based system first retrieves relevant information from a large dataset or knowledge base, and then uses this information to inform and guide the generation of the response. The RAG-based system consists of two key components. The first component is the Retriever, which is responsible for locating relevant pieces of information that can help answer a user query. It searches a database to select the most pertinent information. This information is then provided to the second component, the Generator. The Generator is a large language model that produces the final output.
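The Retriever’s job of locating the most relevant information can be illustrated with a toy sketch. Everything here is illustrative: the `embed` function below is a naive bag-of-words stand-in, whereas a real Retriever would use a learned embedding model and a vector database rather than brute-force cosine similarity over raw word counts.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": word counts over the text's own vocabulary.
    # A real system would call a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Ubuntu is a Linux distribution published by Canonical.",
    "RAG retrieves external knowledge to ground LLM answers.",
    "Kubernetes schedules containers across a cluster.",
]
print(retrieve("how does RAG retrieve knowledge", docs, k=1)[0])
```

The key design point survives the simplification: retrieval is a similarity search over vector representations, and only the top-ranked passages are handed to the Generator.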

Before using your RAG-based system, you must first create your knowledge base, which consists of external data that is not included in your LLM’s training data. This external data can originate from various sources, including documents, databases, and API calls. Most RAG systems use an AI technique called embedding, which converts data into numerical representations and stores them in a vector database. With an embedding model, you can build a knowledge base that is easy to search and retrieve from in an AI context. Once you have a knowledge base and a vector database set up, you can perform your RAG process; here is a conceptual flow:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-rt.googleusercontent.com/docsz/AD_4nXcJjaB4NVMu351t4KcBKmXgOFHBkkkfdK-m-RGVjhpxEqoieVIDwEkndFijDcorQ2HDTb1Z3HXAMYGao-C_Q9yENvF0_yTk83XyzHR6fQiKLxEY59OqYPeU5TW5m5fmiHaTMIQJ?key=bWiOXP50Qv6mRzi9JmPlbdNX" width="720" /> </noscript>

This conceptual flow follows 5 general steps:

  1. The user enters a prompt or query. 
  2. Your Retriever searches for relevant information from a knowledge base. The relevance can be determined using mathematical vector calculations and representations through a vector search and database functionality.
  3. The relevant information is retrieved to provide enhanced context to the generator. 
  4. The query and prompts are now enriched with this context and are ready to be augmented for use with a large language model using prompt engineering techniques. The augmented prompt enables the language model to respond accurately to your query. 
  5. Finally, the generated text response is delivered to you.
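The five steps above can be sketched end to end. This is a minimal, hypothetical pipeline: the knowledge base, the keyword-based `retrieve` function, and the stubbed `generate` function are all stand-ins for the vector search and real LLM inference call a production system would use.

```python
# A minimal sketch of the five-step RAG flow described above.
# The retriever and generator are placeholders: a real deployment
# would use a vector database for step 2 and an LLM call for step 5.

KNOWLEDGE_BASE = {
    "incus": "Incus 6.0.3 LTS adds OS info and console history for VMs.",
    "grml": "Grml 2024.12 is the first release to support 64-bit ARM CPUs.",
}

def retrieve(query: str) -> str:
    # Step 2: naive keyword lookup in place of a vector similarity search.
    for key, fact in KNOWLEDGE_BASE.items():
        if key in query.lower():
            return fact
    return ""

def augment(query: str, context: str) -> str:
    # Steps 3-4: enrich the prompt with the retrieved context.
    return f"Context: {context}\nQuestion: {query}\nAnswer using the context."

def generate(prompt: str) -> str:
    # Step 5: placeholder for a real LLM inference call.
    return f"[LLM response to: {prompt!r}]"

def rag_answer(query: str) -> str:
    # Step 1 (user query) in, step 5 (generated response) out.
    return generate(augment(query, retrieve(query)))

print(rag_answer("What is new in Incus?"))
```

Note that the LLM itself is untouched; all the freshness comes from what the retriever injects into the prompt, which is exactly why RAG avoids continuous retraining.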

Advanced RAG and Gen AI reference solution with open source

RAG can be used in various applications, such as AI chatbots, semantic search, data summarization, and even code generation. The reference solution below outlines how RAG can be combined with advanced generative AI reference architectures to create optimized LLM projects that provide contextual solutions to various Gen AI use cases.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-rt.googleusercontent.com/docsz/AD_4nXeUfVjUIYyCfdDI4MUw2coGG4L69S8j116gQ5Q5KjrLLRpUN2uw8R4r1JxOSC2OuFA89v6wfop5eoFPSzJT1xzyMLt87WapnhtauyPPfPgLkb9cIGe2F6GtgWd_cD-6biR1brWltw?key=bWiOXP50Qv6mRzi9JmPlbdNX" width="720" /> </noscript>

Figure: RAG-enhanced GenAI reference solution (source: https://opea.dev/)

The GenAI blueprint above was published by OPEA (Open Platform for Enterprise AI), a project of the Linux Foundation. The aim of this blueprint is to establish a framework of composable building blocks for state-of-the-art generative AI systems, including LLMs, data storage, and prompt engines. Additionally, it provides a blueprint for RAG and outlines end-to-end workflows. The recent 1.1 release of the OPEA project showcased multiple GenAI projects that demonstrate how RAG systems can be enhanced through open source tools.

Each service within the blocks has distinct tasks to perform, and there are various open source solutions available that can help to accelerate these services based on enterprise needs. These are mapped in the table below:

  • Ingest/data processing: The data pipeline layer, responsible for data extraction, cleansing, and the removal of unnecessary data. Open source solutions: Kubeflow, OpenSearch.
  • Embedding model: A machine learning model that converts raw data into vector representations. Open source solutions: Hugging Face sentence transformers (also used by OpenSearch).
  • Retrieval and ranking: Retrieves data from the knowledge base and ranks the fetched information by relevance scores. Open source solutions: FAISS (Facebook AI Similarity Search, as used in OpenSearch), Haystack.
  • Vector database: Stores vector embeddings so data can be searched efficiently by the retrieval and ranking services. Open source solutions: Milvus, PostgreSQL pgvector, OpenSearch (k-NN index).
  • Prompt processing: Formats queries and retrieved text into a structured prompt for the LLM. Open source solutions: LangChain, OpenSearch (ML agent predict).
  • LLM: Produces the final response using one or more GenAI models. Solutions: GPT, BART, and many more.
  • LLM inference: Operationalizes machine learning in production by running data through a model to produce output. Open source solutions: KServe, vLLM.
  • Guardrail: Ensures ethical content in GenAI responses by filtering inputs and outputs. Open source solutions: Fairness Indicators, OpenSearch (guardrail validation model).
  • LLM fine-tuning: Takes a pre-trained machine learning model and further trains it on a smaller, targeted dataset. Open source solutions: Kubeflow, LoRA.
  • Model repository: Stores and versions trained machine learning (ML) models, especially during fine-tuning, tracking the model lifecycle from deployment to retirement. Open source solutions: Kubeflow, MLflow.
  • Framework for building LLM applications: Simplifies LLM workflows, prompts, and services so that building LLM applications is easier. Open source solution: LangChain.
This table provides an overview of the key components involved in building a RAG system and advanced Gen AI reference solution, along with associated open source solutions for each service. Each service performs a specific task that can enhance your LLM setup, whether it relates to data management and preparation, embedding a model in your database, or improving the LLM itself.
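To make one of these services concrete, a guardrail component can be thought of as a filter applied to inputs and outputs. The sketch below is deliberately naive: the blocklist terms are invented for illustration, and real guardrails such as those listed above use trained classifiers rather than keyword matching.

```python
# Illustrative guardrail: redact responses that match a policy blocklist.
# The terms below are hypothetical; production guardrails use trained
# classifiers, not keyword lists.
BLOCKLIST = {"password", "ssn"}

def guardrail(text: str) -> str:
    # Withhold any output containing a blocked term before it
    # reaches the user.
    lowered = text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "[response withheld by guardrail]"
    return text

print(guardrail("Here is the weather forecast."))
print(guardrail("The admin password is hunter2."))
```

In a full pipeline this filter would wrap both the user’s prompt (before retrieval) and the generated response (before delivery), since either side can violate policy.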

The rate of innovation in this field, particularly within the open source community, has become exponential. It is crucial to stay updated with the latest developments, including new models and emerging RAG solutions.

Conclusion 

Building a robust generative AI infrastructure, such as those for RAG, can be complex and challenging. It requires careful consideration of the technology stack, data, scalability, ethics, and security. For the technology stack, the hardware, operating systems, cloud services, and generative AI services must be resilient and efficient based on the scale that enterprises require.

There are multiple open source software options available for building generative AI infrastructure and applications, which can be tailored to meet the complex demands of modern AI projects. By leveraging open source tools and frameworks, organizations can accelerate development, avoid vendor lock-in, reduce costs, and meet enterprise needs.

Now that you’ve been introduced to Blog Series #1 – What is RAG? and this Blog Series #2 on how to prepare a robust RAG AI infrastructure, it’s time to get hands-on and try building your own RAG using open source tools in our next blog in this series, “Build a one-stop solution for end-to-end RAG workflow with open source tools”. Stay tuned for part 3, to be published soon!

Canonical for your RAG and AI Infra needs

Build the right RAG architecture and application with Canonical RAG and MLOps workshop

Canonical provides workshops and enterprise open source tools and services and can advise on securing the safety of your code, data, and models in production.

Canonical offers a 5-day workshop designed to help you start building your enterprise RAG systems. By the end of the workshop, you will have a thorough understanding of RAG and LLM theory, architecture, and best practices. Together, we will develop and deploy solutions tailored to your specific needs. Download the datasheet here.

Explore more and contact our team for your RAG needs.

Learn and use best-in-class Gen AI tooling  on any hardware and cloud

Canonical offers enterprise-ready AI infrastructure along with open source data and AI tools to help you kickstart your RAG projects. Canonical is the publisher of Ubuntu, a Linux operating system that runs on public cloud platforms, data centres, workstations, and edge/IoT devices. Canonical has established partnerships with major public cloud providers such as Azure, Google Cloud, and AWS. Additionally, Canonical collaborates with silicon vendors, including Intel, AMD, NVIDIA, and RISC-V, ensuring its platform is silicon-agnostic.

Secure your stack with confidence

Enhance the security of your GenAI projects while mastering best practices for managing your software stack. Discover ways to safeguard your code, data, and machine learning models in production with Confidential AI.

20 December, 2024 08:18AM