Ubuntu MATE 24.04 is more of what you like: a stable MATE Desktop on top of current Ubuntu.
This release rolls up some fixes and more closely aligns with Ubuntu. Read on to learn more.
Ubuntu MATE 24.04 LTS
Thank you!
I'd like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release.
I'd like to acknowledge the close collaboration with all the Ubuntu flavour teams and the Ubuntu Foundations and Desktop Teams.
The assistance and support provided by Erich Eickmeyer (Ubuntu Studio), Simon Quigley (Lubuntu) and David Muhammed (Ubuntu Budgie) have been invaluable.
Thank you!
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
We are pleased to announce the release of the next version of our distro, 24.04 Long Term Support. The LTS version is supported for 3 years, while the regular releases are supported for 9 months. The new release rolls up various fixes and optimizations that the Ubuntu Budgie team has released since the 22.04 release in April 2022. We also inherit hundreds of stability…
Thanks to the hard work of our contributors, Lubuntu 24.04 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu and the 12th release of Lubuntu with LXQt as the default desktop environment. Download and Support Lifespan With Lubuntu 24.04 being a long-term support release, it will follow […]
Recently, there have been a lot of questions about LTS release-building procedures. We are making changes in that area, not least due to specific patterns in user behavior, and now is a good time to discuss that.
We're happy to announce the release of Proxmox Backup Server 3.2. It's based on Debian 12.5 "Bookworm", but uses the newer Linux kernel 6.8, and includes ZFS 2.2.3.
Here are the highlights:
Debian Bookworm 12.5, with a newer Linux kernel 6.8
ZFS 2.2.3
Flexible notification system
Automated installation
Exclude backup groups from jobs
Overview of prune and GC jobs
We have included countless bugfixes and improvements for general client and backend usability; see...
With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.
In this write-up I'd like to run you through a list of commands for experiencing the Netplan-enabled installation process first-hand. For now, we'll be using a custom ISO image while waiting for the above-mentioned merge-proposal to land. Furthermore, as the Debian archive is going through major transitions, builds of the "unstable" branch of d-i don't currently work. So I implemented a small backport, producing updated netcfg and netcfg-static for Bookworm, which can be used as localudebs/ during the d-i build.
Let's start with preparing a working directory and installing the software dependencies for our virtualized Debian system:
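The exact commands are not reproduced in this excerpt; as a sketch, the preparation might look like this on a Debian/Ubuntu host (the directory name and package set are illustrative):

```shell
# Illustrative only: create a scratch directory and install QEMU plus the
# UEFI (OVMF) firmware. Package names are for Debian/Ubuntu hosts.
mkdir -p "$HOME/netplan-di-demo"
cd "$HOME/netplan-di-demo"
# Needs root; '|| true' keeps the sketch going if the packages are
# already installed or sudo is unavailable.
sudo apt install -y qemu-system-x86 ovmf || true
```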
Next we'll prepare a VM by copying the EFI firmware files, preparing a persistent EFIVARS file to boot from FS0:\EFI\debian\grubx64.efi, and creating a virtual disk for our machine:
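For example (a sketch, not the post's exact commands; the firmware paths are where the Debian/Ubuntu ovmf package installs OVMF, and older releases ship OVMF_CODE.fd instead of the 4M variants; the disk size is arbitrary):

```shell
# Copy the read-only firmware code and seed a writable EFI variable store;
# EFIVARS.fd is what later remembers the FS0:\EFI\debian\grubx64.efi
# boot entry written by the installer.
cp /usr/share/OVMF/OVMF_CODE_4M.fd .
cp /usr/share/OVMF/OVMF_VARS_4M.fd EFIVARS.fd
# Create a sparse virtual disk as the installation target:
qemu-img create -f qcow2 disk.qcow2 16G
```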
Finally, let's launch the installer using a custom preseed.cfg file that will automatically install Netplan for us in the target system. A minimal preseed file could look like this:
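The original preseed file is not reproduced in this excerpt; a hypothetical minimal variant, using standard d-i preseeding options plus pkgsel/include to pull in Netplan, might look like:

```
# Hypothetical minimal preseed.cfg - option names are standard d-i
# preseeding; adjust locale, mirror and passwords to taste.
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/root-password password insecure
d-i passwd/root-password-again password insecure
d-i passwd/make-user boolean false
# The important bit: install netplan.io into the target system so that
# finish-install.d/55netcfg-copy-config writes /etc/netplan/ config.
d-i pkgsel/include string netplan.io
```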
For this demo, we're installing the full netplan.io package (incl. the Python CLI), as the netplan-generator package was not yet split out as an independent binary in the Bookworm cycle. You can choose the preseed file from a set of different variants to test the different configurations:
We're using the custom Linux kernel and initrd.gz here to be able to pass the PRESEED_URL as a parameter to the kernel's cmdline directly. Launching this VM should bring up the normal debian-installer in its netboot/gtk form:
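Concretely, an invocation along the lines described above might look like this (disk and firmware file names follow the earlier steps; the machine type, memory size and PRESEED_URL value are illustrative):

```shell
# Boot the d-i netboot kernel/initrd directly so the preseed URL can be
# passed on the kernel command line. 10.0.2.2 is QEMU's built-in alias
# for the host in user-mode networking.
PRESEED_URL="http://10.0.2.2:8000/preseed.cfg"
qemu-system-x86_64 -M q35 -m 2G -enable-kvm \
  -drive if=pflash,format=raw,unit=0,readonly=on,file=OVMF_CODE_4M.fd \
  -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd \
  -drive file=disk.qcow2,format=qcow2 \
  -kernel ./linux -initrd ./initrd.gz \
  -append "preseed/url=${PRESEED_URL}"
```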
Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.
After you have confirmed your partitioning changes, the base system gets installed. I suggest not selecting any additional components, like desktop environments, to speed up the process.
During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.
Done! After the installation has finished, you can reboot into your fresh Debian Bookworm system.
To do that, quit the current QEMU process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was written by GRUB during the installation, so QEMU can find the new system. Then reboot into the new system, no longer using the mini.iso image:
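Assuming the file names from the earlier steps, the relaunch could look like:

```shell
# Same firmware and disk as before, but no -kernel/-initrd/-append:
# the updated EFIVARS.fd lets OVMF find FS0:\EFI\debian\grubx64.efi.
qemu-system-x86_64 -M q35 -m 2G -enable-kvm \
  -drive if=pflash,format=raw,unit=0,readonly=on,file=OVMF_CODE_4M.fd \
  -drive if=pflash,format=raw,unit=1,file=EFIVARS.fd \
  -drive file=disk.qcow2,format=qcow2
```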
Finally, you can play around with your Netplan-enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.
In our case we also installed the Netplan CLI, so we can play around with some of its features, such as netplan status:
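Inside the new system, the netplan.io package provides the CLI, so for example (subcommands as provided by the netplan version in Bookworm):

```shell
netplan status            # summarise the live state of each interface
netplan get               # print the merged YAML from /etc/netplan/
cat /etc/netplan/*.yaml   # the configuration written by the installer
```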
Thank you for following along with the Netplan-enabled Debian installation process, and happy hacking! If you want to learn more, join the discussion at Salsa:installer-team/netcfg and find us at GitHub:netplan.
Ubuntu MATE 23.10 is more of what you like: a stable MATE Desktop on top of current Ubuntu.
This release rolls up a number of bug fixes and updates that continue to build on recent releases, where the focus has been on improving stability.
Ubuntu MATE 23.10
Thank you!
I'd like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release: from reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, and offering community support, to actively testing, providing QA feedback, writing documentation, and creating this fabulous website. Thank you!
caja-rename 23.10.1-1 has been ported from Python to C.
libmatemixer 1.26.0-2+deb12u1 resolves heap corruption and application crashes when removing USB audio devices.
mate-desktop 1.26.2-1 improves portals support.
mate-notification-daemon 1.26.1-1 fixes several memory leaks.
mate-system-monitor 1.26.0-5 now picks up libexec files from /usr/libexec.
mate-session-manager 1.26.1-2 sets LIBEXECDIR to /usr/libexec/ for correct interaction with mate-system-monitor.
mate-user-guide 1.26.2-1 is a new upstream release.
mate-utils 1.26.1-1 fixes several memory leaks.
Yet more AI-generated wallpaper
My friend Simon Butcher is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created a stunning AI-generated wallpaper for Ubuntu MATE using bleeding-edge diffusion models. The sample below is 1920x1080, but the version included in Ubuntu MATE 23.10 is 3840x2160.
Here's what Simon has to say about the process of creating this new wallpaper for Mantic Minotaur:
Since Minotaurs are imaginary creatures, interpretations tend to vary widely. I wanted to produce an image of a powerful creature in a graphic-novel style, although not gruesome like many depictions. The latest open-source Stable Diffusion XL base model was trained at a higher resolution, and the difference in quality has been noticeable, particularly in better overall consistency and detail, while reducing anatomical irregularities in images. The image was produced locally using Linux and an NVIDIA A100 80GB GPU, starting from an initial text prompt and refined using img2img, inpainting and upscaling features.
Major Applications
Accompanying MATE Desktop 1.26.2 and Linux 6.5 are Firefox 118, Celluloid 0.25, Evolution 3.50, and LibreOffice 7.6.1.
See the Ubuntu 23.10 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 23.10
This new release will be first available for PC/Mac users.
You can upgrade to Ubuntu MATE 23.10 from Ubuntu MATE 23.04. Ensure that you
have all updates installed for your current version of Ubuntu MATE before you
upgrade.
Open "Software & Updates" from the Control Center.
Select the third tab, "Updates".
Set the "Notify me of a new Ubuntu version" drop-down menu to "For any new version".
Press Alt+F2 and type update-manager -c -d into the command box.
Update Manager should open and tell you: New distribution release '23.10' is available.
If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
Click "Upgrade" and follow the on-screen instructions.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
Feedback
Is there anything you can help with or want to be involved in? Maybe you just
want to discuss your experiences or ask the maintainers some questions. Please
come and talk to us.
Ubuntu MATE 23.04 is the least exciting Ubuntu MATE release ever. The good news is, if you liked Ubuntu MATE 22.10, then it is more of the same, just with better artwork! I entered this development cycle full of energy and enthusiasm off the back of the Ubuntu Summit in Prague, but then I was seriously ill and had a long stay in hospital. I'm recovering well and should be 100% in a couple of months. This setback, and also changing jobs a couple of months ago, has meant that I've not been able to invest the usual time and effort into Ubuntu MATE. I'm happy to say that I've been able to deliver another solid release with the help of the Ubuntu community.
Thank you!
I'd like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release: from reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, and offering community support, to actively testing, providing QA feedback, writing documentation, and creating this fabulous website. Thank you!
My friend Simon Butcher is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created some stunning AI-generated wallpapers for Ubuntu MATE using bleeding-edge diffusion models. The samples below are 1920x1080, but the versions included in Ubuntu MATE 23.04 are 3840x2160.
Here's what Simon has to say about the process of creating these new wallpapers for Lunar Lobster:
My usual workflow involves checking Reddit, etc. for the latest techniques, and then installing the latest open-source tools and checkpoints for unlimited experimentation (e.g. Stable Diffusion), plus some selective use of DALL-E and Midjourney, while trying not to exhaust my credits. I then experiment with a lot of different prompts (including negative prompts to discourage certain features), settings, styles and ideas from each tool to see what sort of images I can get, then tweak and evolve my approach based on the results.
Lobsters are fascinating creatures, but in real life I find them a bit ugly, with all those antennae and legs akimbo. For the theme of "Lunar Lobster", rather than precise anatomy, I explored ideas of stylised alien robotic space lobsters, lunar landers and other lobster-themed spacecraft. After producing a shortlist of varied images, I then perform any necessary AI processing such as inpainting, outpainting (generating new parts of an image beyond the existing canvas, particularly useful for getting the correct aspect ratio) and AI upscaling to make them suitable for use as wallpaper.
As a podcaster and streamer, I'm delighted to have had PipeWire installed by default since Ubuntu MATE 22.10. The Ubuntu MATE meta packages have been updated to correctly install the revised pipewire packages in Ubuntu. Special thanks to Erich Eickmeyer, from the Ubuntu Studio project, for his work on this.
Major Applications
Accompanying MATE Desktop 1.26.1 and Linux 6.2 are Firefox 111, Celluloid 0.20, Evolution 3.48, and LibreOffice 7.5.2.
See the Ubuntu 23.04 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 23.04
This new release will be first available for PC/Mac users.
You can upgrade to Ubuntu MATE 23.04 from Ubuntu MATE 22.10. Ensure that you
have all updates installed for your current version of Ubuntu MATE before you
upgrade.
Open "Software & Updates" from the Control Center.
Select the third tab, "Updates".
Set the "Notify me of a new Ubuntu version" drop-down menu to "For any new version".
Press Alt+F2 and type update-manager -c -d into the command box.
Update Manager should open and tell you: New distribution release '23.04' is available.
If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
Click "Upgrade" and follow the on-screen instructions.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
Feedback
Is there anything you can help with or want to be involved in? Maybe you just
want to discuss your experiences or ask the maintainers some questions. Please
come and talk to us.
We're thrilled to share our latest strides in enhancing partnerships, expanding support, and advancing innovations within the Armbian ecosystem. Here's a roundup of recent developments:
1. Strengthening Partnerships in Shenzhen: We've embarked on a mission to fortify our collaborations with partners in Shenzhen, aimed at fostering better support for our community. During our visit, we engaged with both existing and potential partners to deepen our ties and enhance the services we offer to you. Read more
2. Platinum Support and Giveaway for Bananapi M7: We're excited to announce the launch of platinum support and a special giveaway for the latest Bananapi M7, a collaborative effort between Bananapi and ARMSOM. This initiative aims to provide unparalleled assistance and rewards to our valued users. Learn more
3. Expansion of Community Build Framework: Our community interaction has led to the integration of several new boards, including SakuraPi and the H96 TV box, into our build framework. Additionally, we've upgraded u-boot on 32-bit Rockchip devices and successfully ported Khadas Edge 2 to kernel 6.1. Moreover, FriendlyElec NAS now runs on mainline-based kernels, enriching our ecosystem with more versatile options.
4. Ongoing Upgrades and Future Ventures: In the pipeline, we have several exciting upgrades underway. We're working on upgrading the Odroid XU kernel to version 6.6 and adding support for Orangepi 5 PRO. Furthermore, we're introducing mainline kernel support for Orangepi Zero 2W, and our team is eagerly diving into the development of the new Radxa Rock 5 ITX board and Rock 5C.
Stay tuned for more updates as we continue to elevate the Armbian experience!
We are excited to announce that our latest software version 8.2 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.5 "Bookworm" but uses a newer Linux kernel 6.8, QEMU 8.1, LXC 6.0, Ceph 18.2 and ZFS 2.2.
We have an import wizard to migrate VMware ESXi guests to Proxmox VE. The integrated VM importer is presented as a storage plugin for native integration into the API and web-based user interface. You can use this to import the VM as a whole...
We're excited about the upcoming Ubuntu 24.04 LTS release, Noble Numbat. Like all Ubuntu releases, Ubuntu 24.04 LTS comes with 5 years of free security maintenance for the main repository. Support can be expanded for an extra 5 years, and to include the universe repository, via Ubuntu Pro. Organisations looking to keep their systems secure without needing a major upgrade can also get the Legacy Support add-on to expand that support beyond the 10 years. Combined with the enhanced security coverage provided by Ubuntu Pro and Legacy Support, Ubuntu 24.04 LTS provides a secure foundation on which to develop and deploy your applications and services in an increasingly risky environment. In this blog post, we will look at some of the enhancements and security features included in Noble Numbat, building on those available in Ubuntu 22.04 LTS.
Unprivileged user namespace restrictions
Unprivileged user namespaces are a widely used feature of the Linux kernel, providing additional security isolation for applications, and are often employed as part of a sandbox environment. They allow an application to gain additional permissions within a constrained environment, so that a more trusted part of an application can use these additional permissions to create a more constrained sandbox within which less trusted parts can then be executed. A common use case is the sandboxing employed by modern web browsers, where the (trusted) application itself sets up the sandbox in which it executes the untrusted web content. However, by providing these additional permissions, unprivileged user namespaces also expose additional attack surfaces within the Linux kernel, and there has been a long history of (ab)use of unprivileged user namespaces to exploit various kernel vulnerabilities. The most recent interim release of Ubuntu, 23.10, introduced the ability to restrict the use of unprivileged user namespaces to only those applications which legitimately require such access. In Ubuntu 24.04 LTS, this feature has been improved both to cover additional applications, within Ubuntu and from third parties, and to provide better default semantics. For Ubuntu 24.04 LTS, the creation of unprivileged user namespaces is allowed for all applications, but access to any additional permissions within the namespace is denied. This allows more applications to handle the default restriction gracefully, whilst still protecting against the abuse of user namespaces to gain access to additional attack surfaces within the Linux kernel.
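As a practical sketch (the sysctl name is the AppArmor knob used by recent Ubuntu kernels; it is not named in the announcement itself, and the probe below is just an illustration):

```shell
# Query the AppArmor knob behind the restriction (1 = restricted):
sysctl kernel.apparmor_restrict_unprivileged_userns
# Probe whether this shell may create a user namespace with a root
# mapping; on a default Ubuntu 24.04 system this is expected to be
# denied for applications without a dedicated AppArmor profile:
unshare --user --map-root-user true && echo allowed || echo denied
```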
Binary hardening
Modern toolchains and compilers have gained many enhancements to be able to create binaries that include various defensive mechanisms. These include the ability to detect and avoid various possible buffer overflow conditions as well as the ability to take advantage of modern processor features like branch protection for additional defence against code reuse attacks.
The GNU C library, used as the cornerstone of many applications on Ubuntu, provides runtime detection of, and protection against, certain types of buffer overflow, as well as certain dangerous string handling operations, via the _FORTIFY_SOURCE macro. FORTIFY_SOURCE can be specified at increasing levels of security checking, ranging from 0 to 3. Modern Ubuntu releases have all used FORTIFY_SOURCE=2, which provided a solid foundation by including checks on string handling functions like sprintf(), strcpy() and others to detect possible buffer overflows, as well as format-string vulnerabilities via the %n format specifier in various cases. Ubuntu 24.04 LTS enables additional security features by increasing this to FORTIFY_SOURCE=3. Level three greatly enhances the detection of possibly dangerous use of a number of other common memory management functions, including memmove(), memcpy(), snprintf(), vsnprintf(), strtok() and strncat(). This feature is enabled by default in the gcc compiler within Ubuntu 24.04 LTS, so that all packages in the Ubuntu archive which are compiled with gcc, as well as any applications compiled with gcc on Ubuntu 24.04 LTS, also receive this additional protection.
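For illustration (a minimal sketch, not from the original post): even a simple fixed-size overflow is caught by the fortified string functions, and level 3 extends the same machinery to cases where the destination size is only known at run time:

```shell
# Build a deliberately broken program with fortification enabled.
cat > fortify_demo.c <<'EOF'
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    /* 15 characters + NUL into an 8-byte buffer: the fortified
     * strcpy() detects this and aborts instead of corrupting memory. */
    strcpy(buf, "AAAAAAAAAAAAAAA");
    puts(buf);
    return 0;
}
EOF
# -U first in case the toolchain already predefines a fortify level.
gcc -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3 fortify_demo.c -o fortify_demo
./fortify_demo || echo "overflow detected and program aborted"
```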
The Armv8-A hardware architecture (used by the "arm64" software architecture on Ubuntu) provides hardware-enforced pointer authentication and branch target identification. Pointer authentication provides the ability to detect malicious stack buffer modifications which aim to redirect pointers stored on the stack to attacker-controlled locations, whilst branch target identification is used to track certain indirect branch instructions and the possible locations which they can target. By tracking such valid locations, the processor can detect possible malicious jump-oriented programming attacks which aim to use existing indirect branches to jump to other gadgets within the code. The gcc compiler supports these features via the -mbranch-protection option. In Ubuntu 24.04 LTS, the dpkg package now enables -mbranch-protection=standard, so that all packages within the Ubuntu archive enable support for these hardware features where available.
AppArmor 4
The aforementioned unprivileged user namespace restrictions are all backed by the AppArmor mandatory access control system. AppArmor allows a system administrator to implement the principle of least authority by defining which resources an application should be granted access to and denying all others. AppArmor consists of a userspace package, which is used to define the security profiles for applications and the system, as well as the AppArmor Linux Security Module within the Linux kernel, which enforces the policies. Ubuntu 24.04 LTS includes the latest AppArmor 4.0 release, providing support for many new features, such as specifying allowed network addresses and ports within the security policy (rather than just high-level protocols) and various conditionals that allow more complex policies to be expressed. An exciting new development provided by AppArmor 4 in Ubuntu 24.04 LTS is the ability to defer access control decisions to a trusted userspace program. This allows quite advanced decision making to be implemented, taking into account the greater context available within userspace, or even interacting with the user or system administrator in real time. For example, the experimental snapd prompting feature takes advantage of this work to allow users to exercise direct control over which files a snap can access within their home directory. Finally, within the kernel, AppArmor has gained the ability to mediate access to user namespaces as well as the io_uring subsystem, both of which have historically provided additional kernel attack surfaces to malicious applications.
Disabling of old TLS versions
The use of cryptography for private communications is the backbone of the modern internet. The Transport Layer Security protocol has provided confidentiality and integrity to internet communications since it was first standardised in 1999 with TLS 1.0. This protocol has undergone various revisions since that time to introduce additional security features and avoid various security issues inherent in the earlier versions of this standard. Given the wide range of TLS versions and options supported by each, modern internet systems will use a process of auto-negotiation to select an appropriate combination of protocol version and parameters when establishing a secure communications link. In Ubuntu 24.04 LTS, TLS 1.0, 1.1 and DTLS 1.0 are all forcefully disabled (for any applications that use the underlying openssl or gnutls libraries) to ensure that users are not exposed to possible TLS downgrade attacks which could expose their sensitive information.
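One way to observe this in practice (a sketch; the host is a placeholder, and -tls1 simply requests the old protocol version from the client side):

```shell
# On Ubuntu 24.04 this handshake attempt fails regardless of what the
# server offers, because TLS 1.0 is disabled in the system TLS libraries:
openssl s_client -connect example.com:443 -tls1 < /dev/null
```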
Upstream Kernel Security Features
Linux kernel v5.15 was used as the basis for the Linux kernel in the previous Ubuntu 22.04 LTS release. This provided a number of kernel security features including core scheduling, kernel stack randomisation and unprivileged BPF restrictions to name a few. Since that time, the upstream Linux kernel community has been busy adding additional kernel security features. Ubuntu 24.04 LTS includes the v6.8 Linux kernel which provides the following additional security features:
Intel shadow stack support
Modern Intel CPUs support an additional hardware feature aimed at preventing certain types of return-oriented programming (ROP) and other attacks that rely on malicious corruption of the call stack. A shadow stack is a hardware-enforced copy of the stack return addresses that cannot be directly modified by ordinary code. When the processor returns from a function call, the return address from the stack is compared against the value from the shadow stack; if the two differ, the process is terminated to prevent a possible ROP attack. Whilst compiler support for this feature has been enabled for userspace packages since Ubuntu 19.10, it could not be utilised until it was also supported by the kernel and the C library. Ubuntu 24.04 LTS includes this additional support for shadow stacks, allowing the feature to be enabled when desired by setting the GLIBC_TUNABLES=glibc.cpu.hwcaps=SHSTK environment variable.
Secure virtualisation with AMD SEV-SNP and Intel TDX
Confidential computing represents a fundamental departure from the traditional threat model, where vulnerabilities in the complex codebase of privileged system software like the operating system, hypervisor, and firmware pose ongoing risks to the confidentiality and integrity of both code and data. Likewise, unauthorised access by a malicious cloud administrator could jeopardise the security of your virtual machine (VM) and its environment. Building on the innovation of Trusted Execution Environments at the silicon level, Ubuntu Confidential VMs aim to restore your control over the security assurances of your VMs.
For the x86 architecture, both AMD and Intel processors provide hardware features (named AMD SEV-SNP and Intel TDX respectively) to support running virtual machines with memory encryption and integrity protection. They ensure that the data contained within the virtual machine is inaccessible to the hypervisor and hence the infrastructure operator. Support for using these features as a guest virtual machine was introduced in the upstream Linux kernel version 5.19.
Thanks to Ubuntu Confidential VMs, a user can make use of compute resources provided by a third party whilst maintaining the integrity and confidentiality of their data through the use of memory encryption and other features. On the public cloud, Ubuntu offers the widest portfolio of confidential VMs, building on both vendors' hardware features, with offerings available across Microsoft Azure, Google Cloud and Amazon AWS.
For enterprise customers seeking to harness confidential computing within their private data centres, a fully enabled software stack is essential. This stack encompasses both the guest side (kernel and OVMF) and the host side (kernel-KVM, QEMU and libvirt). Currently, the host-side patches are not yet upstream. To address this, Canonical and Intel have forged a strategic collaboration to empower Ubuntu customers with an Intel-optimised TDX Ubuntu build. This offering includes all necessary guest and host patches, even those not yet merged upstream, starting with Ubuntu 23.10 and extending into 24.04 and beyond. The complete TDX software stack is accessible through this GitHub repository.
This collaborative effort enables our customers to promptly leverage the security assurances of Intel TDX. It also serves to narrow the gap between silicon innovation and software readiness, a gap that grows as Intel continues to push the boundaries of hardware innovation with 5th Gen Intel Xeon scalable processors and beyond.
Strict compile-time bounds checking
Similar to hardening of binaries within the libraries and applications distributed in Ubuntu, the Linux kernel itself gained enhanced support for detecting possible buffer overflows at compile time via improved bounds checking of the memcpy() family of functions. Within the kernel, the FORTIFY_SOURCE macro enables various checks in memory management functions like memcpy() and memset() by checking that the size of the destination object is large enough to hold the specified amount of memory, and if not will abort the compilation process. This helps to catch various trivial memory management issues, but previously was not able to properly handle more complex cases such as when an object was embedded within a larger object. This is quite a common pattern within the kernel, and so the changes introduced in the upstream 5.18 kernel version to enumerate and fix various such cases greatly improves this feature. Now the compiler is able to detect and enforce stricter checks when performing memory operations on sub-objects to ensure that other object members are not inadvertently overwritten, avoiding an entire class of possible buffer overflow vulnerabilities within the kernel.
Wrapping up
Overall, the vast range of security improvements that have gone into Ubuntu 24.04 LTS greatly improve on the strong foundation provided by previous Ubuntu releases, making it the most secure release to date. Additional features within the kernel, in userspace and across the distribution as a whole combine to address entire vulnerability classes and attack surfaces. With up to 12 years of support, Ubuntu 24.04 LTS provides the best and most secure foundation to develop and deploy Linux services and applications. Expanded Security Maintenance, kernel livepatching and additional services are all provided to Ubuntu Pro subscribers to enhance the security of their Ubuntu deployments.
We hope you've had a good Easter. We've been working hard to improve OSMC for all platforms and keep things running smoothly.
We've also finalised our support for Kodi v21 and this will be the final release of Kodi v20, with test builds for Kodi v21 being made available on the forums in the coming days before an anticipated release in May.
The end of 2023 was also busy for us, with the announcement of Vero V, our fifth iteration of our flagship device. We're happy to announce significant playback improvements to the device in this update.
Vero 4K / 4K + and V users will now experience perfect AV sync playback after several months of hard work.
Vero V users can now enjoy Dolby Vision compatible Profile 5 tonemapping with output to HDR and SDR. If you've ever played content that looks magenta and green, this is because it doesn't have a fallback layer. Vero V will now tonemap this and output it in the best possible format for your display.
This is the first step in our efforts to support UHD content and Dolby Vision content provided by streaming services such as Netflix, which does not have a fallback layer. Many thanks to those who tested this on our forums and reported positive feedback.
Here's what's new:
Kodi v20.5
Kodi v20.5 (Nexus) is now available as standard on OSMC, and release details can be found here.
On the OSMC side, we've made some changes to keep everything running smoothly. Here's what's new:
Bug fixes
Fixed a network issue in My OSMC
Vero 4K / 4K +: fixed an issue where using the 'toothpick' recovery method could render an existing installation unbootable
My OSMC: fixed an issue which could cause a build-up of backup files in the Kodi user data directory
Fixed CEC issues on Vero 4K / 4K +
Fixed a number of issues with the OSMC skin
Improving the user experience
Vero V: added Dolby Vision Profile 5 colourspace conversion support. Previously, playing Profile 5 content (which has no fallback layer) would result in purple and green colours
Improved CPU governor performance on all devices
Improved PTS handling on Vero 4K / 4K + and Vero V
Improved playback performance and synchronisation on Vero 4K/4K+/V
My OSMC: changing settings for updates, backup, or restore no longer requires a reboot to take effect
Vero V: improved Bluetooth range and performance when using A2DP audio
Vero 4K / 4K + / V: improve support for VESA reduced blanking modes used, e.g., by Dell monitors
Miscellaneous
Vero 4K/4K+/V: add support for specific Ortek keyboard
Updated translations
Wrap up
To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC set up. Of course, if you have updates scheduled automatically, you should receive an update notification shortly.
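If you prefer the command line, OSMC is Debian-based, so an update session over SSH typically looks like the following transcript. This is a sketch assuming standard Debian tooling on your device; the My OSMC updater remains the supported route:

```
$ sudo apt-get update
$ sudo apt-get dist-upgrade
```

Using dist-upgrade (rather than upgrade) ensures that packages with changed dependencies are updated correctly.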
If you enjoy OSMC, please follow us on X, like us on Facebook and consider making a donation if you would like to support further development.
You may also wish to check out our Store, which offers a wide variety of high-quality products that will help you get the best out of OSMC.
Vero V is our latest and greatest flagship and the best way to enjoy OSMC. To celebrate the significant milestones in this update, we're offering Vero V at a discount for a limited period of time. Grab yours today.
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To […]
Did you say Idaho Potatoes? Earlier this week, I was at a wedding, talking to a friend. She asked if I'd ever been on a cruise. I have, once, years ago, and I proceeded to tell her the story of a couple from Idaho that we ate dinner with every night on the cruise. […]
Discover how IBM Cloud's bare metal servers offer highly confined, high-performing single-tenant cloud isolation through the use of Ubuntu Core and Snaps, supported by the AMD Pensando Elba DPU (Data Processing Unit). This setup enables the creation of secure and efficient environments for each tenant, and its design ensures the total separation of their servers from the cloud underlay. The architecture delivers consistent performance and enables non-intrusive control from the cloud provider. Learn how this innovative solution can benefit your business and enhance your cloud infrastructure.
Introduction
Public cloud bare-metal servers offer dedicated physical resources, but can present isolation and performance challenges. Isolation requirements involve maintaining full control of compute capabilities by the tenant, while preserving the backend management of its infrastructure by the cloud provider and preventing unauthorised access. Performance requirements entail providing consistent performance even under heavy workloads. Cloud providers face challenges in ensuring physical and logical isolation, resource allocation, monitoring, management, scalability, and security. To address these complex requirements, providers must invest in advanced technologies and implement best practices for resource allocation, monitoring, and management. They also need to regularly review and update infrastructure to meet tenant needs.
In the following discussion, we will explore how IBM Cloud is addressing these challenges by harnessing the distinctive capabilities of Ubuntu Core and Snaps deployed on the AMD Pensando Elba infrastructure accelerators.
IBM Cloud Bare Metal Servers for VPC
IBM has always been dedicated to keeping clients' essential data secure through a strong focus on resilience, performance, and compliance. IBM Cloud executes that focus within highly regulated industries such as finance and insurance. Given IBM Cloud's long-standing commitment to data security, it is unsurprising and essential that Bare Metal Servers for VPC (VPC BM) implements the most rigorous security guarantees to meet customers' expectations.
Bare metal servers, which are physical servers dedicated to a single tenant, offer benefits such as high performance and customizability, but managing them in a multi-tenant environment can be complex. A key requirement is ensuring isolation between the tenant and the cloud backend, both to maintain security and to prevent performance issues caused by noisy neighbours.
VPC BM allows customers to select a preset server profile that best matches their workloads to help accelerate the deployment of compute resources. Customers can achieve maximum performance without oversubscription, with servers deployed in 10 minutes.
VPC BM servers are powered by the latest technology. They are built for cloud-enterprise applications, including VMware and SAP, and can also support HPC and IoT workloads. They come with enhanced high-performance networking at 100 Gbps as well as advanced security features.
A network orchestration layer handles the networking for all bare metal servers within an IBM Cloud VPC across regions and zones. This allows for the management and creation of multiple virtual private clouds in multi-zone regions, and also improves security, reduces latency, and increases availability.
"I selected IBM Cloud VPC because of 5 points that I thought and was proven correct based on my experience using the service. First is security. Secondly is agility. The third is isolation. Fourth is the high performance. Fifth, and last, is the scalability."
Ivo Draginov, CEO, BatchService
AMD Pensando DSC2-200 "Elba"
In use with some of the largest cloud providers and hyperscalers on the planet, the AMD Pensando DSC2-200 has proven itself as the platform of choice for cloud providers seeking to optimise performance, increase scale and introduce new infrastructure services at the speed of software. The DSC2-200 is a full-height, half-length PCIe card powered by the AMD Pensando 2nd generation DPU, "Elba". The DSC2-200 is the ideal platform for cloud providers to implement multi-tenant SDN, stateful security, storage, encryption and telemetry at line rate. The platform's scale architecture allows cloud providers to offer multiple services on the same DPU card.
Developers can create customised data plane services that target 400G throughput, microsecond-level latencies, and scale to tens of millions of flows. The heart of the AMD Pensando platform is a fully programmable P4 data processing unit (DPU). High-level programming languages (P4, C) enable rapid development and deployment of new features and services.
The innovative design of the AMD Pensando DPU provides a secure air gap between tenants' compute instances and the cloud infrastructure, as well as secure isolation between tenants. This separation enables cloud operators to manage their infrastructure functions efficiently and independently of their tenants' workloads, while freeing up valuable compute resources from infrastructure tasks and fully dedicating them to revenue-generating business applications. The exceptional throughput and performance of the Elba DSC2-200, along with its strong alignment with IBM's security expectations, made it a top choice for inclusion in IBM Cloud's bare metal servers for VPC. This combination of features enables IBM Cloud to provide highly secure and powerful environments for its customers.
Achieving IBM Cloud's target outcomes with Ubuntu Core and Snaps
The first goal was to implement a secure and reliable operating system that IBM Cloud development teams could use to launch their management interface and functionality on the AMD Pensando DPU cards. Initially IBM Cloud selected Ubuntu Server as the operating system. They were familiar with it and could easily develop on top of it using the familiar Linux toolset and API.
To develop software running on the AMD Pensando DPU cards, the development kit provides a complete container-based development environment. It allows for the development of data plane, management plane, and control plane functions. To perform correctly, these containers must be allowed direct communication with the card's hardware components, with fine-grained isolation. Traditional container runtimes such as Docker and Kubernetes alone cannot meet the unique requirements of this solution. Fortunately, Snap packages provide this access through secure and controlled interfaces to the operating system.
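To illustrate how such controlled access is declared, here is a hypothetical snapcraft.yaml fragment for a strictly confined management-plane snap. The snap name, app name and chosen interfaces are illustrative only; the interfaces shown (network-control, hardware-observe, raw-usb) are standard snapd interfaces, not the specific ones IBM Cloud uses:

```yaml
# Hypothetical snapcraft.yaml fragment; names are illustrative.
name: dpu-mgmt-agent
base: core22
confinement: strict
grade: stable

apps:
  agent:
    command: bin/agent
    daemon: simple
    plugs:
      - network-control     # manage the card's network interfaces
      - hardware-observe    # enumerate hardware components
      - raw-usb             # example of direct device access
```

Each plug grants one narrow, auditable capability, which is how a confined snap can reach hardware without the broad privileges a generic container would need.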
Using Snap packages, IBM Cloud developers were able to implement all the functionalities they needed in record time. This positive experience made them turn their attention to Ubuntu Core, the version of Ubuntu specifically designed for embedded systems such as AMD Pensando DPU cards. It is entirely made up of Snap packages, creating a confined, immutable and transaction-based system. Communication among containers and between containers and the operating system is locked down under full control. In addition, Ubuntu Core provides full disk encryption and secure boot, achieving additional mandatory security compliance objectives.
IBM Cloud successfully converted their bespoke AMD Pensando system image from Ubuntu Server to Ubuntu Core and, after positive results in the pre-production tests, proceeded to deploy it in production to support Bare Metal Servers on VPC.
Conclusion
In summary, Canonical's Ubuntu Core and IBM Cloud's components, when packaged as Snaps, provide a unique solution that effectively addresses the challenges faced by the company. This innovative approach has enabled IBM Cloud to enhance its offerings and deliver improved performance, security, and tenant isolation. The development of the solution was completed in under a year, and it has been operating successfully in production since then. The implementation has been a resounding success. Ultimately, addressing these challenges provided IBM Cloud with several advantages, including differentiation, cost savings, and improved efficiency.
The collaboration between IBM Cloud, Canonical, and AMD Pensando remains ongoing, with plans to expand the use of Ubuntu Core and Snaps to support other non-bare metal offerings, including Virtual Server for VPC. A key medium-term goal is to achieve FedRAMP compliance, which involves upgrading to Ubuntu Core 22 and ensuring FIPS compliance at the kernel and filesystem levels. This ongoing partnership and development aim to enhance the security, performance, and functionality of IBM Cloud's solutions.
At Canonical, we're committed to open-source principles and fostering collaboration. Over the last 20 years, Ubuntu's brand has become a leader in open source, with an open operating system. Our community shapes Ubuntu's journey, and we recognise room for improvement in how we collaborate, particularly in design at Canonical. Despite most of our development being open source, our design processes often lack transparency, particularly in visuals, user interaction, and research.
We are excited to announce that we kickstarted a working group within the Design team with a mission to empower external designers to contribute to open-source projects. Our focus is on building resources that bridge the gap between designers and open-source project maintainers, making it easier for designers to dive into projects and for maintainers to receive valuable design contributions and feedback.
Before we figure out how to support you, we're checking out ongoing Open Design initiatives and understanding the needs, motivations, and interests of designers and project maintainers. We're learning tons along the way and prioritising ideas on how to move forward!
As we kick things off, your input would be invaluable in shaping our efforts. Therefore, we are inviting open source maintainers and designers to participate in this survey. Your input will provide valuable insights and help us ensure we're on the right track.
DISA, the Defense Information Systems Agency, has published their Security Technical Implementation Guide (STIG) for Ubuntu 22.04 LTS. The STIG is free for the public to download from the DOD Cyber Exchange. Canonical has been working with DISA since we published Ubuntu 22.04 LTS to draft this STIG, and we are delighted that it is now finalised and available for everyone to use.
We are now developing the Ubuntu Security Guide profile with a target release in summer 2024.
What is a STIG?
A STIG is a set of guidelines for how to configure an application or system in order to harden it. Hardening means reducing the system's attack surface: removing unnecessary software packages, locking down default values to the tightest possible settings and configuring the system to run only what you explicitly require. System hardening guidelines also seek to lessen collateral damage in the event of a compromise.
STIGs are intended to be applied with judgement and common sense. Each mission or deployment is going to be different: where a piece of guidance doesn't make sense for your specific needs, you can choose your own path forward whilst keeping the overall intentions of the STIG in mind.
The STIGs have been primarily developed for use within the US Department of Defense. However, because they are based on universally-recognised security principles, they can be used by anyone who wants a robust system hardening framework. As a result, STIGs are being more widely adopted across the US government and numerous industries, such as financial services and online gaming.
When will Canonical publish a DISA-STIG USG profile?
The STIG that DISA has published is primarily composed of a manual XCCDF XML document that describes in human-readable words how to configure Ubuntu 22.04 LTS. This XML file contains nearly 200 individual pieces of guidance, which can be quite a daunting prospect to tackle from scratch. To simplify this process, Canonical produces the Ubuntu Security Guide (USG), an automation tool that handles both the checking and remediation of the STIG rules. USG is available as part of Ubuntu Pro, and can be enabled through the Pro client.
Our engineering team is currently working through the XCCDF document and codifying the rules into a new profile for USG. We will publish the STIG profile for USG in the coming months, with a target release in summer 2024, and will make an announcement at that time.
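Once the profile ships, auditing and remediation with USG should follow the same pattern as its existing profiles. The transcript below is a sketch: the profile name disa_stig is an assumption until the release is announced, and the pro/usg commands shown are the standard way USG is enabled and run on Ubuntu Pro machines:

```
$ sudo pro enable usg        # enable the Ubuntu Security Guide service
$ sudo apt install usg       # install the USG tool
$ sudo usg audit disa_stig   # check compliance and produce a report
$ sudo usg fix disa_stig     # apply the remediations
```

Auditing is non-destructive; fix changes system configuration, so review the audit report first.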
Conclusion
The STIG for Ubuntu 22.04 LTS will allow any users or administrators to harden their systems in accordance with this rigorous standard. Doing this by hand is a time-consuming proposition, so we recommend waiting until automated tooling is available to speed up the hardening and auditing process; the USG profile is in active development and will be published as soon as it's ready.
Update disable_sudo_use_pty, negate it explicitly, not just comment it. This should avoid distortion of gpm with jfbterm. Thanks to ottokang for reporting this issue.
MLflow is an open source platform, used for managing machine learning workflows. It was launched back in 2018 and has grown in popularity ever since, reaching 10 million users in November 2022. AI enthusiasts and professionals have struggled with experiment tracking, model management and code reproducibility, so when MLflow was launched, it addressed pressing problems in the market. MLflow is lightweight and able to run on an average-priced machine. But it also integrates with more complex tools, so it's ideal to run AI at scale.
A short history
Since MLflow was first released in June 2018, the community behind it has run a recurring survey to better understand user needs and ensure the roadmap addresses real-life challenges. About a year after the launch, MLflow 1.0 was released, introducing features such as improved metric visualisations, metric X coordinates, improved search functionality and HDFS support. Additionally, it offered Python, Java, R, and REST API stability.
MLflow 2.0 landed in November 2022, when the product also celebrated 10 million users. This version incorporates extensive community feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as improved integrations with the ML ecosystem, a revamped MLflow Tracking UI, a refresh of core APIs across MLflow's platform components, and much more.
In September 2023, Canonical released Charmed MLflow, a distribution of the upstream project.
Why use MLflow?
MLflow is often considered the most popular ML platform. It enables users to perform different activities, including:
Reproducing results: ML projects usually start with simple plans but tend to balloon, resulting in an overwhelming number of experiments. Manual or non-automated tracking implies a high chance of missing finer details. ML pipelines are fragile, and even a single missing element can throw off the results. The inability to reproduce results and code is one of the top challenges for ML teams.
Easy to get started: MLflow can be easily deployed and does not require heavy hardware to run. It is suitable for beginners who are looking for a solution to better see and manage their models. For example, this video shows how Charmed MLflow can be installed in less than 5 minutes.
Environment agnostic: The flexibility of MLflow across libraries and languages is possible because it can be accessed through a REST API and Command Line Interface (CLI). Python, R, and Java APIs are also available for convenience.
Integrations: While MLflow is popular in itself, it does not work in a silo. It integrates seamlessly with leading open source tools and frameworks such as Spark, Kubeflow, PyTorch or TensorFlow.
Works anywhere: MLflow runs on any environment, including hybrid or multi-cloud scenarios, and on any Kubernetes.
MLflow is an end-to-end platform for managing the machine learning lifecycle. It has four primary components:
MLflow Tracking
MLflow Tracking enables you to track experiments, with the primary goal of comparing results and the parameters used. It is crucial when it comes to measuring performance, as well as reproducing results. Tracked parameters include metrics, hyperparameters, features and other artefacts that can be stored on local systems or remote servers.
MLflow Models
MLflow Models provide professionals with different formats for packaging their models. This gives flexibility in where models can be used, as well as the format in which they will be consumed. It encourages portability across platforms and simplifies the management of machine learning models.
MLflow Projects
Machine learning projects are packaged using MLflow Projects, which ensures reusability, reproducibility and portability. A project is a directory that gives structure to the ML initiative. It contains the descriptor file used to define the project structure and all its dependencies. The more complex a project is, the more dependencies it has, and dependencies bring risks around version compatibility and upgrades.
MLflow Projects is especially useful when running ML at scale, where there are larger teams and multiple models being built at the same time. It enables collaboration between team members who want to work jointly on a project, or to transfer knowledge between them or to production environments.
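The descriptor file mentioned above is called MLproject and lives at the directory root. A minimal hypothetical example (the project name, entry point and parameter are invented for illustration):

```yaml
# MLproject (hypothetical example)
name: demo_project
conda_env: conda.yaml        # declares the project's dependencies

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    command: "python train.py --alpha {alpha}"
```

With this in place, anyone can reproduce the run with the same entry point, parameters and pinned dependencies, regardless of their local environment.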
MLflow Model Registry
Model Registry gives you a centralised place where ML models are stored. It simplifies model management throughout the full lifecycle, including how a model transitions between different stages. It includes capabilities such as versioning and annotating, and provides APIs and a UI.
Key concepts of MLflow
MLflow is built around two key concepts: runs and experiments.
In MLflow, each execution of your ML model code is referred to as a run. All runs are associated with an experiment.
An MLflow experiment is the primary unit of organisation for MLflow runs. It influences how runs are organised, accessed and maintained. An experiment has multiple runs, and it enables you to efficiently go through those runs and perform activities such as visualisation, search and comparisons. In addition, experiments let you export run artefacts and metadata for analysis in other tools.
Kubeflow vs MLflow
Both Kubeflow and MLflow are open source solutions designed for the machine learning landscape. They have received massive support from industry leaders, and are driven by thriving communities whose contributions are making a difference in the development of the projects. The main purpose of both Kubeflow and MLflow is to create a collaborative environment for data scientists and machine learning engineers, and to enable teams to develop and deploy machine learning models in a scalable, portable and reproducible manner.
However, comparing Kubeflow and MLflow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects evolved over time and now have overlapping features, but most importantly, they have different strengths. On the one hand, Kubeflow is proficient when it comes to machine learning workflow automation using pipelines, as well as model development. On the other hand, MLflow is great for experiment tracking and model registry. From a user perspective, MLflow requires fewer resources and is easier for beginners to deploy and use, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects.
Charmed MLflow is Canonical's distribution of the upstream project. It is part of Canonical's growing MLOps portfolio. It has all the features of the upstream project, to which we add enterprise-grade capabilities such as:
Simplified deployment: the time to deployment is less than 5 minutes, enabling users to also upgrade their tools seamlessly.
Simplified upgrades using our guides.
Automated security scanning: The bundle is scanned at a regular cadence.
Security patching: Charmed MLflow follows Canonicalās process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project, and the risk of exploitation.
Maintained images: All Charmed MLflow images are actively maintained.
Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.
Clouds, be they private or public, surprisingly remain one of the most DIY-favouring markets. Perhaps due to the nebulous and increasingly powerful technologies, a series of myths, or even unnecessary egos, the majority of non-tech-centric enterprises (meaning, companies whose primary business scope rests outside the realm of IT software and hardware) still try to build and nurture in-house cloud management teams, without considering outsourcing even part of their workload. Self-management has its advantages; however, thinking it's the only option is a mistake. Reading this you may think: "managed cloud services are for lazy people, I can do it myself." And the truth is, you indeed can. But should you?
Cloud operations
Let's be honest: building a cloud is no easy feat. It is not for beginners, and involves a large series of considerations: is it large enough? Secure enough? Efficient enough? Does it justify the cost? So having made your way through this maze of questions and having finally concluded that you want to move towards a cloud deployment, the last thing you need is another set of considerations for operating it.
Operations can be a vague term. In the tech/cloud field, it defines the entire range of actions and activities required to keep any cloud infrastructure running consistently, reliably, and efficiently. Briefly, good operations make sure your cloud does what it's supposed to do most of the time and does not significantly disrupt your business processes when errors happen. While different from cloud to cloud, most operations can be classified into three categories:
Monitoring: constant measurements of key metrics against a predefined schema to ensure functionality
Management: tweaks and changes to the infrastructure, such as upgrades, patches, and scaling, to ensure reliability
Troubleshooting: a system of protocols and procedures that keeps your workloads safe and ensures minimum data loss when incidents happen
This may sound complicated and complex, and in many ways it is. As an industry rule of thumb, for every 100 nodes of any cloud deployment, you will require at least one expert to ensure that proper operations are in place. This is very important because improper operations can cause significant disruption to your entire business, from inaccurate data to major errors in processes and performance. Briefly put, cloud operations cannot be neglected.
The cost of self-managed clouds
Regardless of how big or small, simple or complex your infrastructure is, there is a range of costs that you are likely to incur when it comes to operating your cloud. These can be:
Direct: costs directly associated with the deployment and operation of your cloud, such as hardware purchases and maintenance, software licences, service subscriptions and more. They are relatively predictable and will allow you to budget quite easily ahead of time, but do allow a margin of +/- 10% when estimating, as the integration of components within the wider infrastructure can sometimes incur additional service costs.
Indirect: when it comes to indirect costs, the definition's boundaries become more blurry. In general, an indirect cost is any cost that, when neglected or denied, significantly reduces the reliability, efficiency, or even mere availability of your cloud. For example, IT headcount is a significant indirect cost: it will cost you money to hire, train, retain, and grow a team of experts to manage your infrastructure, and these costs will only be augmented by the ongoing skill gap the market is currently experiencing. The opportunity costs of having people work on operations rather than innovation can range from negligible to severe, as time-to-market is an essential component for maintaining a competitive edge in any industry.
Indirect costs are highly unpredictable and involve a significant level of corporate responsibility should you choose to do everything yourself. Suppose you've hired your team and trained them: at any point, engineers can leave, or require additional training; sometimes their talent will be needed to sustain other technical feats within your business; and sometimes things can simply go slower than expected. It's not impossible to navigate these indirect costs. Just note that while this has some advantages (like full independence and more freedom to allocate resources), it carries increased risks of financial losses and slower time to market.
In light of these costs, a general observation (or unwritten market consensus) is that tech-centric companies will likely be able to self-manage their clouds successfully. Non-tech-centric companies are likely to encounter a point where managed cloud services would present a more feasible and competitive opportunity.
When to opt for managed cloud services
Before discussing when to opt for managed cloud services, let's take a moment to clarify what they entail. Opting for managed cloud services involves outsourcing your cloud infrastructure operations to an external expert, also known as a Managed Service Provider (MSP). You'll ideally be able to relinquish all your operational concerns (along with responsibility for the efficiency of your operations) to the MSP, and focus on innovation or whatever else really matters for you.
There is a pervasive myth that managed cloud services are only a useful option when your company finds itself unable to manage anything by itself, or when you simply don't have an IT team. Nothing could be further from the truth. There are several situations where choosing a managed cloud service provider can prove both helpful and lucrative:
Vertical growth: when you want to expand into a new territory, it is unlikely that you will have access to well-established senior expertise within your IT team. Such expertise can be expensive to acquire, and will need plenty of time to adjust to your company's values and processes. Choosing an MSP to support you and enable you to grow vertically as soon as you want can help you accelerate your time to market and cut talent acquisition costs.
Re-focus: you probably already have an IT team, and you are probably very happy with it. But when it comes to their bandwidth, you may want to have them focus on sustaining technological innovation for your competitive advantage, rather than spending most of their time keeping the lights on in your cloud infrastructure. A managed cloud service will offer your team enough headspace to concentrate on your primary business scope.
Cost predictability: faced with a new project, it is wise to estimate your costs. But cloud infrastructure, as mentioned above, can incur a lot of unexpected costs, especially when it comes to covering a skill gap and mitigating lost opportunities. A managed service provider should offer a stable and predictable price (usually per node per year), which can give you full control over your budgets and allow you to allocate resources more efficiently.
When venturing into unfamiliar territory, opting for managed services is advisable, especially for non-tech-centric enterprises. Cloud infrastructure operations is a perfect example of such a case: a highly complex and resource-intensive set of processes that is essential to your business success, but detrimental to your costs if improperly self-managed. For any non-tech-centric enterprise looking to enter, expand, or upgrade their open-source cloud infrastructure, managed cloud services are an attractive opportunity that offers countless advantages and can help you retain (or even augment) your competitive edge.
Recently, OS2ATC, an annual technology event in the field of open source operating systems, was held in Beijing. Many industry technologists and experts gathered to share the latest technical achievements and innovative ideas in the fields of AI and systems, hardware, kernel, RISC-V architecture, ARM architecture, Longxin architecture, programming technology, Rust, intelligent vehicles and robotics, AIoT, cloud native, virtualisation, and so on, focusing on the topic of "Open Source Intelligence". The Open Source Operating System Annual Technical Conference (OS2ATC) has been held for ten consecutive years, and has been effective in promoting the development of OS-related teaching, research and industry.
Hello, Community! We haven't had clear guidelines on when and how to create suitable tasks in our Phorge development portal for a long time. As the community grows, we need such guidelines more and more to ensure that tasks get resolved. We are also changing our policy of never closing tasks for inactivity, although we will still close them only under particular and limited circumstances.
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To [ā¦]
At Hannover Messe, the world's leading industrial trade fair, companies from the mechanical engineering, electrical engineering, and digital industries, as well as the energy sector, come together to present solutions for a high-performance yet sustainable industry. This year, Qualcomm brought its DX Summit to Hannover Messe, bringing together business and technology leaders to discuss the digital transformation solutions and experiences that are moving enterprises forward today, from manufacturing to logistics, transportation, energy, and more.
Canonical will join the Qualcomm DX Summit at Hannover Messe on April 23rd, 2024, where industry experts will delve into the cutting-edge technologies that are driving Industry 4.0 forward. We're looking forward to meeting our partners and customers on-site to discuss the latest in open-source innovation and edge AI solutions. Fill in the form to get a free ticket for the Qualcomm DX Summit and Hannover Messe from Canonical.
Secure and scale your smart edge AI deployments with Ubuntu
During the event, Canonical will present a talk using a real-world case study to showcase our joint offering with Qualcomm and illustrate how Canonical solutions help enterprise IoT customers bring digital transformation and AI to their latest IoT projects.
Presenter: Aniket Ponkshe, Director of Silicon Alliances, Canonical
Date and time: 2:20 pm to 2:40 pm, April 23rd, 2024
April 15th, 2024: in the Top 10 Most Secure Mobile Phones to Buy in 2024, the cybersecurity-focused firm Efani ranked the Purism Librem 5 as the #1 most secure phone for the year 2024. From their article: Factors to Consider When Evaluating Smartphone Security. When assessing the security of smartphones, several crucial aspects must […]
Ubuntu Budgie 24.04 LTS (Noble Numbat) is a Long Term Support release with 3 years of support by your distro maintainers, from April 2024 to May 2027. These release notes showcase the key takeaways for 22.04 upgraders moving to 24.04. The areas covered in these release notes are: Quarter & half tiling is pretty much self-explanatory. Dragging a window to the…
March 2024 was another eventful month for vulnerabilities and cybersecurity in general. It was the second consecutive month of lapsed Common Vulnerability Exposure (CVE) enrichment, putting defenders in a precarious position with reduced risk visibility. The Linux kernel continued its elevated pace of vulnerability disclosures and was commissioned as a new CVE Numbering Authority (CNA). In addition, several critical vulnerabilities were added to CISA's Known Exploited Vulnerabilities (KEV) list, including vulnerabilities in Microsoft Windows, Fortinet FortiClientEMS, all the major browsers, and software from the enterprise continuous integration and delivery vendor JetBrains.
Here's a quick review of March 2024's most impactful cybersecurity events.
The NIST NVD Disruption
NIST's National Vulnerability Database (NVD) team largely abandoned CVE enrichment in February 2024 with no warning. NIST NVD slowed to a CVE enrichment rate of just over 5% during March, and it became obvious that the abrupt halt was not just a short-term outage. Disruption of CVE enrichment puts cybersecurity operations around the world at a significant disadvantage because the NVD is the largest centralized repository of vulnerability severity information. Without severity enrichment, cybersecurity admins are left with very little information for vulnerability prioritization and risk-management decision making.
Experts in the cybersecurity community traded public speculation until the VulnCon & Annual CNA Summit, where NIST's Tanya Brewer announced that the non-regulatory US government agency would relinquish some aspects of NVD management to an industry consortium. Brewer did not explain the exact cause of the outage, but forecasted several additional goals for NIST NVD moving forward:
Allowing more outside parties to submit enrichment data
Improving the NVDās software identification capabilities
Adding new types of threat intelligence data such as EPSS and the NIST Bugs Framework
Improving the NVD dataās usability and supporting new use cases
Automating some aspects of CVE analysis
Plenty Going On "In The Linux Kernel"
A total of 259 CVEs were disclosed in March 2024 with a description that began with "In the Linux kernel", marking the second most active month ever for Linux vulnerability disclosures. The all-time record was set one month prior, in February, with a total of 279 CVEs issued. March also marked a new milestone for kernel.org, the maintainer of the Linux kernel, as it was inducted as a CVE Numbering Authority (CNA). Kernel.org will now assume the role of assigning and enriching CVEs that impact the Linux kernel. Going forward, kernel.org asserts that CVEs will only be issued for discovered vulnerabilities after a fix is available, and only for versions of the Linux kernel that are actively supported.
Multiple High Severity Vulnerabilities In Fortinet Products
Several high severity vulnerabilities in Fortinet FortiOS and FortiClientEMS were disclosed. Of these, CVE-2023-48788 has been added to CISA's KEV database. The risk imposed by CVE-2023-48788 is further compounded by the existence of a publicly available proof-of-concept (PoC) exploit. While CVE-2023-48788 is notably an SQL injection [CWE-89] vulnerability, it can be exploited in tandem with the xp_cmdshell function of Microsoft SQL Server for remote code execution (RCE). Although xp_cmdshell is not enabled by default, researchers have shown that it can be enabled via the SQL injection weakness.
Greenbone has a network vulnerability test (NVT) that can identify systems affected by CVE-2023-48788, local security checks (LSCs) [1][2] that can identify systems affected by CVE-2023-42790 and CVE-2023-42789, and another LSC to identify systems affected by CVE-2023-36554. A proof-of-concept exploit for CVE-2023-3655 has been posted to GitHub.
CVE-2023-48788 (CVSS 9.8 Critical): A SQL Injection vulnerability allowing an attacker to execute unauthorized code or commands via specially crafted packets in Fortinet FortiClientEMS version 7.2.0 through 7.2.2.
CVE-2023-42789 (CVSS 9.8 Critical): An out-of-bounds write in Fortinet FortiOS allows an attacker to execute unauthorized code or commands via specially crafted HTTP requests. Affected products include FortiOS 7.4.0 through 7.4.1, 7.2.0 through 7.2.5, 7.0.0 through 7.0.12, 6.4.0 through 6.4.14, 6.2.0 through 6.2.15, FortiProxy 7.4.0, 7.2.0 through 7.2.6, 7.0.0 through 7.0.12, 2.0.0 through 2.0.13.
CVE-2023-42790 (CVSS 8.1 High): A stack-based buffer overflow in Fortinet FortiOS allows an attacker to execute unauthorized code or commands via specially crafted HTTP requests. Affected products include FortiOS 7.4.0 through 7.4.1, 7.2.0 through 7.2.5, 7.0.0 through 7.0.12, 6.4.0 through 6.4.14, 6.2.0 through 6.2.15, FortiProxy 7.4.0, 7.2.0 through 7.2.6, 7.0.0 through 7.0.12, 2.0.0 through 2.0.13.
CVE-2023-36554 (CVSS 9.8 Critical): FortiManager is prone to an improper access control vulnerability in backup and restore features that can allow attackers to execute unauthorized code or commands via specially crafted HTTP requests. Affected products are FortiManager version 7.4.0, version 7.2.0 through 7.2.3, version 7.0.0 through 7.0.10, version 6.4.0 through 6.4.13 and 6.2, all versions.
Zero Days In All Major Browsers
Pwn2Own, an exciting hacking competition, took place at the CanSecWest security conference on March 20th to 22nd. At this year's event, 29 distinct zero-days were discovered and over one million dollars in prize money was awarded to security researchers. Independent entrant Manfred Paul earned a total of $202,500, including $100,000 for two zero-day sandbox escape vulnerabilities in Mozilla Firefox. Mozilla quickly issued updates to Firefox with version 124.0.1.
Manfred Paul also achieved remote code execution (RCE) in Apple's Safari by combining Pointer Authentication Code (PAC) [D3-PAN] bypass and integer underflow [CWE-191] zero-days. PACs in Apple's operating systems are cryptographic signatures for verifying the integrity of pointers to prevent the exploitation of memory corruption bugs. PAC has been bypassed before for RCE in Safari. Manfred defeated Google Chrome and Microsoft Edge via an Improper Validation of Specified Quantity in Input [CWE-1284] vulnerability to complete the browser exploit trifecta.
The fact that all major browsers were breached underscores the high risk of visiting untrusted Internet sites and the overall lack of security provided by major browser vendors. Greenbone includes tests to identify vulnerable versions of Firefox and Chrome.
CVE-2024-29943 (CVSS 10 Critical): An attacker was able to exploit Firefox via an out-of-bounds read or write on a JavaScript object by fooling range-based bounds check elimination. This vulnerability affects versions of Firefox before 124.0.1.
CVE-2024-29944 (CVSS 10 Critical): Firefox incorrectly handled Message Manager listeners allowing an attacker to inject an event handler into a privileged object to execute arbitrary code.
CVE-2024-2887 (High Severity): A type confusion [CWE-843] vulnerability in the Chromium browserās implementation of WebAssembly (Wasm).
New Actively Exploited Microsoft Vulnerabilities
Microsoft's March 2024 security advisory included a total of 61 vulnerabilities impacting many products. The Windows kernel had the most CVEs disclosed, with a total of eight, five of which are rated high severity. Microsoft WDAC OLE DB provider for SQL, Windows ODBC Driver, SQL Server, and Microsoft WDAC ODBC Driver combined to account for ten high severity CVEs. There are no workarounds for any vulnerabilities in this group, meaning that updates must be applied to all affected products. Greenbone includes vulnerability tests to detect the newly disclosed vulnerabilities from Microsoft's March 2024 security advisory.
Microsoft has so far tagged six of its new March 2024 vulnerabilities as "Exploitation More Likely", while two vulnerabilities affecting Microsoft products were added to the CISA KEV list: CVE-2023-29360 (CVSS 8.4 High), affecting Microsoft Streaming Service and published in 2023, and CVE-2024-21338 (CVSS 7.8 High) were assigned actively exploited status in March.
CVE-2024-27198: Critical Severity CVE In JetBrains TeamCity
TeamCity is a popular continuous integration and continuous delivery (CI/CD) server developed by JetBrains, the same company behind other widely-used development tools like IntelliJ IDEA, the leading Kotlin Integrated Development Environment (IDE), and PyCharm, an IDE for Python. TeamCity is designed to help software development teams automate and streamline their build, test, and deployment processes and competes with other CI/CD platforms such as Jenkins, GitLab CI/CD, Travis CI, and Azure DevOps, among others. TeamCity is estimated to hold almost 6% of the total continuous integration and delivery market share and ranks third overall, while according to JetBrains, over 15.9 million developers use their products, including 90 of the Fortune Global Top 100 companies.
Given JetBrains' market position, a critical severity vulnerability in one of their products will quickly attract the attention of threat actors. Within three days of CVE-2024-27198 being published, it was added to the CISA KEV catalog. The Greenbone Enterprise vulnerability feed includes tests to identify affected products, including a version check and an active check that sends a crafted HTTP GET request and analyzes the response.
When combined, CVE-2024-27198 (CVSS 9.8 Critical) and CVE-2024-27199 allow an attacker to bypass authentication using an alternative path or channel [CWE-288] to read protected files including those outside of the restricted directory [CWE-23] and perform limited admin actions.
Summary
March 2024 was another fever-pitched month for software vulnerabilities due to the NIST NVD outage and active exploitation of several vulnerabilities in enterprise and consumer software products. On the bright side, several zero-day vulnerabilities impacting all major browsers were identified and patched.
However, the fact that a single researcher was able to so quickly exploit all major browsers is a serious wake-up call for all organizations, since the browser plays such a fundamental role in modern enterprise operations. Vulnerability management remains a core element of cybersecurity strategy, and regularly scanning IT infrastructure for vulnerabilities ensures that the latest threats can be identified for remediation, closing the gaps that attackers seek to exploit for access to critical systems and data.
Have you ever faced the challenge of ensuring certain user properties, like usernames or email addresses, remain off-limits for future accounts after deleting a user? The new blocklist feature in Univention Corporate Server Version 5.0-6-erratum-974 is the solution. This article takes a closer look at UDM blocklists.
A Quick Look at the Basics
Blocklists are an essential tool for administrators, enabling them to proactively prevent the reuse of user or group properties. Imagine keeping previously used values like email addresses or usernames locked for a set duration. This function becomes a cornerstone in larger UCS environments, where the cycle of creating and deleting accounts is a regular affair.
So, what exactly are user or group properties? We're talking about crucial details such as the username (username), first and last names (firstname, lastname), the password (password), and, importantly, the primary email address of a user account (mailPrimaryAddress), along with the email address associated with a group (mailAddress).
You can place any of these properties on one or more blocklists to prevent their reuse. Picture this scenario: in your organization, there's an employee named Anna Alster with the email a.alster@organisation.de. When Anna leaves the company, her email address, along with her user account, is deleted. Fast forward a few weeks, and a new colleague, Anita Alster, joins the team. According to company policy, she's assigned the same email address: a.alster@organisation.de. This could lead to an uncomfortable situation where Anita might access Anna's "old" emails.
With the introduction of the new blocklists in the Univention Directory Manager (UDM), you can avert such scenarios with ease. Administrators have the power to specify in advance which properties are off-limits for reuse and for how long. Once set, the system seamlessly handles the rest.
This article presents the new feature in detail, guiding you through the steps to create, edit, and delete these blocklists. Whether you prefer the intuitive Univention Management Console (UMC) or the command-line agility of the udm tool, managing these lists is straightforward and efficient.
How to activate Blocklists and configure the Cron Job
To use the new blocklists, start by updating all UCS systems where you manage UDM objects. It's crucial to have the latest UCS version, 5.0-6-erratum-974, running on all your machines. Don't forget to install any available package updates for each computer, too. Conveniently, both of these tasks can be effortlessly completed through the Software Update module in the Univention Management Console.
Next, edit the necessary UCR variable. Navigate to the System / Univention Configuration Registry module and look for the directory/manager/blocklist/enabled entry. Change this variable to true and then save your changes.
After activating the blocklists, the next step is to set a duration for each. This duration determines how long each block remains effective. Once the specified period expires, the system automatically clears the entries from the blocklist. This removal process is managed by a script, triggered by a cron job every morning at 8 a.m. If you need to adjust this timing, simply edit the UCR variable directory/manager/blocklist/cleanup/cron and input the desired time in crontab syntax in the Value field.
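For admins who prefer the shell, the same two settings can also be made with the ucr command-line tool. This is a sketch; the 6:00 a.m. schedule is just an example value:

```shell
# Sketch: enable blocklists and reschedule the cleanup job via ucr
# (run as root on the UCS system; the cron schedule is an example value)
ucr set directory/manager/blocklist/enabled='true'
ucr set directory/manager/blocklist/cleanup/cron='0 6 * * *'   # run daily at 6:00 a.m.
```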
The next two sections will guide you through configuring the blocklists yourself. We'll cover two methods: once via the Univention Management Console and once on the command line.
Configuring Blocklists via UMC
To manage your blocklists, start by accessing the Domain / Blocklists module. This is your hub for creating new blocklists, as well as editing or deleting existing ones. To initiate a new list, simply click on Add. For this new blocklist, you'll need to make some key entries:
Name: Choose an easily identifiable name for your blocklist. A descriptive, unique name is best, especially if you'll be managing multiple blocklists.
Retention time for objects in this blocklist: In this field, specify the length of time the block should remain in effect. This duration is critical; once it's surpassed, the blocklist entries will be automatically deleted. Use time units like y (years), m (months), and d (days) to define this period. For example, entering 2y3m1d sets the blocklist to stay active for 2 years, 3 months, and 1 day.
In the Properties to block section, your task is to specify which properties need to be locked from reuse. This is where you identify the UDM modules and their corresponding properties. For instance, if you aim to block the reuse of primary email addresses for user accounts, simply enter users/user in the UDM module field and mailPrimaryAddress as the property.
If you need to block additional properties, simply click the plus sign located just below the input fields. This allows you to add more modules and their respective properties to the same blocklist. For example, to block an email address used by a group, add groups/group as the module and mailAddress as the property.
Once you've configured the blocklist to your needs, click Save to finalize your changes. Remember, the Domain / Blocklists module in UMC isn't just for creating new lists. You can return to this module anytime to make adjustments or delete existing blocklists.
Configuring Blocklists via Command Line
For those who prefer working outside the web interface, the Univention Directory Manager (UDM) offers a powerful command-line alternative for managing blocklists. Known as univention-directory-manager, or simply udm, this tool requires root privileges for operation. One of the key advantages here is that both the UMC modules and UDM provide access to the same domain administration modules. This means you get the same functionality through the command line as you would in the web interface. To explore the range of capabilities and options available, just type udm --help. This command brings up a comprehensive list of all supported parameters and options.
When managing blocklists via the command line, use the command udm blocklists/list along with its subcommands to efficiently handle different tasks. These subcommands include:
create: creates a new blocklist.
modify: makes changes to an existing blocklist.
remove: deletes a blocklist.
list: views all the blocklists that currently exist.
To create a new blocklist that excludes a username from reuse for one year, you'll need to define several parameters in your udm blocklists/list command. Start with a name for the list using --set name=, followed by the time period for the block with --set retentionTime=, and then specify the UDM module and property with --append blockingProperties=. Enclose any expressions with spaces and special characters in double quotation marks. The complete udm command to achieve this would look as follows:
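As a minimal sketch of such a command: the list name is an illustrative example, and the exact format of the blockingProperties value (UDM module followed by the property) is an assumption based on the description above:

```shell
# Sketch: block usernames from reuse for one year
# ("block-usernames" and the value format are illustrative assumptions)
udm blocklists/list create \
  --set name="block-usernames" \
  --set retentionTime="1y" \
  --append blockingProperties="users/user username"
```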
When you list the existing blocklists, you'll see not only this newly created list but also all entries that have been made through the Univention Management Console.
To delete a blocklist on the command line, use the remove command with the --filter name= parameter and enter the list's name:
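A sketch of such an invocation; the list name here is an illustrative example:

```shell
# Sketch: remove a blocklist by name (the name is an example value)
udm blocklists/list remove --filter name="block-usernames"
```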
Keep in mind that if the list name contains special characters or spaces, it's important to enclose it in double quotation marks.
Test Run: User Name Reuse Strictly Prohibited!
If you attempt to assign a user property that's currently on a blocklist, the system will promptly notify you. The image below illustrates this: it shows an attempt to create an account with the name hej. However, this action is prevented by an existing blocklist that restricts the use of already assigned usernames for one year:
Effortless and Intelligent Administration Made Easy
The new UDM blocklists are an invaluable asset for user administration. They equip administrators with a robust tool to effectively manage the reuse of sensitive user properties, including email addresses and usernames. This feature plays a crucial role in minimizing potential mix-ups and enhancing security.
The latest version of the Sparky CLI Installer brings a few changes, such as an added autopartitioning option, making setup of the target system a little faster:
- autopartitioning of the selected disk - removes all data on the chosen disk
- automatically creates and formats 3 partitions: root, swap and EFI, if required
- requires a minimum of 15 GB of disk space
- no root password (sudo in use only)
- installs GRUB in the MBR with a 5 second (default) delay
- autologin enabled by default if any desktop is installed
It is available in Sparky testing (8) only.
To get and test it, launch the latest Sparky ISO image 2024.02 and update the installer to version >= 202404014: sudo apt update
sudo apt install sparky-backup-core
Then launch the CLI installer as usual: sudo sparky-installer
Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I've always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don't think I'm the only one.
At work I've been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I've decided to hack up a project that I've wanted to do ever since 2015: write a debug filesystem. Let's get to it.
Back to the Future
Time to admit something. I really love Plan 9. It's just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good; like, correct. The bit that I've always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server: you mount the ftp server and access files like any other files. It's kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there's a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP, or even purely synthetic.
The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption; 9p is in all sorts of places folks don't usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-9p) to share files between guest and host. If you're noticing a pattern here, you'd be right; for some reason 9p is the go-to protocol for exchanging files between hypervisor and guest. Why? I have no idea, except that maybe it's well designed, simple to implement, and makes it a lot easier to validate the data being shared and to enforce security boundaries. Simplicity has its value.
As a result, there's a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine; unlike fuse, where you'd need client-specific software to run in order to mount the directory. For instance, let's mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name "mountpointname" to /mnt.
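A sketch of what that mount invocation can look like, using the kernel's v9fs mount options (trans, port, version, aname); treat the exact option set as an assumption to adapt for your server:

```shell
# Sketch: mount a 9p export named "mountpointname" from 127.0.0.1:564/tcp
# (run as root; options per the Linux v9fs documentation)
mount -t 9p -o trans=tcp,port=564,version=9p2000.u,aname=mountpointname 127.0.0.1 /mnt
```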
Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.
WHEREIN I STYX WITH IT
Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It's a series of client-to-server requests, which receive a server-to-client response. These are, respectively, "T" messages, which transmit a request to the server, which trigger an "R" message in response (Reply messages). These messages are TLV payloads with a very straightforward structure; so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.
Later on, after the basics worked, I found a more complete spec page that contains more information about the unix-specific variant that I opted to use (9P2000.u rather than 9P2000), due to the level of Linux-specific support for the 9P2000.u variant over the 9P2000 protocol.
MR ROBOTO
The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I'd pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there's an escape hatch, allowing someone to implement any of the layers if required. I called this framework arigato, which can be found over on docs.rs and crates.io.
/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;
    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;
    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;
    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;
    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;
    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;
    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;
    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}
/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;
    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
    /// Write some number of bytes up to `buf.len()` at the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    async fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}
Thanks, decade ago paultag!
Let's do it! Let's use arigato to implement a 9p filesystem we'll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We'll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don't know much about how an apt repo works, here's the 2-second crash course on what we're doing. The first step is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let's take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.
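As a hedged sketch, fetching that index might look like the following; the deb.debian.org/debian-debug path is my assumption about the mirror layout, not taken from the source:

```shell
# Sketch: fetch and decompress the Packages index for unstable-debug/main/amd64
curl -s https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | xz -d | head -n 20
```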
This will return the Debian-style rfc2822-like headers, which are an export of the metadata contained inside each .deb file, which apt (or other tools that can use the apt repo format) uses to fetch information about debs. Let's take a look at the debug headers for the netlabel-tools package in unstable; which is a package named netlabel-tools-dbgsym in unstable-debug.
So here, we can parse the package headers in the Packages.xz file and store, for each Build-Id, the Filename where we can fetch the .deb from. Each .deb contains a number of files, but we're only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It's crude, and very single-purpose, but I'm feeling a bit lazy.
Who needs dpkg?!
For folks who haven't seen it yet, a .deb file is a special type of .ar file that contains (usually) three files inside: debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed-size (60 byte) entry header, followed by the specified number of bytes of data.
[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...
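We can poke at that layout with nothing but a shell; here's a small sketch that builds a throwaway archive with GNU ar (a .deb uses the same framing) and dumps the global magic and the first entry header:

```shell
# Sketch: inspect the fixed-size .ar framing (a .deb uses the same layout)
echo 'hello' > member.txt
ar rc demo.ar member.txt
head -c 8 demo.ar                               # global magic "!<arch>" plus newline
dd if=demo.ar bs=1 skip=8 count=60 2>/dev/null  # first 60-byte entry header
```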
First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let's break apart a .deb file by hand; something that is a bit of a rite of passage (or at least it used to be? I'm getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.
$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
Since we know quite a bit about the structure of a .deb file, and I had to
implement support from scratch anyway, I opted to implement a (very!) basic
debfile parser using HTTP Range requests. HTTP Range requests, if supported by
the server (denoted by an accept-ranges: bytes HTTP header in response to an
HTTP HEAD request to that file) means that we can add a header such as
range: bytes=8-68 to specifically request that the returned GET body be the
byte range provided (in the above case, the bytes starting from byte offset 8
until byte offset 68). This means we can fetch just the ar file entry from
the .deb file until we get to the file inside the .deb we are interested in
(in our case, the data.tar.xz file) ā at which point we can request the body
of that file with a final range request. I wound up writing a struct to
handle a read_at-style API surface in
hrange.rs, which
we can pair with ar.rs above and start to find our data in the .deb remotely
without downloading and unpacking the .deb at all.
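The offset arithmetic behind those range requests is simple enough to sketch: skip the 8-byte global magic, then step over each 60-byte entry header plus its body (padded to 2-byte alignment) until we reach the member we want. This helper is illustrative only, not part of hrange.rs:

```rust
/// Given the sizes of the ar members preceding our target, compute the
/// inclusive byte range of the target member's body inside the .deb,
/// suitable for an HTTP `range: bytes=start-end` header. ar bodies are
/// 2-byte aligned, so odd-sized members carry one padding byte.
fn member_body_range(preceding_sizes: &[u64], target_size: u64) -> (u64, u64) {
    let mut offset = 8; // skip the 8-byte "!<arch>\n" global magic
    for &size in preceding_sizes {
        offset += 60 + size + (size % 2); // entry header + body + alignment pad
    }
    offset += 60; // the target member's own entry header
    (offset, offset + target_size - 1)
}
```

For a hypothetical .deb whose debian-binary is 4 bytes and whose control.tar.xz is 1234 bytes, the data.tar.xz body would start at byte 8 + (60+4) + (60+1234) + 60 = 1426.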
After we have the body of the data.tar.xz coming back through the HTTP
response, we get to pipe it through an xz decompressor (this kinda sucked in
Rust, since a tokio AsyncRead is not the same as an http Body response is
not the same as std::io::Read, is not the same as an async (or sync)
Iterator is not the same as what the xz2 crate expects; leading me to read
blocks of data to a buffer and stuff them through the decoder by looping over
the buffer for each lzma2 packet in a loop), and tarfile parser (similarly
troublesome). From there we get to iterate over all entries in the tarfile,
stopping when we reach our file of interest. Since we canāt seek, but gdb
needs to, weāll pull it out of the stream into a Cursor<Vec<u8>> in-memory
and pass a handle to it back to the user.
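The buffering step at the end is the easy part; a sketch of slurping a non-seekable stream into a seekable in-memory handle:

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

/// gdb needs to seek, but a decompressed tar stream can't: read the member
/// we care about fully into memory and hand back a seekable Cursor over it.
fn into_seekable(mut stream: impl Read) -> std::io::Result<Cursor<Vec<u8>>> {
    let mut buf = Vec::new();
    stream.read_to_end(&mut buf)?;
    Ok(Cursor::new(buf))
}
```

The obvious cost, as noted above, is holding the whole debug file in RAM, which is exactly the trade-off the rest of this section wrestles with.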
I was originally hoping to avoid transferring the whole tar file over the
network (and therefore also reading the whole debug file into ram, which
objectively sucks), but quickly hit issues with figuring out a way around
seeking around an xz file. Whatās interesting is xz has a great primitive
to solve this specific problem (specifically, use a block size that allows you
to seek to the block as close to your desired seek position just before it,
only discarding at most block size - 1 bytes), but data.tar.xz files
generated by dpkg appear to have a single mega-huge block for the whole file.
I donāt know why I would have expected any different, in retrospect. That means
that this now devolves into the base case of āHow do I seek around an lzma2
compressed data streamā; which is a lot more complex of a question.
Thankfully, notoriously brilliant tianon was
nice enough to introduce me to Jon Johnson
who did something super similar ā adapted a technique to seek inside a
compressed gzip file, which lets his service
oci.dag.dev
seek through Docker container images super fast based on some prior work
such as soci-snapshotter, gztool, and
zran.c.
He also pulled this party trick off for apk-based distros
over at apk.dag.dev, which seems apropos.
Jon was nice enough to publish a lot of his work on this specifically in a
central place under the name ātargzā
on his GitHub, which has been a ton of fun to read through.
The gist is that, by dumping the decompressorās state (window of previous
bytes, in-memory data derived from the last N-1 bytes) at specific
ācheckpointsā along with the compressed data stream offset in bytes and
decompressed offset in bytes, one can seek to that checkpoint in the compressed
stream and pick up where you left off ā creating a similar āblockā mechanism
against the wishes of gzip. It means youād need to do an O(n) run over the
file, but every request after that will be sped up according to the number
of checkpoints youāve taken.
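A sketch of the checkpoint bookkeeping that trick implies (the types and names here are mine, not Jon's):

```rust
/// A decompressor checkpoint: where we were in the compressed stream, how many
/// decompressed bytes we had emitted, and the dictionary window needed to
/// resume decompression from that point. (Names are illustrative.)
struct Checkpoint {
    compressed_offset: u64,
    decompressed_offset: u64,
    window: Vec<u8>,
}

/// Find the latest checkpoint at or before the desired decompressed offset.
/// Resuming there means discarding only `target - decompressed_offset` bytes
/// instead of re-decompressing the stream from the start.
fn nearest_checkpoint(checkpoints: &[Checkpoint], target: u64) -> Option<&Checkpoint> {
    checkpoints
        .iter()
        .filter(|c| c.decompressed_offset <= target)
        .max_by_key(|c| c.decompressed_offset)
}
```

The O(n) pass over the file is what populates the checkpoint list; every later seek pays only for the distance from the nearest checkpoint.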
Given the complexity of xz and lzma2, I donāt think this is possible
for me at the moment ā especially given most of the files Iāll be requesting
will not be loaded from again ā especially when I can ājustā cache the debug
header by Build-Id. I want to implement this (because Iām generally curious
and Jon has a way of getting someone excited about compression schemes, which
is not a sentence I thought Iād ever say out loud), but for now Iām going to
move on without this optimization. Such a shame, since it kills a lot of the
work that went into seeking around the .deb file in the first place, given
the debian-binary and control.tar.gz members are so small.
The Good
First, the good news right? It works! Thatās pretty cool. Iām positive
my younger self would be amused and happy to see this working; as is
current day paultag. Letās take debugfs out for a spin! First, we need
to mount the filesystem. It even works on an entirely unmodified, stock
Debian box on my LAN, which is huge. Letās take it for a spin:
And, letās prove to ourselves that this actually mounted before we go
trying to use it:
$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)
Slick. Weāve got an open connection to the server, where our host
will keep a connection alive as root, attached to the filesystem provided
in aname. Letās take a look at it.
$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8f 9c a9 b6 c3 d0 dd ea f7
01 0e 1b 28 35 42 4f 5c 69 76 83 90 9d aa b7 c4 d1 de eb f8
02 0f 1c 29 36 43 50 5d 6a 77 84 91 9e ab b8 c5 d2 df ec f9
03 10 1d 2a 37 44 51 5e 6b 78 85 92 9f ac b9 c6 d3 e0 ed fa
04 11 1e 2b 38 45 52 5f 6c 79 86 93 a0 ad ba c7 d4 e1 ee fb
05 12 1f 2c 39 46 53 60 6d 7a 87 94 a1 ae bb c8 d5 e2 ef fc
06 13 20 2d 3a 47 54 61 6e 7b 88 95 a2 af bc c9 d6 e3 f0 fd
07 14 21 2e 3b 48 55 62 6f 7c 89 96 a3 b0 bd ca d7 e4 f1 fe
08 15 22 2f 3c 49 56 63 70 7d 8a 97 a4 b1 be cb d8 e5 f2 ff
09 16 23 30 3d 4a 57 64 71 7e 8b 98 a5 b2 bf cc d9 e6 f3
0a 17 24 31 3e 4b 58 65 72 7f 8c 99 a6 b3 c0 cd da e7 f4
0b 18 25 32 3f 4c 59 66 73 80 8d 9a a7 b4 c1 ce db e8 f5
0c 19 26 33 40 4d 5a 67 74 81 8e 9b a8 b5 c2 cf dc e9 f6
Outstanding. Letās try using gdb to debug a binary that was provided by
the Debian archive, and see if itāll load the ELF by build-id from the
right .deb in the unstable-debug suite:
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Yes! Yes it will!
$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped
The Bad
Linuxās support for 9p is mainline, which is great, but itās not robust.
Network issues or server restarts will wedge the mountpoint (Linux canāt
reconnect when the tcp connection breaks), and things that work fine on local
filesystems get translated in a way that causes a lot of network chatter ā for
instance, just due to the way the syscalls are translated, doing an ls will
result in a stat call for each file in the directory, even though Linux had
just received a stat entry for every file while it was resolving the
directory names.
On top of that, Linux will serialize all I/O with the server, so thereās no
concurrent requests for file information, writes, or reads pending at the same
time to the server; and read and write throughput will degrade as latency
increases due to increasing round-trip time, even though there are offsets
included in the read and write calls. It works well enough, but is
frustrating to run up against, since thereās not a lot you can do server-side
to help with this beyond implementing the 9P2000.L variant (which, maybe is
worth it).
The Ugly
Unfortunately, we donāt know the file size(s) until weāve actually opened the
underlying tar file and found the correct member, so for most files, we donāt
know the real size to report when getting a stat. We canāt parse the tarfiles
for every stat call, since thatād make ls even slower (bummer). The only
hiccup is that when I report a filesize of zero, gdb throws a bit of a
fit; letās try with a size of 0 to start:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]
This obviously wonāt work since gdb will throw away all our hard work because
of statās output, and neither will loading the real size of the underlying
file. That only leaves us with hardcoding a file size and hoping nothing else
breaks significantly as a result. Letās try it again:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Much better. I mean, terrible but better. Better for now, anyway.
Kilroy was here
Do I think this is a particularly good idea? I mean; kinda. Iām probably going
to make some fun 9parigato-based filesystems for use around my LAN, but I
donāt think Iāll be moving to use debugfs until I can figure out how to
make the connection more resilient to changing networks and server restarts,
and fix the i/o performance. I think it was a useful exercise and is a pretty
great hack, but I donāt think thisāll be shipping anywhere anytime soon.
Along with me publishing this post, Iāve pushed up all my repos; so you
should be able to play along at home! Thereās a lot more work to be done
on arigato; but it does handshake and successfully export a working
9P2000.u filesystem. Check it out on my GitHub at
arigato,
debugfs
and also on crates.io
and docs.rs.
At least I can say I was here and I got it working after all these years.
It has been a very busy couple of weeks as we worked against some major transitions and a security fix that required a rebuild of the $world. I am happy to report that against all odds we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze I have already begun pushing our fixes for known issues today. A big one being our new branding! Very exciting times in the Kubuntu world.
In the snap world I will be using my free time to start knocking out KDE applications (not covered by the project). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!
Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !
Test Kubuntu 24.04 Beta and Experience Innovation with KubuQA!
Weāre thrilled to announce the availability of the Kubuntu 24.04 Beta! This release is packed with new features and enhancements, and weāre inviting you, our valued community, to join us in fine-tuning this exciting new version. Whether youāre a seasoned tester or new to software testing, your feedback is crucial to making Kubuntu 24.04 the best it can be.
To make your testing journey as easy as pie, weāre introducing a fantastic new tool: KubuQA. Designed with both new and experienced users in mind, KubuQA simplifies the testing process by automating the download, VirtualBox setup, and configuration steps. Now, everyone can participate in testing Kubuntu with ease!
This beta release also debuts our fresh new branding, artwork, and wallpapersācreated and chosen by our own community through recent branding and wallpaper contests. These additions reflect the spirit and creativity of the Kubuntu family, and we canāt wait for you to see them.
Get Testing
By participating in the beta testing of Kubuntu 24.04, youāre not just helping improve the software; youāre becoming an integral part of a global community that values open collaboration and innovation. Your contributions help us identify and fix issues, ensuring Kubuntu remains a high-quality, stable, and user-friendly Linux distribution.
The benefits of joining our testing team extend beyond improving the software. Youāll gain valuable experience, meet like-minded individuals, and perhaps discover a new passion in the world of open-source software.
So why wait? Download the Kubuntu 24.04 Beta today, try out KubuQA, or follow our wiki to upgrade and help us make Kubuntu better than ever! Remember, your feedback is the key to our success.
Ready to make an impact?
Join us in this exciting phase of development and see your ideas come to life in Kubuntu. Plus, enjoy the satisfaction of knowing that youāve contributed to a project used by millions around the world. Become a tester today and be part of something big!
Interested in more than testing?
By the way, have you thought about becoming a member of the Kubuntu Community? Itās a fantastic way to contribute more actively and help shape the future of Kubuntu. Learn more about joining the community.
The Ubuntu team is pleased to announce the Beta release of the Ubuntu 24.04 LTS Desktop, Server, and Cloud products.
Ubuntu 24.04 LTS, codenamed āNoble Numbatā, continues Ubuntuās proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been very hard at work through this cycle, introducing new features and fixing bugs.
This Beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity and Xubuntu flavors.
The Beta images are known to be reasonably free of showstopper image build or installer bugs, while representing a very recent snapshot of Ubuntu 24.04 LTS that should be representative of the features intended to ship with the final release expected on April 25, 2024.
Ubuntu, Ubuntu Server, Cloud Images:
Noble Beta includes updated versions of most of our core set of packages, including a current 6.8 kernel, and much more.
To upgrade to Ubuntu 24.04 LTS Beta from Ubuntu 23.10 or Ubuntu 22.04 LTS, follow these instructions:
As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20240411 or higher) should be considered a Beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.
The full release notes for Ubuntu 24.04 LTS Beta can be found at:
Lubuntu is a flavor of Ubuntu which uses the Lightweight Qt Desktop Environment (LXQt). The projectās goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu base.
Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.
Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 24.04 LTS, codenamed āNoble Numbatā.
While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 24.04 is released on April 25, 2024.
Special Notes
The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.
Full updated information, including Upgrade Instructions, are available in the Release Notes.
Please note that upgrading before the release of 24.04.1, due August 2024, is unsupported.
New Features This Release
PipeWire continues to improve with every release and is so robust it can be used for professional and prosumer use. Version 1.0.4 is included.
Ubuntu Studio Installerās included Ubuntu Studio Audio Configuration utility, for fine-tuning the PipeWire setup or changing the configuration altogether, now includes the ability to create or remove a dummy audio device. Version 1.9 is included.
Major Package Upgrades
Ardour version 8.4.0
Qtractor version 0.9.39
OBS Studio version 30.0.2
Audacity version 3.4.2
digiKam version 8.2.0
Kdenlive version 23.08.5
Krita version 5.2.2
There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.
Known Issues
Ubuntu Studioās classic PulseAudio-JACK configuration cannot be used on Ubuntu Desktop (GNOME) due to a known issue with the ubuntu-desktop metapackage. (LP: #2033440)
We now discourage the use of the aforementioned classic PulseAudio-JACK configuration as PulseAudio is being deprecated over time in favor of PipeWire. Advanced users can disable PipeWireās JACK configuration and use JACK2 via QJackCTL instead.
Due to the Ubuntu repositories being in-flux following the time_t transition and xz-utils security issue resolution, some items in the repository are uninstallable or causing other packaging conflicts. The Ubuntu Release Team is working around the clock to help resolve these issues, so patience is required.
Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. See this post as to the reasons why and go here to see how you can contribute financially (options are also in the sidebar).
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozillaās distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird has become a snap this cycle in order for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Q: If I install this Beta release, will I have to reinstall when the final release comes out? A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before final release, then you might end up with a double-installation of Audacity. Removal instructions of one or the other will be made available in a future post.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since weāre completely volunteer-run, we donāt have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio ā which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I donāt want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you donāt want or need!
We are happy to announce the Beta release for Lubuntu Noble (what will become 24.04 LTS)! What makes this cycle unique? Lubuntu is a lightweight flavor of Ubuntu, based on LXQt and built for you. As an official flavor, we benefit from Canonicalās infrastructure and assistance, in addition to the support and enthusiasm from the [ā¦]
In March, we made many minor improvements to existing features. Still, there are some significant features: many new services are available for configuration sync, drive encryption via LUKS with TPM support, and a new command to trigger commit archive manually.
After experts noticed a rapid increase in cyberattacks on local authorities and government agencies in 2023, the horror stories donāt stop in 2024. The pressure to act is enormous, as the EUās NIS2 Directive will come into force in October and makes risk and vulnerability management mandatory.
āThe threat level is higher than ever,ā said Claudia Plattner, President of the German Federal Office for Information Security (BSI), at Bitkom in early March. The question is not whether an attack will be successful, but only when. The BSIās annual reports, for example the most recent report from 2023, also speak volumes in this regard. However, according to Plattner, it is striking how often local authorities, hospitals and other public institutions are at the centre of attacks. There is ānot a problem with measures but with implementation in companies and authoritiesā, said Plattner. One thing is clear: vulnerability management such as Greenboneās can provide protection and help to avoid the worst.
US authorities infiltrated by Chinese hackers
In view of the numerous serious security incidents, vulnerability management is becoming more important every year. Almost 70 new security vulnerabilities have been added every day in recent months. Some of them opened the door to attackers deep inside US authorities, as reported in the Greenbone Enterprise Blog:
According to the media, US authorities have been infiltrated by Chinese hacker groups such as the probably state-sponsored āVolt Typhoonā for years via serious security gaps. The fact that Volt Typhoon and similar groups are a major problem was even confirmed by Microsoft itself in a blog back in May 2023. But thatās not all: German media reported that Volt Typhoon is taking advantage of the abundant vulnerabilities in VPN gateways and routers from FortiNet, Ivanti, Netgear, Citrix and Cisco. These are currently considered to be particularly vulnerable.
The fact that the quasi-monopolist in Office, groupware, operating systems and various cloud services also had to admit in 2023 that the master key for large parts of its Microsoft cloud had been stolen destroyed trust in the Redmond software manufacturer in many places. Anyone who has this key doesnāt need a backdoor into Microsoft systems any longer. Chinese hackers are also suspected in this case.
Software manufacturers and suppliers
The supply chain for software manufacturers has been under particular scrutiny by manufacturers and users not only since log4j or the European Cyber Resilience Act. The recent example of the attack on the XZ compression algorithm in Linux also shows the vulnerability of manufacturers. In the case of the ā#xzbackdoorā, a combination of pure coincidence and the activities of Andres Freund, a German developer of open source software for Microsoft with a strong focus on performance, prevented the worst from happening.
An abyss opened up here: it was only thanks to open source development and a joint effort by the community that it came to light that actors had, for years, been operating under changing fake names across various accounts, with a high level of criminal energy and with methods that would otherwise be more typical of secret services. With little or no user history, they used sophisticated social scams, exploited the notorious overload of maintainers and gained the trust of freelance developers. This enabled them to introduce malicious code into software almost unnoticed. In the end, it was only thanks to Freundās interest in performance that the attack was discovered and the attempt to insert a backdoor into a tool failed.
US officials also see authorities and institutions as being particularly threatened in this case, even if the attack appears to be rather untargeted and designed for mass use. The issue is complex and far from over, let alone fully understood. One thing is certain: the usernames of the accounts used by the attackers were deliberately falsified. We will continue to report on this in the Greenbone blog.
European legislators react
Vulnerability management cannot prevent such attacks, but it provides indispensable services by proactively warning and alerting administrators as soon as such an attack becomes known ā usually before an attacker has been able to compromise systems. In view of all the difficulties and dramatic incidents, it is not surprising that legislators have also recognised the magnitude of the problem and are declaring vulnerability management to be standard and best practice in more and more scenarios.
Laws and regulations such as the EUās new NIS2 directive make the use of vulnerability management mandatory, including in the software supply chain. Even if NIS2 only actually applies to around 180,000 organisations and companies in the critical infrastructure (KRITIS) or āparticularly importantā or āsignificantā companies in Europe, the regulations are fundamentally sensible ā and will be mandatory from October. The EU Commission emphasises that āoperators of essential servicesā must ātake appropriate security measures and inform the competent national authorities of serious incidentsā. Important providers of digital services such as search engines, cloud computing services and online marketplaces must fulfil the security and notification requirements of the directive.
Mandatory from October: A āminimum set of cyber security measuresā
The āDirective on measures for a high common level of cybersecurity across the Union (NIS2)ā forces companies in the European Union to āimplement a benchmark of minimum cybersecurity measuresā, including risk management, training, policies and procedures, also and especially in cooperation with software suppliers. In Germany, the federal states are to define the exact implementation of the NIS2 regulations.
Do you have any questions about NIS2, the Cyber Resilience Act (CRA), vulnerability management in general or the security incidents described? Write to us! We look forward to working with you to find the right compliance solution and give your IT infrastructure the protection it needs in the face of todayās serious attacks.
To make our ecological progress even more sustainable, we keep up to date with regular internal training courses on energy efficiency. In this way, we are helping to make the world even āgreenerā outside of Greenbone.
Weāre thrilled to announce the launch of something special for our beloved Volumio community: the Volumio Rivo Black Edition. This release is more than just a product variant; itās a testament to our commitment to listening to our customers and pushing the boundaries of craftsmanship.
Crafted with meticulous attention to detail, the Volumio Rivo Black Edition is a result of our dedication to creating a product that not only meets but exceeds the expectations of our users. Many of you have expressed a desire for a Volumio Rivo streamer with a sleek black front panel, and weāve taken your feedback to heart.
But this edition is more than just a color change. Itās a labor of love, carefully designed and handcrafted in the heart of Florence, Italy. We wanted to leverage the rich tradition of Italian craftsmanship to bring you a product that not only sounds exceptional but also looks stunning in any environment.
The front panel of the Volumio Rivo Black Edition is made of upcycled black leather, boasting a unique and captivating texture that sets it apart from any other streamer on the market. This choice not only adds a touch of luxury but also aligns with our commitment to sustainability.
In celebration of Volumioās recent iF Design Award win, we wanted the Rivo Black Edition to represent a fusion of design and environmental sustainability.
Collaboration with Apellelab for Eco-Friendly Design
The Volumio Rivo Black Edition has been achieved in partnership with ApelleLab: an artisanal leather workshop based in Florence, incorporating upcycled leather into the design of the Rivo Black Edition.
This collaboration elevates the aesthetic appeal of our bestselling streamer, and it also reduces environmental impact by reusing premium materials. ApelleLab is a hub of leather craftsmanship operating within a circular economy framework committed to upcycling and recycling. It serves as a creative space where artisanal connections are forged, breathing new life into traditional leatherworking techniques. We could not have found a better partner to bring this vision to life!
The Rivo Black Edition inherits its predecessorās unparalleled performance and versatility. Featuring pure digital transport and an extensive array of digital outputs for seamless integration with a wide range of audio setups. With support for high-resolution audio playback and intuitive user interface options, it delivers an unparalleled listening experience for audiophiles and music enthusiasts.
Volumio Rivo Black Edition is a limited edition offered at the same price as the Classic version. With its sleek design, uncompromising performance, and environmentally conscious ethos, it stands as a symbol of Volumioās dedication to pushing boundaries and shaping the future of audio technology.
This is a limited edition with a very limited number of pieces available, so get yours while they last. Once they are gone, they are gone!
This blog is co-authored by Gordan MarkuÅ”, Canonical and Kumar Sankaran, Ventana Micro Systems
Unlocking the future of semiconductor innovation
RISC-V, an open standard instruction set architecture (ISA), is rapidly shaping the future of high-performance computing, edge computing, and artificial intelligence. The RISC-V customizable and scalable ISA enables a new era of processor innovation and efficiency. Furthermore, RISC-V democratizes innovation by allowing new companies to develop their own products on its open ISA, breaking down barriers to entry and fostering a diverse ecosystem of technological advancement.
By fostering a more open and innovative approach to product design, RISC-V technology vendors are not just participants in the future of technology; they are a driving force behind the evolution of computing across multiple domains. The architectureās impact extends from the cloud to the edge:
In modern data centers, enterprises seek a range of infrastructure solutions to support the breadth of modern workloads and requirements. RISC-V provides a versatile solution, offering a comprehensive suite of IP cores under a unified ISA that scales efficiently across various applications. This scalability and flexibility makes RISC-V an ideal foundation for addressing the diverse demands of todayās data center environments.
In HPC, RISC-Vās adaptability allows for the creation of specialized processors that can handle complex computations at unprecedented speeds, while also offering a quick time to market for product builders.
For edge computing, RISC-Vās efficiency and the ability to tailor processors for specific tasks mean devices can process more data locally, reducing latency and the need for constant cloud connectivity.
In the realm of AI, the flexibility of RISC-V paves the way for the development of highly optimized AI chips. These chips can accelerate machine learning tasks by executing AI-centric computations more efficiently, thus speeding up the training and inference of AI workloads.
One of the unique products that can be designed with the RISC-V ISA is the chiplet. Chiplets are smaller, modular blocks of silicon that can be integrated to form a larger, more complex chip. Instead of designing a single monolithic chip, a process that is increasingly challenging and expensive at cutting-edge process nodes, manufacturers can create chiplets that specialize in different functions and combine them as needed. Together, RISC-V and chiplet technology are empowering a new era of chip design, enabling more companies to participate in innovation and tailor their products to specific market needs with unprecedented flexibility and cost efficiency.
Ventana and Canonical partnership and technology leadership
Canonical makes open source secure, reliable and easy to use, providing support for Ubuntu and a growing portfolio of enterprise-grade open source technologies. One of Canonicalās key missions is to improve the open source experience across CPU architectures. At the end of 2023, Canonical announced joining the RISC-V Software Ecosystem (RISE) community to support the open source community and ecosystem partners in bringing the best of Ubuntu and open source to RISC-V platforms.
As a part of our collaboration with the ecosystem, Canonical has been working closely with Ventana Micro Systems (Ventana). Ventana delivers a family of high-performance, data-center-class RISC-V CPUs in the form of multi-core chiplets or core IP for high-performance applications in the cloud, enterprise data center, hyperscale, 5G, edge compute, AI/ML and automotive markets.
The relationship between Canonical and Ventana started with a collaboration on improving the upstream software availability of RISC-V in projects such as u-boot, EDKII and the Linux kernel.
Over time, the teams have started enabling Ubuntu on Ventanaās Veyron product family. Through the continuous efforts of this partnership, Ubuntu is available on the Ventana Veyron product family and as a part of Ventanaās Veyron Software Development Kit (SDK).
Furthermore, the collaboration extends to building full solutions for the data center, HPC, AI/ML and automotive, integrating Domain Specific Accelerators (DSAs) and SDKs, promising to unlock new levels of performance and efficiency for developers and enterprises alike. Some of the targeted software stacks can be seen in the figure below.
Today, Ventana and Canonical collaborate on a wide range of topics. Through their joint efforts across open source communities and as part of RISE, Ventana and Canonical are actively contributing to the growth of the RISC-V ecosystem. We are proud of the innovation and technology leadership our partnership brings to the ecosystem.
Enabling the ecosystem with enterprise-grade and easy to consume open source on RISC-V platforms
Ubuntu is the reference OS for innovators and developers, and also the vehicle that enables enterprises to take products to market faster. Ubuntu enables teams to focus on their core applications without worrying about the stability of the underlying frameworks. Ventana and the RISC-V ecosystem recognise the value of Ubuntu and are using it as a base platform for their innovation.
Furthermore, the availability of Ubuntu on RISC-V platforms not only allows developers to prototype their solutions easily but also provides a path to market with enterprise-grade, secure and supported open source solutions. Whether itās for networking offloads in the data center, training AI models in the cloud, or running AI inference at the edge, Ubuntu is an established platform of choice.
Learn more about Canonicalās engagement in the RISC-V ecosystem
Artificial intelligence is the most exciting technology revolution of recent years. Nvidia, Intel, AMD and others continue to produce faster and faster GPUs, enabling larger models and higher throughput in decision-making processes.
Outside of the immediate AI hype, one area remains somewhat overlooked: AI needs data (find out more here). First and foremost, storage systems need to provide high-performance access to ever-growing datasets, but more importantly they need to ensure that this data is securely stored, not just for the present, but also for the future.
There are multiple different types of data used in typical AI systems:
Raw and pre-processed data
Training data
Models
Results
All of this data takes time and computational effort to collect, process and output, and as such needs to be protected. In some cases, like telemetry data from a self-driving car, this data might never be reproducible. Even after training data is used to create a model, its value is not diminished; improvements to models require consistent training data sets so that any adjustments can be fairly benchmarked.
Raw, pre-processed, training and results data sets can contain personally identifiable information, and as such steps need to be taken to ensure that they are stored in a secure fashion. Beyond the moral responsibility of safely storing data, there can be significant penalties associated with data breaches.
Challenges with securely storing AI data
We covered many of the risks associated with securely storing data in this blog post. The same risks apply in an AI setting as well. After all, machine learning is just another application that consumes storage resources, albeit sometimes at a much larger scale.
AI use cases are relatively new; however, the majority of modern storage systems, including open source solutions like Ceph, have mature features that can be used to mitigate these risks.
Physical theft thwarted by data at rest encryption
Any disk used in a storage system could theoretically be lost due to theft, or when returned for warranty replacement after a failure event. With at-rest encryption, every byte of data stored on a disk, whether spinning media or flash, is useless without the cryptographic keys needed to decrypt it. This protects sensitive data and proprietary models created after hours or even days of processing.
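In Ceph, for example, at-rest encryption can be enabled when an OSD is prepared, using dm-crypt/LUKS under the hood. A minimal sketch (the device path is a placeholder):

```shell
# Prepare a BlueStore OSD with dm-crypt (LUKS) at-rest encryption.
# /dev/sdb is a placeholder device path. The LUKS keys are held by the
# cluster, so a stolen or warranty-returned disk on its own is unreadable.
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sdb
```

In cephadm-managed clusters the same intent can be expressed declaratively by setting `encrypted: true` in an OSD service specification.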
Strict access control to keep out uninvited guests
A key tenet of any system design is ensuring that users (real people, or headless accounts) have access only to the resources they need, and that at any time that access can easily be removed. Storage systems like Ceph use their own access control mechanisms and also integrate with centralised authentication systems like LDAP to allow easy access control.
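In Ceph, for instance, this is expressed with cephx capabilities. A sketch with made-up client and pool names:

```shell
# Create credentials that may only read/write a single pool.
ceph auth get-or-create client.training-pipeline \
    mon 'allow r' \
    osd 'allow rw pool=training-data'

# Access can be revoked at any time with a single command.
ceph auth del client.training-pipeline
```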
Eavesdropping defeated by in flight encryption
There is nothing worse than someone listening in on a conversation that they should not be privy to. The same thing can happen in computer networks too. By encrypting all network flows, both client-to-storage traffic and internal storage system networks, no data can be leaked to third parties eavesdropping on the network.
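In Ceph, on-wire encryption is provided by the messenger v2 protocol, and a cluster can be configured to require it rather than merely prefer it. A sketch of the relevant settings:

```shell
# Require encrypted ("secure") msgr2 connections everywhere:
# between cluster daemons, for service traffic, and for clients.
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure
```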
Recover from ransomware with snapshots and versioning
It seems like every week another large enterprise has to disclose a ransomware event, where an unauthorised third party has taken control of their systems and encrypted the data. Not only does this lead to downtime but also the possibility of having to pay a ransom for the decryption key to regain control of their systems and access to their data. AI projects often represent a significant investment of both time and resources, so having an initiative undermined by a ransomware attack could be highly damaging.
Using point-in-time snapshots or versioning of objects can allow an organisation to revert to a previous non-encrypted state, and potentially resume operations sooner.
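In Ceph, both mechanisms are available: RBD images support point-in-time snapshots, and the RADOS Gateway supports S3 object versioning. A sketch with placeholder pool, image, endpoint and bucket names:

```shell
# Snapshot an RBD image holding a training data set, and roll back
# to that snapshot after an incident.
rbd snap create ai-pool/training-data@pre-incident
rbd snap rollback ai-pool/training-data@pre-incident

# Enable object versioning on an RGW bucket through the standard S3 API,
# so overwritten or maliciously encrypted objects retain prior versions.
aws --endpoint-url https://rgw.example.com s3api put-bucket-versioning \
    --bucket models --versioning-configuration Status=Enabled
```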
Learn more
Ceph is one storage solution that can be used to store various AI datasets, and is not only scalable to meet performance and capacity requirements, but also has a number of features to ensure data is stored securely.
Find out more about how Ceph solves AI storage challenges:
"Hey, why are you still using QtCreator? We already have deepin-IDE now. Why don't you give it a try?" "Although deepin-IDE has all the basic functions, it's not very user-friendly. Many people in the forums have said the same." "Oh? We should look at this from a development perspective. deepin-IDE is now much more lightweight and even supports bootstrapping: you can use an IDE developed by yourself to develop your own projects. Don't believe me? Let's dig into the issues on Deepin's forum together and see for ourselves."
The collaboration will bring Ubuntu and Ubuntu Core to devices powered by Qualcomm® processors
Today Canonical, the publisher of Ubuntu, announced a collaboration with Qualcomm Technologies, Inc., the latest major System-on-Chip manufacturer and designer to join Canonicalās silicon partner program.
Through the partner program, Qualcomm Technologies will have access to a secure, open source operating system and an optimised flavour of Ubuntu for Qualcomm Technologiesā software. In addition, optimised Ubuntu and Ubuntu Core images will be available for Qualcomm SoCs, enabling enterprises to meet their regulatory, compliance and security demands for AI at the edge and the broader IoT market with a secure operating system that is supported for 10 years.
Security-first and AI ready
The massive growth in AI and edge computing is exciting for device manufacturers. However, it also brings considerable challenges due to cybersecurity regulations which place increased security demands on embedded devices. On top of this, devices have to be easy for developers to adopt and use, and need to remain performant.
To help meet these challenges, Qualcomm Technologies chose to partner with Canonical to create an optimised Ubuntu for Qualcomm IoT chipsets, giving developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation.
āThe combination of Qualcomm Technologiesā processors with the popularity of Ubuntu among AI and IoT developers is a game changer for the industry,ā commented Dev Singh, Vice President, Business Development and Head of Building, Enterprise & Industrial Automation, Qualcomm Technologies, Inc. āThe collaboration was a natural fit, with Qualcomm Technologiesā Product Longevity program complementing the 10-year enterprise security and support commitments made by Canonical.ā
Ideal to speed up time to market
Canonical and Ubuntu offer Qualcomm Technologies the tools and peace of mind to meet new IoT, AI and edge computing market challenges head on.
By placing Ubuntu and Ubuntu Core at the centre of its devices and products, Qualcomm Technologies is creating a generation of devices that will be easy for developers to use and adopt.
The collaboration between Qualcomm Technologies and Canonical will provide options to the industry to accelerate time to market and reduce development costs. Developers and enterprises can benefit from the Ubuntu Certified Hardware program, which features a growing list of certified ODM boards and devices based on Qualcomm SoCs. These certified devices deliver an optimised Ubuntu experience out-of-the-box, enabling developers to focus on developing applications and bringing products to market.
āCanonicalās partner programs, in conjunction with Canonicalās expertise in helping customers navigate their AI and IoT journey, help set the industry bar for performance with robustness, security and compliance. The work to integrate and optimise Qualcomm Technologiesā software with Ubuntu will enable channel partners and manufacturers to bring Ubuntu and Ubuntu Core platforms to a wide range of devicesā, said Olivier Philippe, VP for Devices Engineering at Canonical.
Join Canonical and Qualcomm at Embedded World
The collaboration between Canonical and Qualcomm Technologies kicks off at the Embedded World conference, held at the exhibition centre in Nuremberg, Germany, from 9 to 11 April 2024.
Visit the Canonical booth at 4-354
Visit the Qualcomm booth at 5-161
To find out more about Canonicalās partnership and optimised services for IoT, edge and AI products, stop by Canonicalās booth, or visit https://ubuntu.com/internet-of-things
About Canonical
Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. Learn more at https://canonical.com/
Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patented technologies are licensed by Qualcomm Incorporated.
Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.
XSA-456 (At the time of publication, this page was missing from the Xen Project website, so we are also including a link to the email announcement for XSA-456.)
Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.
---===[ Qubes Security Bulletin 102 ]===---
2024-04-09
Multiple speculative-execution vulnerabilities:
Spectre-BHB, BTC/SRSO (XSA-455, XSA-456)
User action
------------
Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.
Summary
--------
The Xen Project published the following security advisories on
2024-04-09:
XSA-455 [3] "x86: Incorrect logic for BTC/SRSO mitigations":
| Because of a logical error in XSA-407 (Branch Type Confusion), the
| mitigation is not applied properly when it is intended to be used.
| XSA-434 (Speculative Return Stack Overflow) uses the same
| infrastructure, so is equally impacted.
|
| For more details, see:
| https://xenbits.xen.org/xsa/advisory-422.html
| https://xenbits.xen.org/xsa/advisory-434.html
XSA-456 [4] "x86: Native Branch History Injection":
| In August 2022, researchers at VU Amsterdam disclosed Spectre-BHB.
|
| Spectre-BHB was discussed in XSA-398. At the time, the susceptibility
| of Xen to Spectre-BHB was uncertain so no specific action was taken in
| XSA-398. However, various changes were made thereafter in upstream
| Xen as a consequence; more on these later.
|
| VU Amsterdam have subsequently adjusted the attack to be pulled off
| entirely from userspace, without the aid of a managed runtime in the
| victim context.
|
| For more details, see:
| https://vusec.net/projects/native-bhi
| https://vusec.net/projects/bhi-spectre-bhb
| https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/branch-history-injection.html
| https://xenbits.xen.org/xsa/advisory-398.html
Impact
-------
On affected systems, an attacker who manages to compromise a qube may be
able to use it to infer the contents of arbitrary system memory,
including memory assigned to other qubes. For more information, see:
- QSB-077 [5] for XSA-389
- QSB-083 [6] for XSA-407
- QSB-093 [7] for XSA-434
Affected systems
-----------------
For XSA-455, the affected systems are the same as in QSB-083 [6] and
QSB-093 [7].
For XSA-456, only Intel CPUs with the eIBRS feature (available since
2019) are affected. You can check for the presence of the eIBRS feature
by looking for "eibrs" in the "Dynamic Sets" section of the `xen-cpuid
-v` command output. For example, you can execute the following command
in dom0:
xen-cpuid -v | sed -n '/^Dynamic/,$ { /eibrs/p }'
Empty output means that XSA-456 does not affect the CPU, while non-empty
output means that XSA-456 does affect the CPU.
Patching
---------
The following packages contain security updates that address the
vulnerabilities described in this bulletin:
For Qubes 4.1, in dom0:
- Xen packages, version 4.14.6-8
For Qubes 4.2, in dom0:
- Xen packages, version 4.17.3-5
These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]
Dom0 must be restarted afterward in order for the updates to take
effect.
If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.
Credits
--------
See the original Xen Security Advisory.
References
-----------
[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-455.html
[4] https://xenbits.xen.org/xsa/advisory-456.html
[5] https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-077-2022.txt
[6] https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-083-2022.txt
[7] https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-093-2023.txt
--
The Qubes Security Team
https://www.qubes-os.org/security/
The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.
What is a Qubes security bulletin (QSB)?
A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).
Why should I care about QSBs?
QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.
What are the PGP signatures that accompany QSBs?
A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.
Why should I care whether a QSB is authentic?
A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.
How do I verify the PGP signatures on a QSB?
The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)
Obtain the Qubes Master Signing Key (QMSK), e.g.:
$ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
gpg: directory '/home/user/.gnupg' created
gpg: keybox '/home/user/.gnupg/pubring.kbx' created
gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
gpg: Total number processed: 1
gpg: imported: 1
View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)
$ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
gpg> fpr
pub rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
Primary key fingerprint: 427F 11FD 0FAA 4B08 0123 F01C DDFA 1A3E 3687 9494
Important: At this point, you still donāt know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.
Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you donāt have to repeat this step in the future.
Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (āultimateā), then quit GnuPG with q.
gpg> trust
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: unknown validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
pub rsa4096/DDFA1A3E36879494
created: 2010-04-01 expires: never usage: SC
trust: ultimate validity: unknown
[ unknown] (1). Qubes Master Signing Key
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> q
Import the included PGP keys. (See our PGP key policies for important information about these keys.)
$ gpg --import qubes-secpack/keys/*/*
gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
gpg: no valid OpenPGP data found.
gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
gpg: Total number processed: 17
gpg: imported: 16
gpg: unchanged: 1
gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 6 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1 valid: 6 signed: 0 trust: 6-, 0q, 0n, 0m, 0f, 0u
Verify signed Git tags.
$ cd qubes-secpack/
$ git tag -v `git describe`
object 266e14a6fae57c9a91362c9ac784d3a891f4d351
type commit
tag marmarek_sec_266e14a6
tagger Marek Marczykowski-Górecki 1677757924 +0100
Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits in virtue of being validly signed by the QMSK.
Verify PGP signatures, e.g.:
$ cd QSBs/
$ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
$ cd ../canaries/
$ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
gpg: using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
$ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
gpg: using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.
For this announcement (QSB-102), following the same filename pattern, the commands are:
$ cd QSBs/
$ gpg --verify qsb-102-2024.txt.sig.marmarek qsb-102-2024.txt
$ gpg --verify qsb-102-2024.txt.sig.simon qsb-102-2024.txt
You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-102 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
We are thrilled to announce the winners of the Kubuntu Brand Graphic Design contest and the Wallpaper Contest! These competitions brought out the best in creativity, innovation, and passion from the Kubuntu community, and we couldnāt be more pleased with the results.
Kubuntu Brand Graphic Design Contest Winners
The Kubuntu Council is excited to reveal that after much deliberation and awe at the sheer talent on display, the winner of the Kubuntu Brand Graphic Design contest is Fabio Maricato! Fabioās entry captivated us with its innovative approach and deep understanding of the Kubuntu brand essence. Coming in a close second is Desodi, whose creative flair and original design impressed us all. In third place, we have John Tolorunlojo, whose submission showcased exceptional creativity and skill.
Wallpaper Contest Honours
For the Wallpaper Contest, we had the pleasure of selecting three outstanding entries that will grace the screens of Kubuntu 24.04 LTS users worldwide. Congratulations to Gregorio, Dilip, and Jack Sharp for their stunning wallpaper contributions. Each design brings a unique flavor to the Kubuntu desktop experience, embodying the spirit of our community.
A Heartfelt Thank You
We extend our deepest gratitude to every participant who shared their artistry and vision with us. The number and quality of the submissions were truly beyond our expectations, reflecting the vibrant and creative spirit of the Kubuntu community. Itās your passion and engagement that make Kubuntu not just a powerful operating system, but a canvas for creativity.
Looking Ahead
The Kubuntu Council is thrilled with the success of these contests, and we are already looking forward to future opportunities to showcase the talents within our community. We believe that these winning designs not only celebrate the individuals behind them but also symbolise the collective creativity and innovation that Kubuntu stands for.
Stay tuned for the official inclusion of the winning wallpaper designs in Kubuntu 24.04 LTS, and keep an eye on our website for future contests and community events.
Once again, congratulations to our winners and a massive thank you to all who participated. Your contributions continue to shape and enrich the Kubuntu experience for users around the globe.
Celebrate with Us!
Check out our special banner commemorating the announcement and join us in celebrating the creativity and dedication of our winners and participants alike. Your efforts have truly made this contest a memorable one.
Hereās to many more years of innovation, creativity, and community in the world of Kubuntu.
The results of our contest are proudly displayed in our GitHub repository
Today, Canonical is thrilled to announce our expanded collaboration with Google Cloud to provide Ubuntu images for Google Distributed Cloud. This partnership empowers Google Distributed Cloud customers with security-focused Ubuntu images, ensuring they meet the most stringent compliance standards.
Since 2021, Google Cloud has built a strong partnership with Canonical. This collaboration highlights both companiesā commitment to providing customers with the air-gapped cloud solutions they need. Through this partnership, Google Cloud delegates foundational image creation and maintenance to Canonicalās expertise, allowing Google Cloud to focus on the heart of Google Distributed Cloud development. Canonicalās dedication to rigorous testing upholds the reliability that data centers demand. Moreover, proactive support helps swiftly tackle critical issues, ensuring seamless data center operations. This partnership is a testament to the power of strategic collaborations in the tech sector:
GDC Ready OS Images: Canonical supports multiple active releases of Google Distributed Cloud (1.9.x, 1.10.x, 1.11.x, and 1.12.x) ensuring Google Cloud has flexibility and choice.
Risk Mitigation: Canonical employs a two-tiered image systemāādevelopmentā and āstable.ā This allows for thorough testing of changes before they are released into the stable production environment, minimizing potential problems.
These key benefits are the result of our unwavering pursuit of progress and innovation. Google Distributed Cloud customers can expect to reap the rewards of our continuous hard work:
FIPS & CIS Compliance: Google Distributed Cloud customers operating in highly regulated industries can confidently deploy FIPS-compliant and CIS-hardened Ubuntu images, knowing they adhere to critical security standards.
Multi-distro Support: Ubuntuās adaptability allows Google Distributed Cloud users to run a diverse range of distro images, maximizing their choice and flexibility within the cloud environment.
Air-gapped Innovation: Canonical and Google Cloud are dedicated to supporting air-gapped cloud technology, providing secure, cutting-edge solutions for customers with even the most sensitive data requirements.
At Canonical, we're committed to open-source innovation. This collaboration with Google Cloud is a prime example of how we can work together to deliver industry-leading cloud solutions to our customers. We look forward to continued partnership and providing even more value to the Google Distributed Cloud ecosystem.
New subscription for IoT deployments brings security and long-term compliance to the most advanced open source stack
Nuremberg, Germany. 9 April 2024. Today, Canonical, the publisher of Ubuntu, announced the launch of Ubuntu Pro for Devices, a comprehensive offering that simplifies security and compliance for IoT device deployments. Ubuntu Pro for Devices provides 10 years of security maintenance for Ubuntu and thousands of open source packages, such as Python, Docker, OpenJDK, OpenCV, MQTT, OpenSSL, Go, and Robot Operating System (ROS). The subscription also provides device management capabilities through Landscape, Canonical's systems management tool, and access to Real-time Ubuntu for latency-critical use cases. Ubuntu Pro for Devices is available directly from Canonical, and from a wide range of original device manufacturers (ODMs) in Canonical's partner ecosystem, including ADLINK, AAEON, Advantech and DFI.
With this launch, Canonical is expanding its collaboration with ODMs as demand for open source security and compliance grows in the embedded space. Ubuntu Pro for Devices can be combined with Canonical's existing Ubuntu Certified Hardware programme to offer a best-in-class Ubuntu experience on devices out-of-the-box and for up to 10 years.
A secure open source supply chain
Today, most application stacks contain open source software, but companies don't always have the in-house expertise to secure and support their full stack. Canonical patches over 1,000 CVEs each year and provides a 10-year security maintenance commitment for popular toolchains like Python and Go, as well as commonly-used IoT software frameworks like ROS. Companies can consume secure and maintained open source with the same set of guarantees from the same vendor.
"As new legislation is introduced for IoT embedded devices, it is crucial that our customers have a means to securely maintain the operating system along with commonly used applications and dependencies," said Ethan Chen, General Manager of the Edge Computing Platforms BU at ADLINK. "Ubuntu Pro ensures that IoT devices receive reliable security patches from a trusted source."
Streamlined compliance
The regulatory landscape is evolving, with the EU Cyber Resilience Act and the U.S. Cyber Trust Mark resulting in a growing need for reliable, long-term access to software security fixes. Ubuntu Pro provides access to critical vulnerability fixes for most of the open source packages enterprises use, providing security coverage for developers and peace of mind for CISOs.
"Many of our customers from across different sectors are using computer vision software that requires regulatory approval. In particular, the latest US regulation makes it important to provide timely CVE fixes for all of the components used in our products. Thanks to Ubuntu Pro for Devices, this is now covered," said Jason Huang, Director of AAEON's UP Division.
Ubuntu Pro for Devices offers more than security patching. It also provides certified modules and hardening profiles that enable organisations to achieve compliance with standards such as FIPS, HIPAA, PCI-DSS and others.
"Ubuntu is the most popular Linux distribution. Many of our public sector customers in the US need FIPS compliance, and Ubuntu Pro for Devices is a perfect solution for them," said Joe Chen, Director at Advantech.
Cost-effective and convenient fleet management
Remote device management is critical for IoT, as a lot of devices are physically inaccessible. Ubuntu Pro for Devices includes device management with Landscape, which automates security patching and audits across Ubuntu estates. Landscape allows administrators to manage their Ubuntu instances from a single portal. They can securely authenticate and add new devices to their IoT fleet, manage software versions and configurations, and monitor device performance and compliance. By grouping multiple devices together, administrators can perform these operations on numerous devices simultaneously, saving both time and effort.
"DFI leverages virtualisation technology to introduce a robust Workload Consolidation platform integrated with our embedded solutions for EV charging stations and other cutting-edge industrial applications. With the ability to use Landscape to manage devices built with DFI boards, we can now provide more reliable solutions to our customers with 10 years of security updates and streamlined fleet maintenance," said Jarry Chang, DFI Product Center General Manager.
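In practice, attaching a device to Ubuntu Pro and enrolling it in Landscape comes down to a few commands from the Ubuntu Pro client and the Landscape client. The sketch below assumes an Ubuntu device with the `ubuntu-advantage-tools` and `landscape-client` packages installed; the contract token, computer title and account name are placeholders for illustration, not real values:

```shell
# Attach the device to an Ubuntu Pro subscription (placeholder token)
sudo pro attach YOUR_CONTRACT_TOKEN

# Review which services (esm-infra, esm-apps, livepatch, fips, ...) are available and enabled
pro status

# Enable expanded security maintenance for packages in the universe repository
sudo pro enable esm-apps

# Enroll the device in a Landscape account for remote fleet management
# (computer title and account name are illustrative)
sudo landscape-config --computer-title "ev-charger-042" --account-name "example-account"
```

From that point, patch rollouts, audits and configuration changes can be driven from the Landscape portal rather than on each device individually.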
Learn more
Download our datasheet to learn about the capabilities offered in Ubuntu Pro for Devices.
To discuss your use case, contact Canonical or stop by our booth [4-354, Hall 4] at Embedded World in Nuremberg this week.