A few weeks ago, in episode 25 of Linux Matters Podcast I brought up the subject of ‘Coding Joy’. This blog post is an expanded follow-up to that segment. Go and listen to that episode - or not - it’s all covered here.
Not a Developer
I’ve said this many times - I’ve never considered myself a ‘Developer’. It’s not so much imposter syndrome, but plain facts. I didn’t attend university to study software engineering, and have never held a job with ‘Engineer’ or ‘Developer’ in the title.
(I do have Engineering Manager and Developer Advocate roles in my past, but in popey’s weird set of rules, those don’t count.)
I have written code over the years. Starting with BASIC on the Sinclair ZX81 and Sinclair Spectrum, I wrote stuff for fun and no financial gain. I also coded in Z80 & 6502 assembler, taught myself Pascal on my Epson 8086 PC in 1990, then QuickBasic and years later, BlitzBasic, Lua (via LÖVE) and more.
In the workplace, I wrote some alarmingly complex utilities in Windows batch scripts and later Bash shell scripts on Linux. In a past career, I would write ABAP in SAP - which turned into an internal product mildly amusingly called “Alan’s Tool”.
These were pretty much all coding for fun, though. Nobody specced up a project and assigned me as a developer on it. I just picked up the tools and started making something, whether that was a sprite routine in Z80 assembler, an educational CPU simulator in Pascal, or a spreadsheet uploader for SAP BiW.
In 2003, three years before Twitter launched in 2006, I made a service called ‘Clunky.net’. It was a bunch of PHP and Perl smashed together and published online with little regard for longevity or security. Users could sign up and send ’tweet’ style messages from their phone via SMS, which would be presented in a reverse-chronological timeline. It didn’t last, but I had fun making it while it did.
They were all fun side-quests.
None of this makes me a developer.
Volatile Memories
It’s rapidly approaching fifty years since I first wrote any code on my first computer. Back then, you’d typically write code and then either save it on tape (if you were patient) or disk (if you were loaded). Maybe you’d write it down - either before or after you typed it in - or perhaps you’d turn the computer off and lose it all.
When I studied for a BTEC National Diploma in Computer Studies at college, one of our classes was on the IBM PC with two floppy disc drives. The lecturer kept hold of all the floppies because we couldn’t be trusted not to lose, damage or forget them. Sometimes the lecturer was held up at the start of class, so we’d be sat twiddling our thumbs for a bit.
In those days, when you booted the PC with no floppy inserted, it would go directly into BASICA, like the 8-bit microcomputers before it. I would frequently start writing something, anything, to pass the time.
With no floppy disks on hand, the code - beautiful as it was - would be lost. The lecturer often reset the room when they entered, hitting a big red ‘Stop’ button, which instantly powered down all the computers, losing whatever ‘work’ you’d done.
I was probably a little irritated in the moment, just as I was when the RAM pack wobbled on my ZX81, losing everything. You move on, though, and make something else, or get on with your college work, and soon forget about it.
Or you bitterly remember it and write a blog post four decades later. Each to their own.
Sharing is Caring
This part was the main focus of the conversation when we talked about this on the show.
In the modern age, over the last ten to fifteen years or so, I’ve not done so much of the kind of coding I wrote about above. I certainly have done some stuff for work, mostly around packaging other people’s software as snaps or writing noddy little shell scripts. But I lost a lot of the ‘joy’ of coding recently.
Why?
I think a big part is the expectation that I’d make the code available to others. The public scrutiny others give your code may have been a factor. The pressure I felt that I should put my code out and continue to maintain it rather than throw it over the wall wouldn’t have helped.
I think I was so obsessed with doing the ‘right’ thing that coding ‘correctly’ or following standards and making it all maintainable became a cognitive roadblock.
I would start writing something and then begin wondering, ‘How would someone package this up?’ and ‘Am I using modern coding standards, toolkits, and frameworks?’ This held me back from the joy of coding in the first place. I was obsessing too much over other people’s opinions of my code and whether someone else could build and run it.
I never used to care about this stuff for personal projects, and it was a lot more joyful an experience - for me.
I used to have an idea, pick up a text editor and start coding. I missed that.
Realisation
In January this year, Terence Eden wrote about his escapades making a FourSquare-like service using ActivityPub and OpenStreetMap. When he first mentioned this on Mastodon, I grabbed a copy of the code he shared and had a brief look at it.
The code was surprisingly simple, scrappy, kinda working, and written in PHP. I was immediately thrown back twenty years to my terrible ‘Clunky’ code and how much fun it was to throw together.
In February, I bumped into Terence at State of Open Con in London and took the opportunity to quiz him about his creation. We discussed his choice of technology (PHP), and the simple ’thrown together in a day’ nature of the project.
At that point, I had a bit of a light-bulb moment, realising that I could get back to joyful coding. I don’t have to share everything; not every project needs to be an Open-Source Opus.
I can open a text editor, type some code, and enjoy it, and that’s enough.
Joy Rediscovered
I had an idea for a web application and wanted to prototype something without too much technological research or overhead. So I created a folder on my home server, ran php -S 0.0.0.0:9000 in a terminal there, made a skeleton index.php and pointed a browser at the address. Boom! Application created!
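For anyone who wants the same zero-setup loop without PHP, here is a rough equivalent using only Python’s standard library - purely an illustrative sketch of the technique, not what I actually ran:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class App(BaseHTTPRequestHandler):
    """One-file prototype app: every GET returns a plain-text page."""

    def do_GET(self):
        body = b"Hello, joyful coding!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the terminal quiet while hacking

def serve(port=9000):
    # roughly the stdlib equivalent of `php -S 0.0.0.0:9000`
    HTTPServer(("0.0.0.0", port), App).serve_forever()

# serve()  # uncomment, then point a browser at http://<server>:9000/
```

One file, one command, instant feedback - the framework research can wait until the idea proves itself.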
I created some horribly insecure and probably unmaintainable PHP that will almost certainly never see the light of day.
I had fun doing it though. Which is really the whole point.
The latest release of uCareSystem, version 24.04.0, introduces enhanced maintenance and cleanup capabilities for Ubuntu and its derivatives. It’s definitely worth exploring the new features. As uCareSystem joyfully celebrates its 15th anniversary, its latest release unveils a host of new features that I have incorporated to address the evolving needs since the previous version, 4.4.0.
Over coffee this morning, I stumbled upon simone, a fledgling Open-Source tool for repurposing YouTube videos as blog posts. The Python tool creates a text summary of the video and extracts some contextual frames to illustrate the text.
A neat idea! In my experience, software engineers are often tasked with making demonstration videos, but other engineers commonly prefer consuming the written word over watching a video. I took simone for a spin, to see how well it works. Scroll down and tell me what you think!
I was sat in front of my work laptop, which is a Mac, so roughly speaking, this is what I did:
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
  warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Traceback (most recent call last):
  File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 255, in run_tesseract
    proc = subprocess.Popen(cmd_args, **subprocess_args())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1026, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/subprocess.py", line 1955, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Program Files/Tesseract-OCR/tesseract.exe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 47, in <module>
    blogpost(url)
  File "/Users/alan/Work/rajtilakjee/simone/src/main.py", line 39, in blogpost
    score = scores.score_frames()
            ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alan/Work/rajtilakjee/simone/src/utils/scorer.py", line 20, in score_frames
    extracted_text = pytesseract.image_to_string(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 423, in image_to_string
    return {
           ^
  File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 426, in <lambda>
    Output.STRING: lambda: run_and_get_output(*args),
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 288, in run_and_get_output
    run_tesseract(**kwargs)
  File "/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/pytesseract/pytesseract.py", line 260, in run_tesseract
    raise TesseractNotFoundError()
pytesseract.pytesseract.TesseractNotFoundError: C:/Program Files/Tesseract-OCR/tesseract.exe is not installed or it's not in your PATH. See README file for more information.
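The giveaway is the C:/Program Files/Tesseract-OCR/tesseract.exe in the error - on a Mac! It looks like a hard-coded Windows path to the Tesseract binary. A speculative sketch of the kind of fix I’d expect (not simone’s actual code), resolving the binary from the PATH via pytesseract’s documented tesseract_cmd attribute:

```python
import shutil

def find_tesseract():
    """Locate the tesseract binary on PATH (e.g. /opt/homebrew/bin/tesseract)."""
    return shutil.which("tesseract")

def configure_pytesseract():
    """Point pytesseract at the local binary instead of a hard-coded Windows path."""
    try:
        import pytesseract
    except ImportError:
        return False  # pytesseract not installed; nothing to configure
    cmd = find_tesseract()
    if cmd:
        pytesseract.pytesseract.tesseract_cmd = cmd
        return True
    return False
```

With something like that in place (and `brew install tesseract` done), the second run got further: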
(.venv) $ python src/main.py
Enter YouTube URL: https://www.youtube.com/watch?v=VDIAHEoECfM
/Users/alan/Work/rajtilakjee/simone/.venv/lib/python3.12/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Look for results
(.venv) $ ls -l generated_blogpost.txt *.jpg
-rw-r--r--  1 alan staff   2163 26 Apr 09:26 generated_blogpost.txt
-rw-r--r--@ 1 alan staff 132984 26 Apr 09:27 top_frame_4_score_106.jpg
-rw-r--r--  1 alan staff 184705 26 Apr 09:27 top_frame_5_score_105.jpg
-rw-r--r--  1 alan staff 126148 26 Apr 09:27 top_frame_9_score_101.jpg
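Judging by the output filenames (top_frame_4_score_106.jpg and friends) and the pytesseract call in the earlier traceback, the scoring appears to rank frames by how much legible text OCR pulls out of them. A speculative sketch of that idea - not simone’s actual implementation:

```python
def score_frames(frame_texts):
    """Rank frames by the amount of text OCR recovered from each one.

    frame_texts maps a frame name to the string OCR extracted from it;
    higher word counts suggest slides or terminals worth keeping.
    """
    scored = {name: len(text.split()) for name, text in frame_texts.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Frames with dense on-screen text float to the top:
top = score_frames({
    "frame_4": "a slide full of bullet points and a terminal transcript",
    "frame_5": "title card",
    "frame_9": "",
})
```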
In my test I pointed simone at a short demo video from my employer, Anchore’s YouTube channel. The results are below, with no editing; I even kept the typos in. The images at the bottom of this post are frames from the video that simone selected.
Static stick checker tool helps developers identify security vulnerabilities in Docker images by running open-source security checks and generating remediation recommendations. This blog post summarizes a live demo of the tool’s capabilities.
How it works
The tool works by:
Downloading and analyzing the Docker image.
Detecting the base operating system distribution and selecting the appropriate stick profile.
Running open-source security checks on the image.
Generating a report of identified vulnerabilities and remediation actions.
Demo Walkthrough
The demo showcases the following steps:
Image preparation: Uploading a Docker image to a registry.
Tool execution: Running the static stick checker tool against the image.
Results viewing: Analyzing the generated stick results and identifying vulnerabilities.
Remediation: Implementing suggested remediation actions by modifying the Dockerfile.
Re-checking: Running the tool again to verify that the fixes have been effective.
Key findings
The static stick checker tool identified vulnerabilities in the Docker image in areas such as:
Verifying file hash integrity.
Configuring cryptography policy.
Verifying file permissions.
Remediation scripts were provided to address each vulnerability.
By implementing the recommended changes, the security posture of the Docker image was improved.
Benefits of using the static stick checker tool
Identify security vulnerabilities early in the development process.
Automate the remediation process.
Shift security checks leftward in the development pipeline.
Reduce the burden on security teams by addressing vulnerabilities before deployment.
Conclusion
The Ancors static stick checker tool provides a valuable tool for developers to improve the security of their Docker images. By proactively addressing vulnerabilities during the development process, organizations can ensure their applications are secure and reduce the risk of security incidents
Here’s the images it pulled out:
Not bad! It could be better - getting the company name wrong, for one!
I can imagine using this to create a YouTube description, or use it as a skeleton from which a blog post could be created. I certainly wouldn’t just pipe the output of this into blog posts! But so many videos need better descriptions, and this could help!
On April 18, 2024, the highly anticipated Intel Channel Partner Networking Fair was held at the JW Marriott Hotel in Hong Kong. At this industry event, deepin was invited to participate in the Intel Demo Showcase segment, where it showcased the latest achievements in the application of the deepin operating system and artificial intelligence large-scale models to elite members of the global technology community. Since September 2023, when deepin announced its integration of large-scale models and achieved intelligent upgrades for multiple self-developed applications, it has been continuously strengthening its exploration in the integration of operating systems and AI. Currently, it ...
Ubuntu 24.04 LTS, codenamed “Noble Numbat”, is here. This release continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, together with the community and our partners, to introduce new features and fix bugs.
Our 10th Long Term Supported release sets a new standard in performance engineering, enterprise security and developer experience.
Ubuntu Desktop brings the Subiquity installer to an LTS for the first time. In addition to a refreshed user experience and a minimal install by default, the installer now includes experimental support for ZFS and TPM-based full disk encryption and the ability to import auto-install configurations. Post install, users will be greeted with the latest GNOME 46 alongside a new App Center and firmware-updater. Netplan is now the default for networking configuration and supports bidirectionality with NetworkManager.
Ubuntu now enables frame pointers by default on 64-bit architectures to enable CPU and off-CPU profiling for workload optimisation, alongside a suite of critical performance tools pre-installed. The Linux 6.8 kernel now enables low-latency features by default. For IoT vendors leveraging 32-bit arm hardware, our armhf build has been updated to resolve the upcoming 2038 issue by implementing 64-bit time_t in all necessary packages.
As always, Ubuntu ships with the latest toolchain versions. .NET 8 is now fully supported on Ubuntu 24.04 LTS (and Ubuntu 22.04 LTS) for the full lifecycle of the release and OpenJDK 21 and 17 are both TCK certified to adhere to Java interoperability standards. Ubuntu 24.04 LTS ships Rust 1.75 and a simpler Rust toolchain snap framework to enable future rust versions to be delivered to developers on this release in years to come.
The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being released today. More details can be found for these at their individual release notes under the Official Flavours section:
Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Extended Security Maintenance).
Users of Ubuntu 23.10 will soon be offered an automatic upgrade to 24.04. Users of 22.04 LTS will be offered the automatic upgrade when 24.04.1 LTS is released, which is scheduled for the 15th of August. For further information about upgrading, see:
As always, upgrades to the latest version of Ubuntu are entirely free of charge.
We recommend that all users read the release notes, which document caveats and workarounds for known issues, and provide more in-depth information on the release itself. They are available at:
Ubuntu is a full-featured Linux distribution for desktops, laptops, IoT, cloud, and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:
The Kubuntu Team is happy to announce that Kubuntu 24.04 has been released, featuring the ‘beautiful’ KDE Plasma 5.27: simple by default, powerful when needed.
Codenamed “Noble Numbat”, Kubuntu 24.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.
Note: For upgrades from 23.10, there may be a delay of a few hours to days between the official release announcements and the Ubuntu Release Team enabling upgrades.
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 24.04 LTS, code-named “Noble Numbat”. This marks Ubuntu Studio’s 34th release. This release is a Long-Term Support release and as such, it is supported for 3 years (36 months, until April 2027).
Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a more complete list of changes and known issues. Listed here are some of the major highlights.
You can download Ubuntu Studio 24.04 LTS from our download page.
Special Notes
The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32 and may not be readable when burned to a standard DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image or burning to a Dual-Layer DVD.
Minimum installation media requirements: Dual-Layer DVD or 8GB USB drive.
Full updated information, including Upgrade Instructions, are available in the Release Notes.
Please note that upgrading from 22.04 before the release of 24.04.1, due August 2024, is unsupported.
Upgrades from 23.10 should be enabled within a month after release, so we appreciate your patience.
New This Release
All-New System Installer
In cooperation with the Ubuntu Desktop Team, we have an all-new Desktop installer. This installer uses the underlying code of the Ubuntu Server installer (“Subiquity”), which has been in use for years, with a frontend coded in Flutter. This took a large amount of work for this release, and we were able to help a lot of other official Ubuntu flavors transition to this new installer.
Be on the lookout for a special easter egg when the graphical environment for the installer first starts. For those of you who have been long-time users of Ubuntu Studio since our early days (even before Xfce!), you will notice exactly what it is.
PipeWire 1.0.4
Now for the big one: PipeWire is now mature, and this release contains PipeWire 1.0. With PipeWire 1.0 comes the stability and compatibility you would expect from multimedia audio. In fact, at this point, we recommend PipeWire usage for Professional, Prosumer, and Everyday audio needs. At Ubuntu Summit 2023 in Riga, Latvia, our project leader Erich Eickmeyer used PipeWire to demonstrate live audio mixing with much success and has since done some audio mastering work using it. JACK developers even consider it to be “JACK 3”.
PipeWire’s JACK compatibility is configured to work out-of-the-box and is zero-latency internally. System latency is configurable via Ubuntu Studio Audio Configuration.
However, if you would rather use straight JACK 2 instead, that’s also possible. Ubuntu Studio Audio Configuration can disable and enable PipeWire’s JACK compatibility on-the-fly. From there, you can simply use JACK via QJackCtl.
With this, we consider audio production with Ubuntu Studio so mature that it can now rival operating systems such as macOS and Windows in ease-of-use since it’s ready to go out-of-the-box.
Deprecation of PulseAudio/JACK setup/Studio Controls
Due to the maturity of PipeWire, we now consider the traditional PulseAudio/JACK setup, where JACK would be started/stopped by Studio Controls and bridged to PulseAudio, deprecated. This configuration is still installable via Ubuntu Studio Audio Configuration, but we do not recommend it. Studio Controls may return someday as a PipeWire fine-tuning solution, but for now it is unsupported by the developer. For that reason, we recommend users not use this configuration. If you do, it is at your own risk and no support will be given. In fact, it’s likely to be dropped for 24.10.
Ardour 8.4
While this does not represent the latest release of Ardour, Ardour 8.4 is a great release. If you would like the latest release, we highly recommend purchasing one-time or subscribing to Ardour directly from the developers to help support this wonderful application. Also, for that reason, this will be an application we will not directly backport. More on that later.
Ubuntu Studio Audio Configuration
Ubuntu Studio Audio Configuration has undergone a UI overhaul and contains the ability to start and stop a Dummy Audio Device, which can also be configured to start or stop upon login. When assigned as the default, this will free up channels that would normally be assigned to your system audio to be assigned to a null device.
Meta Package for Music Education
In cooperation with Edubuntu, we have created a metapackage for music education. This package is installable from Ubuntu Studio Installer and includes the following packages:
FMIT: Free Musical Instrument Tuner, a tool for tuning musical Instruments (also included by default)
GNOME Metronome: Exactly what it sounds like (pun unintended): a metronome.
Minuet: Ear training for intervals, chords, scales, and more.
MuseScore: Create, playback, and print sheet music for free (this one is no stranger to the Ubuntu Studio community)
Piano Booster: MIDI player/game that displays musical notes and teaches you how to play piano, optionally using a MIDI keyboard.
Solfege: Ear training program for harmonic and melodic intervals, chords, scales, and rhythms.
Deprecation of Ubuntu Studio Backports Is In Effect
As stated in the Ubuntu 23.10 Release Announcement, the Ubuntu Studio Backports PPA is now deprecated in favor of the official Ubuntu Backports repository. However, the Backports repository only works for LTS releases and for good reason. There are a few requirements for backporting:
It must be an application which already exists in the Ubuntu repositories
It must be an application which would not otherwise qualify for a simple bugfix, which would then qualify it to be a Stable Release Update. This means it must have new features.
It must not rely on new libraries or new versions of libraries.
It must exist within a later supported release or the development release of Ubuntu.
If you have a suggestion for an application for which to backport that meets those requirements, feel free to join and email the Ubuntu Studio Users Mailing List with your suggestion with the tag “[BPO]” at the beginning of the subject line. Backports to 22.04 LTS are now closed and backports to 24.04 LTS are now open. Additionally, suggestions must pertain to Ubuntu Studio and preferably must be applications included with Ubuntu Studio. Suggestions can be rejected at the Project Leader’s discretion.
One package that is exempt to backporting is Ardour. To help support Ardour’s funding, you may obtain later versions directly from them. To do so, please one-time purchase or subscribe to Ardour from their website. If you wish to get later versions of Ardour from us, you will have to wait until the next regular release of Ubuntu Studio, due in October 2024.
We’re back on Matrix
You’ll notice that the menu links to our support chat and on our website will now take you to a Matrix chat. This is due to the Ubuntu community carving its own space within the Matrix federation.
However, this is not only a support chat. This is also a creativity discussion chat. You can pass ideas to each other and you’re welcome to it if the topic remains within those confines. However, if a moderator or admin warns you that you’re getting off-topic (or the intention for the chat room), please heed the warning.
This is a persistent connection: if you close the window (or chat), you won’t lose your place; you need only sign back in to resume the chat.
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox in a native .deb package. We have found that, after numerous improvements, Firefox now performs just as well as the native .deb package did.
Thunderbird also became a snap during this cycle for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be packaged in a traditional Debian source package. While such apps do have a build system to create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!
Looking Toward the Future
Plasma 6
Ubuntu Studio, in cooperation with Kubuntu, will be switching to Plasma 6 during the 24.10 development cycle. Likewise, Lubuntu will be switching to LXQt 2.0 and Qt 6, so the three flavors will be cooperating to do the move.
New Look
Ubuntu Studio has been using the same theming, “Materia” (except for the 22.04 LTS release which was a re-colored Breeze theme) since 19.04. However, Materia has gone dead upstream. To stay consistent, we found a fork called “Orchis” which seems to match closely and will be switching to that. More on that soon.
Minimal Installation
The new system installer has the capability to do minimal installations. This was something we did not have time to implement this cycle but intend to do for 24.10. This will let users install a minimal desktop to get going and then install what they need via Ubuntu Studio Installer. This will make a faster installation process but will not make the installation .iso image smaller. However, we have an idea for that as well.
Minimal Installation .iso Image
We are going to research what it will take to create a minimal installer .iso image that will function much like the regular .iso image minus the ability to install everything and allow the user to customize the installation via Ubuntu Studio Installer. This should lead to a much smaller initial download. Unlike creating a version with a different desktop environment, the Ubuntu Technical Board has been on record as saying this would not require going through the new flavor creation process. Our friends at Xubuntu recently did something similar.
Get Involved!
A wonderful way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!
Our project leader, Erich Eickmeyer, is now working on Ubuntu Studio at least part-time, and is hoping that the users of Ubuntu Studio can give enough to generate a monthly part-time income. Your donations are appreciated! If other distributions can do it, surely we can! See the sidebar for ways to give!
Special Thanks
Huge special thanks for this release go to:
Eylul Dogruel: Artwork, Graphics Design
Ross Gammon: Upstream Debian Developer, Testing, Email Support
Sebastien Ramacher: Upstream Debian Developer
Dennis Braun: Upstream Debian Developer
Rik Mills: Kubuntu Council Member, help with Plasma desktop
Scarlett Moore: Kubuntu Project Lead, help with Plasma desktop
Zixing Liu: Simplified Chinese translations in the installer
Simon Quigley: Lubuntu Release Manager, help with Qt items, Core Developer stuff, keeping Erich sane and focused
Steve Langasek: Help with livecd-rootfs changes to make the new installer work properly.
Dan Bungert: Subiquity, seed fixes
Dennis Loose: Ubuntu Desktop Provision (installer)
Mauro Gaspari: Tutorials, Promotion, and Documentation, Testing, keeping Erich sane
Krytarik Raido: IRC Moderator, Mailing List Moderator
Erich Eickmeyer: Project Leader, Packaging, Development, Direction, Treasurer
A Note from the Project Leader
When I started out working on Ubuntu Studio six years ago, I had a vision of making it not only the easiest Linux-based operating system for content creation, but the easiest content creation operating system… full-stop.
With the release of Ubuntu Studio 24.04 LTS, I believe we have achieved that goal. No longer do we have to worry about whether an application is JACK or PulseAudio or… whatever. It all just works! Audio applications can be patched to each other!
If an audio device doesn’t depend on complex drivers (i.e. if the device is class-compliant), it will just work. If a user wishes to lower the latency or change the sample rate, we have a utility that does that (Ubuntu Studio Audio Configuration). If a user wants finer control using pure JACK via QJackCtl, they can do that too!
I honestly don’t know how I would replicate this on Windows, and replicating on macOS would be much harder without downloading all sorts of applications. With Ubuntu Studio 24.04 LTS, it’s ready to go and you don’t have to worry about it.
Where we are now is a dream come true for me, and something I’ve been hoping to see Ubuntu Studio become. And now, we’re finally here, and I feel like it can only get better.
Canonical’s 10th Long Term Supported release sets a new standard in performance engineering, enterprise security and developer experience.
London, 25 April 2024.
Today Canonical announced the release of Ubuntu 24.04 LTS, codenamed “Noble Numbat”, available to download and install from https://ubuntu.com/download.
Ubuntu 24.04 LTS builds on the advancements of the last three interim releases as well as the contributions of open source developers from around the world to ensure a secure, optimised and forward looking platform.
“Ubuntu 24.04 LTS takes a bold step into performance engineering and confidential computing to deliver an enterprise-grade innovation platform, supported for at least 12 years”, said Mark Shuttleworth, CEO of Canonical. “For developers we are delighted to announce TCK certified Java, an LTS for .NET and the latest Rust toolchain.”
Performance engineering tools pre-enabled and pre-loaded
Canonical is dedicated to raising the bar for quality and performance across the entire Ubuntu ecosystem.
Ubuntu 24.04 LTS delivers the latest Linux 6.8 kernel with improved syscall performance, nested KVM support on ppc64el, and access to the newly landed bcachefs filesystem. In addition to upstream improvements, Ubuntu 24.04 LTS has merged low-latency kernel features into the default kernel, reducing kernel task scheduling delays.
Ubuntu 24.04 LTS also enables frame pointers by default on all 64-bit architectures so that performance engineers have ready access to accurate and complete flame graphs as they profile their systems for troubleshooting and optimisation.
“Frame pointers allow more complete CPU profiling and off-CPU profiling. The performance wins that these can provide far outweigh the comparatively tiny loss in performance. Ubuntu enabling frame pointers by default will be a huge win for performance engineering and the default developer experience”, said Brendan Gregg, Computer Performance Expert and Fellow at Intel. Tracing with bpftrace is now standard in Ubuntu 24.04 LTS, alongside pre-existing profiling tools to provide site reliability engineers with immediate access to essential resources.
Integrated workload accelerators bring additional performance improvements. Canonical and Intel worked together to integrate Intel® QuickAssist Technology (Intel® QAT) for the first time ever in an LTS. Intel QAT enables users to accelerate encryption and compression in order to reduce CPU utilisation and improve networking and storage application performance on 4th Gen and newer Intel Xeon Scalable processors.
“Ubuntu is a natural fit to enable the most advanced Intel features. Canonical and Intel have a shared philosophy of enabling performance and security at scale across platforms”, said Mark Skarpness, Vice President and General Manager of System Software Engineering at Intel.
Increased developer productivity with LTS toolchains
Ubuntu 24.04 LTS includes Python 3.12, Ruby 3.2, PHP 8.3 and Go 1.22 with additional focus dedicated to the developer experience for .NET, Java and Rust.
With the introduction of .NET 8, Ubuntu is taking a significant step forward in supporting the .NET community. .NET 8 will be fully supported on Ubuntu 24.04 LTS and 22.04 LTS for the entire lifecycle of both releases, enabling developers to upgrade their applications to newer .NET versions prior to upgrading their Ubuntu release. This .NET support has also been extended to the IBM System Z platform.
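Installing the supported toolchain is a single package away; a sketch assuming the dotnet8 package name used by Ubuntu's .NET packaging:

```shell
# Install the .NET 8 SDK and runtime straight from the Ubuntu archive,
# no third-party feed required (the package name is an assumption based
# on Ubuntu's .NET packaging).
sudo apt update
sudo apt install -y dotnet8
dotnet --list-sdks   # confirm which SDK versions are installed
```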
“We are pleased about the release of Canonical Ubuntu 24.04 LTS and the increased performance, developer productivity, and security that it provides our joint customers,” said Jeremy Winter, Corporate Vice President, Azure Cloud Native. “Ubuntu is an endorsed Linux distro on Microsoft Azure, and an important component for many of Microsoft’s technologies, including .NET, Windows Subsystem for Linux, Azure Kubernetes Service, and Azure confidential computing. Microsoft and Canonical have a close engineering relationship spanning everything from update infrastructure in Azure to developer tooling, notably .NET 8 which is part of the Noble Numbat release from day one. We look forward to continuing our strong collaboration with Canonical to enhance developer productivity and provide a robust experience for Ubuntu on Azure.”
For Java developers, OpenJDK 21 is the default in Ubuntu 24.04 LTS while maintaining support for versions 17, 11, and 8. OpenJDK 17 and 21 are also TCK certified, which means they adhere to Java standards and ensure interoperability with other Java platforms. A special FIPS-compliant OpenJDK 11 package is also available for Ubuntu Pro users.
Ubuntu 24.04 LTS ships with Rust 1.75 and a simpler Rust toolchain snap framework. This will support the increasing use of Rust in key Ubuntu packages, like the kernel and Firefox, and enables future Rust versions to be delivered to developers on 24.04 LTS in years to come.
New management tools for Ubuntu Desktop and WSL
For the first time in an LTS, Ubuntu Desktop now uses the same installer technology as Ubuntu Server. This means that desktop administrators can now use image customisation tools like autoinstall and cloud-init to create tailored experiences for their developers. The user interface has also received a makeover, with a modern design built in Flutter.
For those managing mixed Windows and Ubuntu environments, the Active Directory Group Policy client available via Ubuntu Pro now supports enterprise proxy configuration, privilege management and remote script execution.
Canonical continues to invest in Ubuntu on Windows Subsystem for Linux (WSL) as a first class platform for developers and data scientists. Starting with Ubuntu 24.04 LTS, Ubuntu on WSL now supports cloud-init to enable image customisation and standardisation across developer estates.
Confidential computing on the cloud and private data centres
Confidential computing secures data at runtime from vulnerabilities within the host privileged system software, including the hypervisor. It also protects data against unauthorised access by infrastructure administrators. Today, Ubuntu offers the most extensive portfolio of confidential virtual machines, available across Microsoft Azure, Google Cloud, and Amazon Web Services.
Ubuntu is also the first and only Linux distribution to support confidential GPUs on the public cloud, starting with a preview on Microsoft Azure. Building on the silicon innovation of NVIDIA H100 Tensor Core GPUs and AMD 4th Gen EPYC processors with SEV-SNP, Ubuntu confidential VMs are ideal to perform AI training and inference tasks on sensitive data.
Ubuntu also supports confidential computing in private data centres. Thanks to a strategic collaboration between Intel and Canonical, Ubuntu now seamlessly supports Intel® Trust Domain Extensions (Intel® TDX) on both the host and guest sides, starting with an Intel-optimised Ubuntu 23.10 build. With no changes required to the application layer, VM isolation with Intel TDX greatly simplifies the porting and migration of existing workloads to a confidential computing environment.
12 years of support with new Ubuntu Pro add-on
To meet the needs of Canonical’s enterprise customers, Ubuntu 24.04 LTS comes with a 12-year commitment for security maintenance and support. As with other long-term supported releases, Noble Numbat will get five years of free security maintenance on the main Ubuntu repository. Ubuntu Pro extends that commitment to 10 years on both the main and universe repositories. Ubuntu Pro subscribers can purchase an extra two years with the Legacy Support add-on.
The 12 year commitment also applies to earlier Ubuntu releases, starting with 14.04 LTS. The LTS expansion offers benefits for individuals and organisations who want to gain even more stability while building on top of Ubuntu’s wide array of open source software libraries.
Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.
20 years in the making. Ubuntu 24.04 LTS brings together the latest advancements from the Linux ecosystem into a release that is built to empower open source developers and deliver innovation for the next 12 years.
The road to Noble Numbat has proven to be an exciting journey through successively ambitious interim releases, experimenting with new approaches to security (and tackling last minute CVEs), evolving our core desktop apps, and continuing our commitment to performance and compatibility across a wide array of hardware supported by the brand new Linux 6.8 kernel.
Whilst each LTS is a significant milestone, it’s never the final destination. We look forward to extending and expanding on what we’ve delivered today both within the lifecycle of Ubuntu 24.04 LTS and in future releases, always considering how we can live up to our mission, and the values of Ubuntu Desktop.
Let’s get into the details.
Rethinking provisioning
Addressing the fundamental issue of “how do I get Ubuntu on this machine?” is still one of our biggest priorities. Whilst today Ubuntu ships pre-installed on millions of desktops, laptops and workstations around the world thanks to our partnerships with OEMs like Dell, HP and Lenovo, more than ten times as many users install the operating system themselves each year. Here’s what we’re adding to simplify Ubuntu installations.
Unifying the stack
Over the last few interim releases we have aligned the underlying tech stack of the desktop installer to use the same Subiquity back end as Ubuntu Server, creating a consistent codebase across both platforms to deliver feature parity and easier maintainability. This is complemented by a brand new front end built in Flutter, which has been iterated on significantly over the past year to improve accessibility options, add clarity to the user experience and deliver a more polished result.
Additional encryption options
As part of this migration we’ve brought ZFS guided install back as a filesystem option and added support for ZFS encryption. We’ve also added improved guidance for dual-boot setups, particularly in relation to BitLocker. One major request from users has been support for hardware-backed full disk encryption and it makes its first appearance in an experimental form in Ubuntu 24.04 LTS. This implementation has certain limitations at launch which restrict its use to those devices that only require a generic kernel with no third party drivers or kernel modules, and does not currently support firmware upgrades. We intend to extend the hardware compatibility of this feature over time within the lifecycle of this release, with support for NVIDIA drivers as our first priority.
Integrated autoinstall
One of the most exciting new additions is the surfacing of autoinstall support in the graphical installer. Users or enterprises who want to create a customised, repeatable, automated installation flow can now provide the address of a local or remote autoinstall.yaml file and let Subiquity take over from there.
Check out this getting started tutorial to see how easy it is to automate user-creation, install additional apps and configure your filesystem in a format you can use across multiple machines.
This brings us a number of steps closer to the long term goal of zero touch provisioning, and we plan to add additional support for SSO authentication to access protected autoinstall files in a corporate environment at a later date.
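As an illustrative sketch (values are example placeholders, not canonical defaults), a minimal autoinstall file in the Subiquity format might look like this:

```shell
# Write a minimal autoinstall.yaml; the installer is pointed at this file
# (locally or over HTTP) and Subiquity takes over. All values below are
# example placeholders.
cat > autoinstall.yaml <<'EOF'
autoinstall:
  version: 1
  identity:
    hostname: noble-desktop
    username: dev
    # crypted hash, e.g. generated with: openssl passwd -6
    password: "$6$examplesalt$replace-with-a-real-hash"
  packages:
    - build-essential
    - git
  storage:
    layout:
      name: lvm
EOF
```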
New core apps
The new features don’t stop once you’ve installed Ubuntu Desktop. The new App Center (also Flutter-based) is another notable highlight, bringing a modern, more performant look to app discovery with clearer categories and application management functionality. Since its initial launch, the App Center has gained a new ratings service that allows users to vote on the quality of their apps and view an aggregated score from other users. These scores, combined with the other rich metadata available from the Snap Store, will make it easier for us to deliver additional discovery mechanisms such as top charts, most popular or recently updated.
While the App Center presents a snap-centric view by default to enable us to deliver these usability features, you can still use it to find and install deb packages via the search toggles.
As part of the new App Center development we’ve split firmware updates out into their own dedicated app. This not only allows a richer experience when managing firmware but also improves performance: on previous releases, the old Ubuntu Software application needed to remain permanently running in the background to check for new firmware.
New GNOME
Ubuntu Desktop 24.04 LTS continues our commitment to shipping the latest and greatest GNOME with version 46. This release delivers a host of performance and usability improvements including file manager search and performance, expandable notifications and consolidated settings options for easier access.
As usual, Ubuntu builds on the excellent foundation provided by GNOME with a number of extensions and additions. The colour picker allows users to tailor their desktop highlights to their taste, triple buffering improves performance on Intel and Raspberry Pi graphics drivers and the addition of the Tiling Assistant extension enables quarter screen tiling support for better workspace management.
Consistent networking across desktop and server with Netplan 1.0
In Ubuntu 23.10 we included Netplan as the default tool to configure networking on desktop, unifying the stack across server and cloud where Netplan has been the default since 2016. This change enables administrators to consistently configure their Ubuntu estate regardless of platform. With the recent release of Netplan 1.0, all platforms also benefit from new features around wireless compatibility and usability improvements such as netplan status --diff.
It is important to note that Netplan does not replace NetworkManager and will not impact workflows that prefer the previous configuration methods. NetworkManager has bidirectional integration with Netplan, meaning changes made in either configuration are updated and reflected in both.
You can read more about this bidirectionality in Lukas’ previous blog. To find out what’s new in Netplan 1.0, check out his recent announcement.
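For a flavour of what this configuration looks like, here is a minimal sketch (written to a local file for illustration; on a real system it would live in /etc/netplan/, and the interface name is a placeholder):

```shell
# A minimal Netplan config: bring up one ethernet device via DHCP.
# The interface name enp3s0 is an example placeholder.
cat > 99-demo.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: true
EOF
# On an Ubuntu 24.04 system you would then apply and inspect it with:
#   sudo netplan apply
#   netplan status --diff
```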
Comprehensive GPO support with Active Directory
Ubuntu Desktop is highly prevalent among engineering and data science teams in enterprise, academic and federal institutions around the globe, whilst Windows remains the corporate OS of choice for other departments. Canonical’s Landscape is highly effective at monitoring, managing and reporting on the compliance of Ubuntu instances across desktop, server and cloud; however, desktop IT administrators are often looking for solutions that help them manage mixed Ubuntu and Windows devices.
On-premise Active Directory has been the preferred management tool for Windows administrators for many years, and still represents the majority share of organisations. User authentication with Active Directory on Linux has been standard for some time as part of the System Security Services Daemon (SSSD); however, in Ubuntu 22.04 LTS we introduced additional support for Group Policy Objects (GPOs), allowing further compliance configuration. Over the course of our interim releases this GPO support has been expanded to cover the majority of device and user policies requested by Active Directory administrators, including:
Privilege management and removal of local admins
Remote script execution
Managing AppArmor profiles
Configuring network shares
Configuring proxy settings
Certificate autoenrollment
These come in addition to the pre-existing policies available on Ubuntu 22.04 LTS, delivering a best-in-class solution for administrators looking to empower their developers with Ubuntu Desktop.
Going forward, our attention is turning to supporting third-party cloud-based identity providers, following a proof-of-concept implementation of Azure Active Directory enrolment in Ubuntu 23.04. We are currently expanding on the functionality delivered in that release as part of a new implementation and look forward to talking more about it in the near future.
Finally, for those developers who remain on Windows due to internal policy requirements, we are continuing to invest in enterprise tooling for Ubuntu on Windows Subsystem for Linux (WSL). Ubuntu 24.04 LTS supports cloud-init instance initialisation, enabling administrators to seed custom config files on their developer’s machines to create standardised Ubuntu environments. This is a more robust solution than existing import/export workflows and represents the first step toward future management and compliance tooling.
Secure software management in Ubuntu Desktop 24.04 LTS
Under the hood, Ubuntu 24.04 LTS also includes a number of security improvements for those developing and distributing software within the Ubuntu ecosystem. In Ubuntu 23.10 we landed a new version of software-properties that changed the way Personal Package Archives (PPAs) are managed on Ubuntu.
PPAs are a critical tool for development, testing and customisation, enabling users to install software outside of the official Ubuntu archives. This allows for a great deal of software freedom but also comes with potential security risks due to the access they are granted to your OS. In Ubuntu 24.04 LTS, PPAs are now distributed as deb822-formatted .sources files with their signing key directly embedded into the file’s signed-by field. This establishes a 1:1 relationship between the key and the repository, meaning one key cannot be used to sign multiple repositories and removing a repository also removes its associated key. In addition, APT now requires repositories to be signed using stronger public key algorithms.
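For illustration, a deb822-style .sources file has this shape (the PPA URL and key material are placeholders, and the file is written locally here, whereas apt reads them from /etc/apt/sources.list.d/):

```shell
# Sketch of a deb822 PPA definition with its signing key embedded in the
# Signed-By field; URL and key body are placeholders, and blank lines
# inside the key are represented by a lone dot per the deb822 format.
cat > example-ppa.sources <<'EOF'
Types: deb
URIs: https://ppa.launchpadcontent.net/example/ppa/ubuntu/
Suites: noble
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mQINBF(armored key material elided)
 -----END PGP PUBLIC KEY BLOCK-----
EOF
```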
Unprivileged user namespace restrictions
Another significant security enhancement is the restriction of unprivileged user namespaces. These are a widely used feature of the Linux kernel that provide additional security isolation for applications that construct their own sandboxes, such as browsers, which would then use that space to execute untrusted web content. So far so good; however, the ability to create unprivileged user namespaces can expose additional attack surface within the Linux kernel and has featured as a step in a significant number of exploits. In Ubuntu 24.04 LTS, AppArmor is now used to selectively control access to unprivileged user namespaces on a per-application basis, so that only applications with a legitimate need can leverage this functionality.
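On a 24.04 system the restriction surfaces as a sysctl; a sketch (the knob name is kernel.apparmor_restrict_unprivileged_userns, and the fallback message only fires on kernels that lack it):

```shell
# Query whether the AppArmor user-namespace restriction is active; on
# non-Ubuntu or older kernels the sysctl is absent, hence the fallback.
sysctl kernel.apparmor_restrict_unprivileged_userns 2>/dev/null \
  || echo "kernel.apparmor_restrict_unprivileged_userns not available here"
# For local testing only (not recommended), the restriction could be
# relaxed with:
#   sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
```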
You can read more about this change as well as a range of other security enhancements to the latest Ubuntu release in the security team’s deep dive.
Improved proposed pocket
The proposed pocket is used as a staging area for software updates prior to their release to the wider Ubuntu user base. In the past this pocket has been an all-or-nothing experience, with users who opt in to updates from proposed needing to take all updates that were available. As a result the chance of introducing system instability was significantly increased, disincentivising those who wanted to provide testing support for specific features in advance of their wider availability.
In Ubuntu 24.04 LTS we have lowered the default apt priority of updates in “proposed” to allow users to specify exactly which packages they want to install and which they want to remain stable. This change is designed to increase the confidence of users who want to test specific features ahead of their general release.
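Mechanically, this means a user can raise the apt priority for just the package they want to test; a sketch (the package name is a placeholder, and the snippet is written to a local file here rather than to /etc/apt/preferences.d/):

```shell
# Pin a single hypothetical package to the -proposed pocket while the
# lowered default priority keeps every other package on the release pocket.
cat > proposed-pin.pref <<'EOF'
Package: my-package-under-test
Pin: release a=noble-proposed
Pin-Priority: 500
EOF
# On a real system, place the file under /etc/apt/preferences.d/ and run:
#   sudo apt install my-package-under-test/noble-proposed
```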
Building the future, together
This brings us to the end of this deep dive into the motivations and decisions behind just some of the features of the latest Long Term Supported release of Ubuntu Desktop. It has been a challenging and exciting experience to see each of these building blocks come together over the last three interim releases. With Ubuntu Desktop 24.04 LTS our goal has been to build a platform ready to stand the test of time, and the foundation for your next, great open source project.
As always, the story continues. Thank you for joining us.
Ubuntu MATE 24.04 is more of what you like, stable MATE Desktop on top of current Ubuntu.
This release rolls up some fixes and more closely aligns with Ubuntu. Read on to learn more 👓️
Ubuntu MATE 24.04 LTS
Thank you! 🙇
I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏
I’d like to acknowledge the close collaboration with all the Ubuntu flavour teams and the Ubuntu Foundations and Desktop Teams.
The assistance and support provided by Erich Eickmeyer (Ubuntu Studio), Simon Quigley (Lubuntu) and David Muhammed (Ubuntu Budgie) have been invaluable.
Thank you! 💚
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
We are pleased to announce the release of the next version of our distro, 24.04 Long Term Support. The LTS version is supported for 3 years, while the regular releases are supported for 9 months. The new release rolls up various fixes and optimisations that the Ubuntu Budgie team has released since the 22.04 release in April 2022. We also inherit hundreds of stability…
Thanks to the hard work of our contributors, Lubuntu 24.04 LTS has been released. Codenamed Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, and the 12th release of Lubuntu with LXQt as the default desktop environment. With Lubuntu 24.04 being a long-term support release, it will follow […]
Recently, there have been a lot of questions about LTS release-building procedures. We are making changes in that area, not least due to specific patterns in user behaviour, and now is a good time to discuss them.
We're happy to announce the release of Proxmox Backup Server 3.2. It's based on Debian 12.5 "Bookworm", but uses the newer Linux kernel 6.8, and includes ZFS 2.2.3.
Here are the highlights:
Debian Bookworm 12.5, with a newer Linux kernel 6.8
ZFS 2.2.3
Flexible notification system
Automated installation
Exclude backup groups from jobs
Overview of prune and GC jobs
We have included countless bugfixes and improvements for general client and backend usability; see...
The Xubuntu team is happy to announce the immediate release of Xubuntu 24.04.
Xubuntu 24.04, codenamed Noble Numbat, is a long-term support (LTS) release and will be supported for 3 years, until 2027.
Xubuntu 24.04 features the latest updates from Xfce 4.18, GNOME 46, and MATE 1.26. For new users and those coming from Xubuntu 22.04, you’ll appreciate the performance, stability, and improved hardware support found in Xubuntu 24.04. Xfce 4.18 is stable, fast, and full of user-friendly features. Enjoy frictionless bluetooth headphone connections and out-of-the-box touchpad support. Updates to our icon theme and wallpapers make Xubuntu feel fresh and stylish.
The final release images for Xubuntu Desktop and Xubuntu Minimal are available as torrents and direct downloads from xubuntu.org/download/.
As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.
We’d like to thank everybody who contributed to this release of Xubuntu!
Highlights and Known Issues
Highlights
Xfce 4.18 is included and well-polished since its initial release in December 2022
Xubuntu Minimal is included as an officially supported subproject
GNOME Software has been replaced by Snap Store and GDebi
Snap Desktop Integration is now included for improved snap package support
Firmware Updater has been added to support firmware updates from the Linux Vendor Firmware Service (LVFS)
Thunderbird is now distributed as a Snap package
Ubiquity has been replaced by the Flutter-based Ubuntu Installer to provide fast and user-friendly installation
PipeWire (and WirePlumber) are now included in Xubuntu
Improved hardware support for bluetooth headphones and touchpads
Color emoji is now included and supported in Firefox, Thunderbird, and newer Gtk-based apps
Significantly improved screensaver integration and stability
Known Issues
The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
Xorg crashes and the user is logged out after logging in or switching users on some virtual machines, including GNOME Boxes. (LP: #1861609)
You may experience choppy audio or poor system performance while playing audio, but only in some virtual machines (observed in VMware and VirtualBox)
OEM installation options are not currently supported or available, but will be included for Xubuntu 24.04.1
For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.
The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.
Support
For support with the release, navigate to Help & Support for a complete list of methods to get help.
With the work that has been done in the debian-installer/netcfg merge-proposal !9 it is possible to install a standard Debian system, using the normal Debian-Installer (d-i) mini.iso images, that will come pre-installed with Netplan and all network configuration structured in /etc/netplan/.
In this write-up I’d like to run you through a list of commands for experiencing the Netplan-enabled installation process first-hand. For now, we’ll be using a custom ISO image while waiting for the above-mentioned merge proposal to land. Furthermore, as the Debian archive is going through major transitions, builds of the “unstable” branch of d-i don’t currently work. So I implemented a small backport, producing updated netcfg and netcfg-static for Bookworm, which can be used as localudebs/ during the d-i build.
Let’s start with preparing a working directory and installing the software dependencies for our virtualized Debian system:
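A sketch of those preparation steps (the package set is an assumption for this write-up; the install line is commented out so you can run it explicitly on your own machine):

```shell
# Create a scratch directory for the VM artefacts and move into it.
mkdir -p d-i_bookworm && cd d-i_bookworm
# Virtualisation dependencies used in the following steps (uncomment to install):
#   sudo apt install qemu-system-x86 qemu-utils ovmf
```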
Next we’ll prepare a VM, by copying the EFI firmware files, preparing some persistent EFIVARs file, to boot from FS0:\EFI\debian\grubx64.efi, and create a virtual disk for our machine:
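That preparation might be sketched as follows (the firmware paths are the usual locations from the Debian/Ubuntu ovmf package, and the disk size is arbitrary):

```shell
# Copy the UEFI firmware and seed a persistent EFI variable store, then
# create an empty virtual disk for the installation target.
cp /usr/share/OVMF/OVMF_CODE_4M.fd .
cp /usr/share/OVMF/OVMF_VARS_4M.fd EFIVARS.fd
qemu-img create -f qcow2 disk.qcow2 12G
```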
Finally, let’s launch the installer using a custom preseed.cfg file, that will automatically install Netplan for us in the target system. A minimal preseed file could look like this:
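A sketch of such a preseed (the essential entry is the pkgsel/include line pulling in netplan.io; the other values are illustrative defaults):

```shell
# Minimal d-i preseed: the pkgsel/include line installs netplan.io in the
# target system so netcfg writes its configuration to /etc/netplan/.
cat > preseed.cfg <<'EOF'
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i pkgsel/include string netplan.io
d-i finish-install/reboot_in_progress note
EOF
```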
For this demo, we’re installing the full netplan.io package (incl. Python CLI), as the netplan-generator package was not yet split out as an independent binary in the Bookworm cycle. You can choose the preseed file from a set of different variants to test the different configurations:
We’re using the custom linux kernel and initrd.gz here to be able to pass the PRESEED_URL as a parameter to the kernel’s cmdline directly. Launching this VM should bring up the normal debian-installer in its netboot/gtk form:
Now you can click through the normal Debian-Installer process, using mostly default settings. Optionally, you could play around with the networking settings, to see how those get translated to /etc/netplan/ in the target system.
After you have confirmed your partitioning changes, the base system gets installed. I suggest not selecting any additional components, like desktop environments, to speed up the process.
During the final step of the installation (finish-install.d/55netcfg-copy-config) d-i will detect that Netplan was installed in the target system (due to the preseed file provided) and opt to write its network configuration to /etc/netplan/ instead of /etc/network/interfaces or /etc/NetworkManager/system-connections/.
Done! After the installation finished you can reboot into your virgin Debian Bookworm system.
To do that, quit the current Qemu process by pressing Ctrl+C, and make sure to copy over the EFIVARS.fd file that was written by grub during the installation, so Qemu can find the new system. Then reboot into the new system, not using the mini.iso image any more:
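The relaunch might look like this (flags and file names follow the earlier preparation steps and are assumptions for this sketch):

```shell
# Boot the freshly installed disk directly, reusing the EFI variable store
# written by grub during installation; no mini.iso attached this time.
qemu-system-x86_64 -M q35 -enable-kvm -m 2G \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE_4M.fd \
  -drive if=pflash,format=raw,file=EFIVARS.fd \
  -drive file=disk.qcow2,format=qcow2
```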
Finally, you can play around with your Netplan-enabled Debian system! As you will find, /etc/network/interfaces exists but is empty; it could still be used (optionally/additionally). Netplan was configured in /etc/netplan/ according to the settings given during the d-i installation process.
In our case we also installed the Netplan CLI, so we can play around with some of its features, like netplan status:
Thank you for following along with the Netplan-enabled Debian installation process, and happy hacking! If you want to learn more, join the discussion at Salsa:installer-team/netcfg and find us at GitHub:netplan.
This time we welcomed André Bação, and the conversation ran lively around the laws of thermodynamics, water-heater pans, the joy of Chinese families, tacky computers and, as could not be otherwise, smart homes. We also talked about the latest Ubuntu 24.04 beta and discovered that there are packagers called Carlão. In the coming days there will be much celebration of Freedom in the Free Software community, and we will all want to be there! 25th of April Forever, Proprietary Software Never Again!
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a bit more, since you have the option to pay whatever you want.
If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
Ubuntu MATE 23.10 is more of what you like, stable MATE Desktop on top of current Ubuntu.
This release rolls up a number of bug fixes and updates that continue to build on recent releases, where the focus has been on improving stability 🪨
Ubuntu MATE 23.10
Thank you! 🙇
I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, offering community support, actively
testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! 💚
MATE Desktop has been updated to 1.26.2 with a selection of bug fixes 🐛 and minor improvements 🩹 to associated components.
caja-rename 23.10.1-1 has been ported from Python to C.
libmatemixer 1.26.0-2+deb12u1 resolves heap corruption and application crashes when removing USB audio devices.
mate-desktop 1.26.2-1 improves portals support.
mate-notification-daemon 1.26.1-1 fixes several memory leaks.
mate-system-monitor 1.26.0-5 now picks up libexec files from /usr/libexec.
mate-session-manager 1.26.1-2 sets LIBEXECDIR to /usr/libexec/ for correct interaction with mate-system-monitor ☝️
mate-user-guide 1.26.2-1 is a new upstream release.
mate-utils 1.26.1-1 fixes several memory leaks.
Yet more AI Generated wallpaper
My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created a stunning AI-generated 🤖🧠 wallpaper for Ubuntu MATE using bleeding-edge diffusion models 🖌 The sample below is 1920x1080, but the version included in Ubuntu MATE 23.10 is 3840x2160.
Here’s what Simon has to say about the process of creating this new wallpaper for Mantic Minotaur:
Since Minotaurs are imaginary creatures, interpretations tend to vary widely. I wanted to produce an image of a powerful creature in a graphic novel style, although not gruesome like many depictions. The latest open source Stable Diffusion XL base model was trained at a higher resolution, and the difference in quality has been noticeable, particularly in better overall consistency and detail, while reducing anatomical irregularities in images. The image was produced locally using Linux and an NVIDIA A100 80GB GPU, starting from an initial text prompt and refined using img2img, inpainting and upscaling features.
Major Applications
Accompanying MATE Desktop 1.26.2 🧉 and Linux 6.5 🐧 are Firefox 118 🔥🦊,
Celluloid 0.25 🎥, Evolution 3.50 📧, LibreOffice 7.6.1 📚
See the Ubuntu 23.10 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 23.10
This new release will be first available for PC/Mac users.
You can upgrade to Ubuntu MATE 23.10 from Ubuntu MATE 23.04. Ensure that you
have all updates installed for your current version of Ubuntu MATE before you
upgrade.
Open “Software & Updates” from the Control Center.
Select the third tab, “Updates”.
Set the “Notify me of a new Ubuntu version” drop-down menu to “For any new version”.
Press Alt+F2 and type update-manager -c -d into the command box.
Update Manager should open up and tell you: New distribution release ‘23.10’ is available.
If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
Click “Upgrade” and follow the on-screen instructions.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
Feedback
Is there anything you can help with or want to be involved in? Maybe you just
want to discuss your experiences or ask the maintainers some questions. Please
come and talk to us.
Ubuntu MATE 23.04 is the least exciting Ubuntu MATE release ever. The good news is that if you liked Ubuntu MATE 22.10, this is more of the same, just with better artwork! 🖌️🖼️ I entered this development cycle full of energy and enthusiasm off the back of the Ubuntu Summit in Prague, but then I was seriously ill 🤒 and had a long stay in hospital. I’m recovering well and should be 100% in a couple of months. This setback, along with changing jobs a couple of months ago, has meant that I’ve not been able to invest the usual time and effort into Ubuntu MATE. I’m happy to say that, with the help of the Ubuntu community, I’ve still been able to deliver another solid 🪨 release.
Thank you! 🙇
I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd-funding, developing new features, creating artwork, offering community support, actively
testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! 💚
My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created some stunning AI-generated 🤖🧠 wallpapers for Ubuntu MATE using bleeding-edge diffusion models 🖌 The samples below are 1920x1080 but the versions included in Ubuntu MATE 23.04 are 3840x2160.
Here’s what Simon has to say about the process of creating these new wallpapers for Lunar Lobster:
My usual workflow involves checking Reddit, etc. for the latest techniques, and then installing the latest open-source tools and checkpoints for unlimited experimentation (e.g. Stable Diffusion), plus some selective use of DALL-E and Midjourney, while trying not to exhaust my credits. I then experiment with a lot of different prompts (including negative prompts to discourage certain features), settings, styles and ideas from each tool to see what sort of images I can get, then tweak and evolve my approach based on the results.
Lobsters are fascinating creatures, but in real life, I find them a bit ugly, with all those antennae and legs akimbo. For the theme of “Lunar Lobster”, rather than precise anatomy, I explored ideas of stylised alien robotic space lobsters, lunar landers and other lobster-themed spacecraft. After producing a shortlist of varied images, I then perform any necessary AI processing such as inpainting, outpainting (generating new parts of an image beyond the existing canvas - particularly useful for getting the correct aspect ratio) and AI upscaling to make them suitable for use as wallpaper.
As a podcaster and streamer I’m delighted to have PipeWire installed by default since Ubuntu MATE 22.10. The Ubuntu MATE meta packages have been updated to correctly install the revised pipewire packages in Ubuntu. Special thanks to Erich Eickmeyer, from the Ubuntu Studio project, for his work on this.
Major Applications
Accompanying MATE Desktop 1.26.1 🧉 and Linux 6.2 🐧 are Firefox 111 🔥🦊,
Celluloid 0.20 🎥, Evolution 3.48 📧, LibreOffice 7.5.2 📚
See the Ubuntu 23.04 Release Notes
for details of all the changes and improvements that Ubuntu MATE benefits from.
Download Ubuntu MATE 23.04
This new release will be first available for PC/Mac users.
You can upgrade to Ubuntu MATE 23.04 from Ubuntu MATE 22.10. Ensure that you
have all updates installed for your current version of Ubuntu MATE before you
upgrade.
Open “Software & Updates” from the Control Center.
Select the third tab, “Updates”.
Set the “Notify me of a new Ubuntu version” drop-down menu to “For any new version”.
Press Alt+F2 and type update-manager -c -d into the command box.
Update Manager should open up and tell you: New distribution release ‘23.04’ is available.
If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
Click “Upgrade” and follow the on-screen instructions.
There are no offline upgrade options for Ubuntu MATE. Please ensure you have
network connectivity to one of the official mirrors or to a locally accessible
mirror and follow the instructions above.
Feedback
Is there anything you can help with or want to be involved in? Maybe you just
want to discuss your experiences or ask the maintainers some questions. Please
come and talk to us.
We’re thrilled to share our latest strides in enhancing partnerships, expanding support, and advancing innovations within the Armbian ecosystem. Here’s a roundup of recent developments:
1. Strengthening Partnerships in Shenzhen: We’ve embarked on a mission to fortify our collaborations with partners in Shenzhen, aimed at fostering better support for our community. During our visit, we engaged with both existing and potential partners to deepen our ties and enhance the services we offer to you. Read more
2. Platinum Support and Giveaway for Bananapi M7: We’re excited to announce the launch of platinum support and a special giveaway for the latest Bananapi M7, a collaborative effort between Bananapi and ARMSOM. This initiative aims to provide unparalleled assistance and rewards to our valued users. Learn more
3. Expansion of Community Build Framework: Our community interaction has led to the integration of several new boards, including SakuraPi and H96 TV box, into our build framework. Additionally, we’ve upgraded u-boot on 32-bit Rockchip devices and successfully ported Khadas Edge 2 to kernel 6.1. Moreover, FriendlyElec NAS now runs on mainline-based kernels, enriching our ecosystem with more versatile options.
4. Ongoing Upgrades and Future Ventures: In the pipeline, we have several exciting upgrades underway. We’re working on upgrading the Odroid XU kernel to version 6.6 and adding support for Orangepi 5 PRO. Furthermore, we’re introducing mainline kernel support for Orangepi Zero 2W, and our team is eagerly diving into the development of the new Radxa Rock 5 ITX board and Rock 5C.
Stay tuned for more updates as we continue to elevate the Armbian experience!
We are excited to announce that our latest software version 8.2 for Proxmox Virtual Environment is now available for download. This release is based on Debian 12.5 "Bookworm" but uses a newer Linux kernel 6.8, QEMU 8.1, LXC 6.0, Ceph 18.2 and ZFS 2.2.
We have an import wizard to migrate VMware ESXi guests to Proxmox VE. The integrated VM importer is presented as a storage plugin for native integration into the API and web-based user interface. You can use this to import the VM as a whole...
We’re excited about the upcoming Ubuntu 24.04 LTS release, Noble Numbat. Like all Ubuntu releases, Ubuntu 24.04 LTS comes with 5 years of free security maintenance for the main repository. Support can be expanded for an extra 5 years, and to include the universe repository, via Ubuntu Pro. Organisations looking to keep their systems secure without needing a major upgrade can also get the Legacy Support add-on to expand that support beyond the 10 years. Combined with the enhanced security coverage provided by Ubuntu Pro and Legacy Support, Ubuntu 24.04 LTS provides a secure foundation on which to develop and deploy your applications and services in an increasingly risky environment. In this blog post, we will look at some of the enhancements and security features included in Noble Numbat, building on those available in Ubuntu 22.04 LTS.
Unprivileged user namespace restrictions
Unprivileged user namespaces are a widely used feature of the Linux kernel, providing additional security isolation for applications, and are often employed as part of a sandbox environment. They allow an application to gain additional permissions within a constrained environment, so that a more trusted part of an application can then use these additional permissions to create a more constrained sandbox environment within which less trusted parts can then be executed. A common use case is the sandboxing employed by modern web browsers, where the (trusted) application itself sets up the sandbox where it executes the untrusted web content. However, by providing these additional permissions, unprivileged user namespaces also expose additional attack surfaces within the Linux kernel. There has been a long history of (ab)use of unprivileged user namespaces to exploit various kernel vulnerabilities. The most recent interim release of Ubuntu, 23.10, introduced the ability to restrict the use of unprivileged user namespaces to only those applications which legitimately require such access. In Ubuntu 24.04 LTS, this feature has been improved both to cover additional applications, within Ubuntu and from third parties, and to provide better default semantics. In Ubuntu 24.04 LTS, the use of unprivileged user namespaces is allowed for all applications, but access to any additional permissions within the namespace is denied. This allows more applications to gracefully handle this default restriction whilst still protecting against the abuse of user namespaces to gain access to additional attack surfaces within the Linux kernel.
Binary hardening
Modern toolchains and compilers have gained many enhancements to be able to create binaries that include various defensive mechanisms. These include the ability to detect and avoid various possible buffer overflow conditions as well as the ability to take advantage of modern processor features like branch protection for additional defence against code reuse attacks.
The GNU C library, used as the cornerstone of many applications on Ubuntu, provides runtime detection of, and protection against, certain types of buffer overflow cases, as well as certain dangerous string handling operations via the use of the _FORTIFY_SOURCE macro. FORTIFY_SOURCE can be specified at various levels providing increasing security features, ranging from 0 to 3. Modern Ubuntu releases have all used FORTIFY_SOURCE=2 which provided a solid foundation by including checks on string handling functions like sprintf(), strcpy() and others to detect possible buffer overflows, as well as format-string vulnerabilities via the %n format specifier in various cases. Ubuntu 24.04 LTS enables additional security features by increasing this to FORTIFY_SOURCE=3. Level three greatly enhances the detection of possible dangerous use of a number of other common memory management functions including memmove(), memcpy(), snprintf(), vsnprintf(), strtok() and strncat(). This feature is enabled by default in the gcc compiler within Ubuntu 24.04 LTS, so that all packages in the Ubuntu archive which are compiled with gcc, or any applications compiled with gcc on Ubuntu 24.04 LTS also receive this additional protection.
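To make the jump from level 2 to level 3 concrete, here is a minimal sketch (my illustration, not code from the Ubuntu archive) of the call pattern that the change affects: the destination buffer's size is only known at run time, which level 3 can now check because it sizes the destination with `__builtin_dynamic_object_size()` rather than the purely static `__builtin_object_size()`:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch: a heap buffer whose capacity is a runtime value.
 * Built on Ubuntu 24.04 with `gcc -O2` (which implies -D_FORTIFY_SOURCE=3),
 * the memcpy() below is rewritten to the checked __memcpy_chk() variant,
 * so a copy that exceeded `cap` would abort the process instead of
 * silently corrupting adjacent heap memory. */
char *copy_message(const char *src, size_t len, size_t cap) {
    char *buf = malloc(cap);
    if (buf == NULL || len > cap) {   /* the same bound FORTIFY enforces */
        free(buf);
        return NULL;
    }
    memcpy(buf, src, len);            /* fortified at level 3 */
    return buf;
}
```

The explicit `len > cap` guard mirrors what the fortified call checks at run time; under level 2 this runtime-sized destination was invisible to the checker.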
The Armv8-A hardware architecture (provided by the “arm64” software architecture on Ubuntu) provides hardware-enforced pointer authentication and branch target identification. Pointer authentication provides the ability to detect malicious stack buffer modifications which aim to redirect pointers stored on the stack to attacker-controlled locations, whilst branch target identification is used to track certain indirect branch instructions and the possible locations which they can target. By tracking such valid locations, the processor can detect possible malicious jump-oriented programming attacks which aim to use existing indirect branches to jump to other gadgets within the code. The gcc compiler supports these features via the -mbranch-protection option. In Ubuntu 24.04 LTS, the dpkg package now enables -mbranch-protection=standard, so that all packages within the Ubuntu archive enable support for these hardware features where available.
AppArmor 4
The aforementioned unprivileged user namespace restrictions are all backed by the AppArmor mandatory access control system. AppArmor allows a system administrator to implement the principle of least authority by defining which resources an application should be granted access to and denying all others. AppArmor consists of a userspace package, which is used to define the security profiles for applications and the system, as well as the AppArmor Linux Security Module within the Linux kernel which provides enforcement of the policies. Ubuntu 24.04 LTS includes the latest AppArmor 4.0 release, providing support for many new features, such as specifying allowed network addresses and ports within the security policy (rather than just high level protocols) or various conditionals to allow more complex policy to be expressed. An exciting new development provided by AppArmor 4 in Ubuntu 24.04 LTS is the ability to defer access control decisions to a trusted userspace program. This allows for quite advanced decision making to be implemented, by taking into account the greater context available within userspace or to even interact with the user / system administrator in a real-time fashion. For example, the experimental snapd prompting feature takes advantage of this work to allow users to exercise direct control over which files a snap can access within their home directory. Finally, within the kernel, AppArmor has gained the ability to mediate access to user namespaces as well as the io_uring subsystem, both of which have historically provided additional kernel attack surfaces to malicious applications.
Disabling of old TLS versions
The use of cryptography for private communications is the backbone of the modern internet. The Transport Layer Security protocol has provided confidentiality and integrity to internet communications since it was first standardised in 1999 with TLS 1.0. This protocol has undergone various revisions since that time to introduce additional security features and avoid various security issues inherent in the earlier versions of this standard. Given the wide range of TLS versions and options supported by each, modern internet systems will use a process of auto-negotiation to select an appropriate combination of protocol version and parameters when establishing a secure communications link. In Ubuntu 24.04 LTS, TLS 1.0, 1.1 and DTLS 1.0 are all forcefully disabled (for any applications that use the underlying openssl or gnutls libraries) to ensure that users are not exposed to possible TLS downgrade attacks which could expose their sensitive information.
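On Ubuntu, this policy surfaces through the system-wide OpenSSL configuration; the stanza below is an illustrative excerpt in the style of `/etc/ssl/openssl.cnf` (the exact shipped contents may differ), showing how a minimum protocol version is pinned for every application that uses the system defaults:

```ini
# Illustrative excerpt in the style of Ubuntu's /etc/ssl/openssl.cnf:
# pinning the system-wide minimum protocol means TLS 1.0/1.1 are refused
# during negotiation, regardless of what a peer offers.
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.2
CipherString = DEFAULT:@SECLEVEL=2
```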
Upstream Kernel Security Features
Linux kernel v5.15 was used as the basis for the Linux kernel in the previous Ubuntu 22.04 LTS release. This provided a number of kernel security features including core scheduling, kernel stack randomisation and unprivileged BPF restrictions to name a few. Since that time, the upstream Linux kernel community has been busy adding additional kernel security features. Ubuntu 24.04 LTS includes the v6.8 Linux kernel which provides the following additional security features:
Intel shadow stack support
Modern Intel CPUs support an additional hardware feature aimed at preventing certain types of return-oriented programming (ROP) and other attacks that target the malicious corruption of the call stack. A shadow stack is a hardware enforced copy of the stack return address that cannot be directly modified by the CPU. When the processor returns from a function call, the return address from the stack is compared against the value from the shadow stack – if the two differ, the process is terminated to prevent a possible ROP attack. Whilst compiler support for this feature has been enabled for userspace packages since Ubuntu 19.10, it has not been able to be utilised until it was also supported by the kernel and the C library. Ubuntu 24.04 LTS includes this additional support for shadow stacks to allow this feature to be enabled when desired by setting the GLIBC_TUNABLES=glibc.cpu.hwcaps=SHSTK environment variable.
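The check itself can be sketched in software; the following is a conceptual model of what the hardware does (my illustration only, since the real mechanism lives in the CPU, the kernel, and glibc, not in application code):

```c
#include <assert.h>

/* Conceptual model of a shadow stack. On every call, the hardware pushes
 * a second, tamper-proof copy of the return address; on every return, the
 * address popped from the ordinary stack must match the shadow copy. */
#define SHADOW_DEPTH 64
static unsigned long shadow[SHADOW_DEPTH];
static int shadow_top = 0;

/* Models a function call: record the legitimate return address. */
void shadow_push(unsigned long ret_addr) {
    assert(shadow_top < SHADOW_DEPTH);
    shadow[shadow_top++] = ret_addr;
}

/* Models a function return: compare the (possibly corrupted) return
 * address from the ordinary stack against the shadow copy. Returns 1 if
 * the return is allowed, 0 if it would be blocked -- in hardware a
 * mismatch terminates the process, defeating the ROP chain. */
int shadow_return_ok(unsigned long ret_addr_from_stack) {
    assert(shadow_top > 0);
    return shadow[--shadow_top] == ret_addr_from_stack;
}
```

An intact return address passes the comparison; one overwritten by a stack smash does not, which is exactly the property the hardware enforces without any cooperation from the (possibly compromised) program.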
Secure virtualisation with AMD SEV-SNP and Intel TDX
Confidential computing represents a fundamental departure from the traditional threat model, where vulnerabilities in the complex codebase of privileged system software like the operating system, hypervisor, and firmware pose ongoing risks to the confidentiality and integrity of both code and data. Likewise, unauthorised access by a malicious cloud administrator could jeopardise the security of your virtual machine (VM) and its environment. Building on the innovation of Trusted Execution Environments at the silicon level, Ubuntu Confidential VMs aim to restore your control over the security assurances of your VMs.
For the x86 architecture, both AMD and Intel processors provide hardware features (named AMD SEV SNP and Intel TDX respectively) to support running virtual machines with memory encryption and integrity protection. They ensure that the data contained within the virtual machine is inaccessible to the hypervisor and hence the infrastructure operator. Support for using these features as a guest virtual machine was introduced in the upstream Linux kernel version 5.19.
Thanks to Ubuntu Confidential VMs, a user can make use of compute resources provided by a third party whilst maintaining the integrity and confidentiality of their data through the use of memory encryption and other features. On the public cloud, Ubuntu offers the widest portfolio of confidential VMs. These build on the innovation of both the hardware features, with offerings available across Microsoft Azure, Google Cloud and Amazon AWS.
For enterprise customers seeking to harness confidential computing within their private data centres, a fully enabled software stack is essential. This stack encompasses both the guest side (kernel and OVMF) and the host side (kernel-KVM, QEMU, and Libvirt). Currently, the host-side patches are not yet upstream. To address this, Canonical and Intel have forged a strategic collaboration to empower Ubuntu customers with an Intel-optimised TDX Ubuntu build. This offering includes all necessary guest and host patches, even those not yet merged upstream, starting with Ubuntu 23.10 and extending into 24.04 and beyond. The complete TDX software stack is accessible through this github repository.
This collaborative effort enables our customers to promptly leverage the security assurances of Intel TDX. It also serves to narrow the gap between silicon innovation and software readiness, a gap that grows as Intel continues to push the boundaries of hardware innovation with 5th Gen Intel Xeon scalable processors and beyond.
Strict compile-time bounds checking
Similar to hardening of binaries within the libraries and applications distributed in Ubuntu, the Linux kernel itself gained enhanced support for detecting possible buffer overflows at compile time via improved bounds checking of the memcpy() family of functions. Within the kernel, the FORTIFY_SOURCE macro enables various checks in memory management functions like memcpy() and memset() by checking that the size of the destination object is large enough to hold the specified amount of memory, and if not will abort the compilation process. This helps to catch various trivial memory management issues, but previously was not able to properly handle more complex cases such as when an object was embedded within a larger object. This is quite a common pattern within the kernel, and so the changes introduced in the upstream 5.18 kernel version to enumerate and fix various such cases greatly improves this feature. Now the compiler is able to detect and enforce stricter checks when performing memory operations on sub-objects to ensure that other object members are not inadvertently overwritten, avoiding an entire class of possible buffer overflow vulnerabilities within the kernel.
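A sketch of the pattern in question (my illustration, not kernel code): an array embedded inside a larger structure. Previously a fortified `memcpy()` was only checked against the size of the whole object, so an overrun of the member that stayed inside the structure went undetected; the stricter checking bounds the copy to the embedded member itself:

```c
#include <string.h>

/* An array embedded within a larger object -- a very common kernel
 * pattern. With the stricter sub-object bounds checking, a fortified
 * memcpy() into f->name is checked against sizeof(f->name) (16 bytes),
 * not sizeof(*f), so overrunning `name` into `mode` is caught. */
struct file_rec {
    char name[16];
    int  mode;
};

/* Copy a name into the record, refusing any copy that would spill into
 * the neighbouring member -- the same bound the compiler now enforces. */
int set_name(struct file_rec *f, const char *src, size_t len) {
    if (len > sizeof f->name)
        return -1;
    memcpy(f->name, src, len);
    return 0;
}
```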
Wrapping up
Overall, the vast range of security improvements that have gone into Ubuntu 24.04 LTS greatly improve on the strong foundation provided by previous Ubuntu releases, making it the most secure release to date. Additional features within the kernel and userspace, and across the distribution as a whole, combine to address entire vulnerability classes and attack surfaces. With up to 12 years of support, Ubuntu 24.04 LTS provides the best and most secure foundation to develop and deploy Linux services and applications. Expanded Security Maintenance, kernel livepatching and additional services are all provided to Ubuntu Pro subscribers to enhance the security of their Ubuntu deployments.
We hope you've had a good Easter. We've been working hard to improve OSMC for all platforms and keep things running smoothly.
We've also finalised our support for Kodi v21 and this will be the final release of Kodi v20, with test builds for Kodi v21 being made available on the forums in the coming days before an anticipated release in May.
The end of 2023 was also busy for us, with the announcement of Vero V, our fifth iteration of our flagship device. We're happy to announce significant playback improvements to the device in this update.
Vero 4K / 4K + and V users will now experience perfect AV sync playback after several months of hard work.
Vero V users can now enjoy Dolby Vision compatible Profile 5 tonemapping with output to HDR and SDR. If you've ever played content that looks magenta and green, this is because it doesn't have a fallback layer. Vero V will now tonemap this and output it in the best possible format for your display.
This is the first step in our efforts to support UHD content and Dolby Vision content provided by streaming services such as Netflix which does not have a fallback layer. Many thanks to those that tested this on our forums and reported positive feedback.
Here's what's new:
Kodi v20.5
Kodi v20.5 (Nexus) is now available as standard on OSMC, and release details can be found here.
On the OSMC side, we've made some changes to keep everything running smoothly. Here's what's new:
Bug fixes
Fix a network issue in My OSMC
Vero 4K / 4K +: fix an issue where using the 'toothpick' recovery method can render an existing installation unbootable
My OSMC: fix an issue which could cause a build up of backup files in the Kodi user data directory
Fixed CEC issues on Vero 4K / 4K +
Fixed a number of issues with the OSMC skin
Improving the user experience
Vero V: added Dolby Vision Profile 5 colourspace conversion support. Previously playing content that was Profile 5 (did not have a fallback layer) would result in purple and green colours
Improved CPU governor performance on all devices
Improved PTS handling on Vero 4K / 4K + and Vero V
Improved playback performance and synchronisation on Vero 4K/4K+/V
My OSMC: Changing settings for updates; backup or restore no longer requires a reboot to take effect.
Vero V: improved Bluetooth range and performance when using A2DP audio
Vero 4K / 4K + / V: improve support for VESA reduced blanking modes used, e.g., by Dell monitors
Miscellaneous
Vero 4K/4K+/V: add support for specific Ortek keyboard
Updated translations
Wrap up
To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC set up. Of course, if you have updates scheduled automatically, you should receive an update notification shortly.
If you enjoy OSMC, please follow us on X, like us on Facebook and consider making a donation if you would like to support further development.
You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the best of OSMC.
Vero V is our latest and greatest flagship and the best way to enjoy OSMC. To celebrate the significant milestones in this update, we're offering Vero V at a discount for a limited period of time. Grab yours today
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To […]
Did you say Idaho Potatoes? Earlier this week, I was at a wedding, talking to a friend. She asked if I’d ever been on a cruise – I have, once, years ago, and I proceeded to tell her the story of a couple from Idaho that we ate dinner with every night on the cruise. […]
Discover how IBM Cloud’s bare metal servers offer highly confined and high-performing single-tenant cloud isolation through the use of Ubuntu Core and Snaps, supported by the AMD Pensando Elba DPU (Data Processing Unit). This setup enables the creation of secure and efficient environments for each tenant. Its design ensures the total separation of their servers from the cloud underlay. The architecture delivers consistent performance and enables non intrusive control from the cloud provider. Learn how this innovative solution can benefit your business and enhance your cloud infrastructure.
Introduction
Public cloud bare-metal servers offer dedicated physical resources, but can present isolation and performance challenges. Isolation requirements involve maintaining full control of compute capabilities by the tenant, while preserving the backend management of its infrastructure by the cloud provider and preventing unauthorised access. Performance requirements entail providing consistent performance even under heavy workloads. Cloud providers face challenges in ensuring physical and logical isolation, resource allocation, monitoring, management, scalability, and security. To address these complex requirements, providers must invest in advanced technologies and implement best practices for resource allocation, monitoring, and management. They also need to regularly review and update infrastructure to meet tenant needs.
In the following discussion, we will explore how IBM Cloud is addressing these challenges by harnessing the distinctive capabilities of Ubuntu Core and Snaps deployed on the AMD Pensando Elba infrastructure accelerators.
IBM Cloud Bare Metal Servers for VPC
IBM has always been dedicated to keeping clients’ essential data secure through a strong focus on resilience, performance, and compliance. IBM Cloud executes that focus within highly regulated industries such as finance and insurance organisations. Given IBM Cloud’s long-standing commitment to data security, it is unsurprising and essential that Bare Metal Servers for VPC (VPC BM) implements the most rigorous security guarantees to meet customers’ expectations.
Bare metal servers, which are physical servers dedicated to a single tenant, offer benefits such as high performance and customizability, but managing them in a multi-tenant environment can be complex. A key requirement is ensuring isolation between the tenant and the cloud backend, both to maintain security and to prevent performance issues caused by noisy neighbours.
VPC BM allows customers to select a preset server profile that best matches their workloads to help accelerate the deployment of compute resources. Customers can achieve maximum performance without oversubscription, with servers deployed in 10 minutes.
VPC BM servers are powered by the latest technology. They are built for cloud-enterprise applications, including VMware and SAP, and can also support HPC and IoT workloads. They come with enhanced high-performance networking at 100 Gbps as well as advanced security features.
A network orchestration layer handles the networking for all bare metal servers that are within an IBM Cloud VPC across regions and zones. This allows for management and creation of multiple, virtual private clouds in multi zone regions and also improves security, reduces latency, and increases high availability.
“I selected IBM Cloud VPC because of 5 points that I thought and was proven correct based on my experience using the service. First is security. Secondly is agility. The third is isolation. Fourth is the high performance. Fifth, and last, is the scalability.”
Ivo Draginov, CEO, BatchService
AMD Pensando DSC2-200 “Elba”
In use with some of the largest cloud providers and hyperscalers on the planet, the AMD Pensando DSC2-200 has proven itself as the platform of choice for cloud providers seeking to optimise performance, increase scale and introduce new infrastructure services at the speed of software. The DSC2-200 is a full-height, half-length PCIe card powered by “Elba”, AMD Pensando’s 2nd-generation DPU. The DSC2-200 is the ideal platform for cloud providers to implement multi-tenant SDN, stateful security, storage, encryption and telemetry at line rate. The platform’s scale architecture allows cloud providers to offer multiple services on the same DPU card.
Developers can create customised data plane services that target 400G throughput, microsecond-level latencies, and scale to tens of millions of flows. The heart of the AMD Pensando platform is a fully programmable P4 data processing unit (DPU). High-level programming languages (P4, C) enable rapid development and deployment of new features and services.
The innovative design of the AMD Pensando DPU provides a secure air gap between tenants’ compute instances and the cloud infrastructure, as well as secure isolation between tenants. This separation enables cloud operators to manage their infrastructure functions efficiently and independently of their tenants’ workloads, while freeing up valuable compute resources from infrastructure tasks and fully dedicating them to revenue-generating business applications. The exceptional throughput and performance of the Elba DSC2-200, along with its strong alignment with IBM’s security expectations, made it a top choice for inclusion in IBM Cloud’s bare metal servers for VPC. This combination of features enables IBM Cloud to provide highly secure and powerful environments for its customers.
Achieving IBM Cloud’s target outcomes with Ubuntu Core and Snaps
The first goal was to implement a secure and reliable operating system that IBM Cloud development teams could use to launch their management interface and functionality on the AMD Pensando DPU cards. Initially IBM Cloud selected Ubuntu Server as the operating system. They were familiar with it and could easily develop on top of it using the familiar Linux toolset and API.
To develop software running on the AMD Pensando DPU cards, the development kit provides a complete container-based development environment. It allows for the development of data plane, management plane, and control plane functions. To perform correctly, these containers must be allowed direct communication with the card’s hardware components, with fine-grained isolation. Traditional container runtimes such as Docker and Kubernetes alone cannot meet the unique requirements of this solution. Fortunately, Snap packages provide this access through secure and controlled interfaces to the operating system.
Using Snap packages, IBM Cloud developers were able to implement all the functionalities they needed in record time. This positive experience made them turn their attention to Ubuntu Core, the version of Ubuntu specifically designed for embedded systems such as AMD Pensando DPU cards. It is entirely made up of Snap packages, creating a confined, immutable and transaction-based system. Communication among containers and between containers and the operating system is locked down under full control. In addition, Ubuntu Core provides full disk encryption and secure boot, achieving additional mandatory security compliance objectives.
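As an illustration of how a strictly confined snap can be granted controlled hardware and network access, a minimal snapcraft.yaml might declare interface plugs like the following (the snap name, app, and interface choices here are hypothetical sketches, not IBM Cloud’s actual configuration):

```yaml
name: dpu-agent            # hypothetical snap name
base: core22
version: '0.1'
summary: Example management agent for a DPU card
description: Illustrative only; not IBM Cloud's actual package.
confinement: strict        # strictly confined; access only via declared plugs

apps:
  agent:
    command: bin/agent
    plugs:
      - network                # ordinary network access
      - network-control        # configure interfaces, routes, firewall
      - hardware-observe       # read-only visibility of hardware details
      - kernel-module-control  # load/unload drivers the card may need
```

Each plug must be connected (automatically via store declarations, or manually with `snap connect`) before the confined process can reach the corresponding system resource.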
IBM Cloud successfully converted their bespoke AMD Pensando system image from Ubuntu Server to Ubuntu Core and, after positive results in the pre-production tests, proceeded to deploy it in production to support Bare Metal Servers on VPC.
Conclusion
In summary, Canonical’s Ubuntu Core and IBM Cloud’s components, packaged as Snaps, provide a unique solution that effectively addresses the challenges IBM Cloud faced. This innovative approach has enabled IBM Cloud to enhance its offerings and deliver improved performance, security, and tenant isolation. Development of the solution was completed in under a year, and it has been operating successfully in production since then. Ultimately, addressing these challenges gave IBM Cloud several advantages, including differentiation, cost savings, and improved efficiency.
The collaboration between IBM Cloud, Canonical, and AMD Pensando remains ongoing, with plans to expand the use of Ubuntu Core and Snaps to support other non-bare metal offerings, including Virtual Server for VPC. A key medium-term goal is to achieve FedRAMP compliance, which involves upgrading to Ubuntu Core 22 and ensuring FIPS compliance at the kernel and filesystem levels. This ongoing partnership and development aim to enhance the security, performance, and functionality of IBM Cloud’s solutions.
Welcome to Purism, a different type of technology company. We believe you should have technology that does not spy on you. We believe you should have complete control over your digital life. We advocate for personal privacy, cyber security, and individual freedoms. We sell hardware, develop software, and provide services according to these beliefs. To […]
At Canonical, we’re committed to open-source principles and fostering collaboration. Over the last 20 years, Ubuntu’s brand has become a leader in open source, with an open operating system. Our community shapes Ubuntu’s journey, and we recognise room for improvement in how we collaborate, particularly in design at Canonical. Despite most of our development being open source, our design processes often lack transparency, particularly in visuals, user interaction, and research.
We are excited to announce that we kickstarted a working group within the Design team with a mission to empower external designers to contribute to open-source projects. Our focus is on building resources that bridge the gap between designers and open-source project maintainers, making it easier for designers to dive into projects and for maintainers to receive valuable design contributions and feedback.
Before we figure out how to support you, we’re checking out ongoing Open Design initiatives and understanding the needs, motivations, and interests of designers and project maintainers. We’re learning tons along the way and prioritising ideas on how to move forward!
As we kick things off, your input would be invaluable in shaping our efforts. Therefore, we are inviting open source maintainers and designers to participate in this survey. Your input will provide valuable insights and help us ensure we’re on the right track.
DISA, the Defense Information Systems Agency, has published their Security Technical Implementation Guide (STIG) for Ubuntu 22.04 LTS. The STIG is free for the public to download from the DOD Cyber Exchange. Canonical has been working with DISA since we published Ubuntu 22.04 LTS to draft this STIG, and we are delighted that it is now finalised and available for everyone to use.
We are now developing the Ubuntu Security Guide profile with a target release in summer 2024.
What is a STIG?
A STIG is a set of guidelines for how to configure an application or system in order to harden it. Hardening means reducing the system’s attack surface: removing unnecessary software packages, locking down default values to the tightest possible settings and configuring the system to run only what you explicitly require. System hardening guidelines also seek to lessen collateral damage in the event of a compromise.
STIGs are intended to be applied with judgement and common sense. Each mission or deployment is going to be different: where a piece of guidance doesn’t make sense for your specific needs, you can choose your own path forward whilst keeping the overall intentions of the STIG in mind.
The STIGs have been primarily developed for use within the US Department of Defense. However, because they are based on universally-recognised security principles, they can be used by anyone who wants a robust system hardening framework. As a result, STIGs are being more widely adopted across the US government and numerous industries, such as financial services and online gaming.
When will Canonical publish a DISA-STIG USG profile?
The STIG that DISA has published is primarily composed of a manual XCCDF XML document that describes in human-readable words how to configure Ubuntu 22.04 LTS. This XML file contains nearly 200 individual pieces of guidance, which can be quite a daunting prospect to tackle from scratch. To simplify this process, Canonical produces the Ubuntu Security Guide (USG), an automation tool that handles both the checking and remediation of the STIG rules. USG is available as part of Ubuntu Pro, and can be enabled through the Pro client.
Our engineering team is currently working through the XCCDF document and codifying the rules into a new profile for USG. We will publish the STIG profile for USG in the coming months, with a target release in summer 2024, and will make an announcement at that time.
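Once released, the workflow should look much like today’s USG profiles on earlier LTS releases; the following is a sketch only, and the profile name `disa_stig` is an assumption until the 22.04 profile actually ships:

```
# On an Ubuntu Pro-attached 22.04 machine (token is a placeholder):
sudo pro attach <your-pro-token>
sudo apt install -y usg

# Audit the system against the STIG profile and generate a report:
sudo usg audit disa_stig

# Apply automated remediations, then re-audit to confirm:
sudo usg fix disa_stig
sudo usg audit disa_stig
```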
Conclusion
The STIG for Ubuntu 22.04 LTS will allow any users or administrators to harden their systems in accordance with this rigorous standard. Doing this by hand is a time-consuming proposition, so we recommend waiting until automated tooling is available to speed up the hardening and auditing process; the USG profile is in active development and will be published as soon as it’s ready.
Update disable_sudo_use_pty: negate it explicitly, not just comment it out. This should avoid distortion of gpm with jfbterm. Thanks to ottokang for reporting this issue.
On the occasion of Wikicon Portugal 2024 in Évora, we fell into a gravitational anomaly in the fabric of space-time and travelled to Quinta Dimensão, a Cosmodrome of Ideas and Raspberry Pi depot. We were welcomed by João Bacelar (a human being of the same species as Carl Sagan), who orbits a luminosity class V star (spectral type G) at a speed of 30 km per second; a sound technician, multimedia creator, programmer, amateur astronomer, radio amateur and Free Software enthusiast. We had a pleasant conversation with him in a bucolic, rural setting, surrounded by assorted animals, bathed in ultraviolet radiation and neutrinos, beneath a 7 km layer of nitrogen and oxygen at a pressure of 1014.7 hectopascals.
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
And you can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want.
If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.
MLflow is an open source platform, used for managing machine learning workflows. It was launched back in 2018 and has grown in popularity ever since, reaching 10 million users in November 2022. AI enthusiasts and professionals have struggled with experiment tracking, model management and code reproducibility, so when MLflow was launched, it addressed pressing problems in the market. MLflow is lightweight and able to run on an average-priced machine. But it also integrates with more complex tools, so it’s ideal to run AI at scale.
A short history
Since MLflow was first released in June 2018, the community behind it has run a recurring survey to better understand user needs and ensure the roadmap addresses real-life challenges. About a year after the launch, MLflow 1.0 was released, introducing features such as improved metric visualisations, metric x-coordinates, improved search functionality and HDFS support. Additionally, it offered Python, Java, R, and REST API stability.
MLflow 2.0 landed in November 2022, when the product also celebrated 10 million users. This version incorporates extensive community feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as improved integrations with the ML ecosystem, a revamped MLflow Tracking UI, a refresh of core APIs across MLflow’s platform components, and much more.
In September 2023, Canonical released Charmed MLflow, a distribution of the upstream project.
Why use MLflow?
MLflow is often considered the most popular ML platform. It enables users to perform different activities, including:
Reproducing results: ML projects usually start with simple plans but tend to sprawl, resulting in an overwhelming number of experiments. Manual or non-automated tracking means a high chance of missing finer details. ML pipelines are fragile, and even a single missing element can throw off the results. The inability to reproduce results and code is one of the top challenges for ML teams.
Easy to get started: MLflow can be easily deployed and does not require heavy hardware to run. It is suitable for beginners who are looking for a solution to better see and manage their models. For example, this video shows how Charmed MLflow can be installed in less than 5 minutes.
Environment agnostic: The flexibility of MLflow across libraries and languages is possible because it can be accessed through a REST API and Command Line Interface (CLI). Python, R, and Java APIs are also available for convenience.
Integrations: While MLflow is popular in itself, it does not work in a silo. It integrates seamlessly with leading open source tools and frameworks such as Spark, Kubeflow, PyTorch or TensorFlow.
Works anywhere: MLflow runs on any environment, including hybrid or multi-cloud scenarios, and on any Kubernetes.
MLflow is an end-to-end platform for managing the machine learning lifecycle. It has four primary components:
MLflow Tracking
MLflow Tracking enables you to track experiments, with the primary goal of comparing results and the parameters used. It is crucial when it comes to measuring performance, as well as reproducing results. Tracked parameters include metrics, hyperparameters, features and other artefacts that can be stored on local systems or remote servers.
MLflow Models
MLflow Models provide professionals with different formats for packaging their models. This gives flexibility in where models can be used, as well as the format in which they will be consumed. It encourages portability across platforms and simplifies the management of the machine learning models.
MLflow Projects
Machine learning projects are packaged using MLflow Projects. It ensures reusability, reproducibility and portability. A project is a directory that is used to give structure to the ML initiative. It contains the descriptor file used to define the project structure and all its dependencies. The more complex a project is, the more dependencies it has. They come with risks when it comes to version compatibility as well as upgrades.
MLflow Projects are especially useful when running ML at scale, with larger teams and multiple models being built at the same time. They enable collaboration between team members who want to work jointly on a project, transfer knowledge between one another, or promote work to production environments.
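A project is described by an MLproject file at the directory root; a minimal example (file names and parameters here are illustrative) might look like:

```yaml
# MLproject — illustrative project descriptor
name: demo-project
conda_env: conda.yaml        # dependencies pinned in an accompanying file
entry_points:
  main:
    parameters:
      learning_rate: {type: float, default: 0.01}
    command: "python train.py --lr {learning_rate}"
```

Running `mlflow run .` in that directory would resolve the declared environment and execute the `main` entry point with the given parameters, which is what makes the project reproducible on another machine.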
MLflow Model Registry
Model Registry enables you to have a centralised place where ML models are stored. It helps with simplifying model management throughout the full lifecycle and how it transitions between different stages. It includes capabilities such as versioning and annotating, and provides APIs and a UI.
Key concepts of MLflow
MLflow is built around two key concepts: runs and experiments.
In MLflow, each execution of your ML model code is referred to as a run. All runs are associated with an experiment.
An MLflow experiment is the primary unit of organisation for MLflow runs. It determines how runs are organised, accessed and maintained. An experiment contains multiple runs, and it enables you to efficiently go through those runs and perform activities such as visualisation, search and comparison. In addition, experiments let you export run artefacts and metadata for analysis in other tools.
Kubeflow vs MLflow
Both Kubeflow and MLflow are open source solutions designed for the machine learning landscape. Both have received massive support from industry leaders, and both are driven by thriving communities whose contributions are making a difference in the development of the projects. The main purpose of both Kubeflow and MLflow is to create a collaborative environment for data scientists and machine learning engineers, and to enable teams to develop and deploy machine learning models in a scalable, portable and reproducible manner.
However, comparing Kubeflow and MLflow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects have evolved over time and now have overlapping features, but most importantly, they have different strengths. On the one hand, Kubeflow excels at machine learning workflow automation, using pipelines, as well as model development. On the other hand, MLflow is great for experiment tracking and model registry. From a user perspective, MLflow requires fewer resources and is easier for beginners to deploy and use, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects.
Charmed MLflow is Canonical’s distribution of the upstream project. It is part of Canonical’s growing MLOps portfolio. It has all the features of the upstream project, to which we add enterprise-grade capabilities such as:
Simplified deployment: the time to deployment is less than 5 minutes, enabling users to also upgrade their tools seamlessly.
Simplified upgrades using our guides.
Automated security scanning: the bundle is scanned at a regular cadence.
Security patching: Charmed MLflow follows Canonical’s process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project, and the risk of exploitation.
Maintained images: All Charmed MLflow images are actively maintained.
Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.
Clouds, be they private or public, surprisingly remain one of the most DIY-favouring markets. Perhaps because of these nebulous and increasingly powerful technologies, a series of myths, or even unnecessary egos, the majority of non-tech-centric enterprises (meaning companies whose primary business scope lies outside the realm of IT software and hardware) still try to build and nurture in-house cloud management teams without considering outsourcing even part of their workload. Self-management has its advantages; however, thinking it’s the only option is a mistake. Reading this, you may think: “managed cloud services are for lazy people, I can do it myself.” And the truth is, you indeed can. But should you?
Cloud operations
Let’s be honest: building a cloud is no easy feat. It is not for beginners, and involves a large series of considerations: is it large enough? Secure enough? Efficient enough? Does it justify the cost? So having made your way through this maze of questions and having finally concluded that you want to move towards a cloud deployment, the last thing you need is another set of considerations for operating it.
Operations can be a vague term. In the tech/cloud field, it defines the entire range of actions and activities required to keep any cloud infrastructure running consistently, reliably, and efficiently. Briefly, good operations make sure your cloud does what it’s supposed to do most of the time and does not significantly disrupt your business processes when errors happen. While different from cloud to cloud, most operations can be classified into three categories:
Monitoring – constant measurements of key metrics against a predefined schema to ensure functionality
Management – tweaks and changes to the infrastructure, such as upgrades, patches, and scaling, to ensure reliability
Troubleshooting – a system of protocols and procedures that keeps your workloads safe and ensures minimum data loss when incidents happen
This may sound complicated, and in many ways it is. As an industry rule of thumb, for every 100 nodes in any cloud deployment, you will need at least one expert to ensure that proper operations are in place. This matters because improper operations can cause significant disruption to your entire business, from inaccurate data to major errors in processes and performance. Put briefly, cloud operations cannot be neglected.
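As a toy illustration (not a real staffing model), the one-expert-per-100-nodes rule of thumb can be expressed as:

```python
import math

def ops_headcount(nodes: int, nodes_per_expert: int = 100) -> int:
    """Minimum operations experts under the 1-per-100-nodes rule of thumb."""
    return max(1, math.ceil(nodes / nodes_per_expert))

# A 250-node private cloud would need at least 3 dedicated experts:
print(ops_headcount(250))  # → 3
```

Even this crude estimate makes the indirect cost visible: headcount grows roughly linearly with cloud size, before accounting for hiring, training, and retention.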
The cost of self-managed clouds
Regardless of how big or small or simple or complex your infrastructure is, there is a range of costs that you are likely to incur when it comes to operating your cloud. These can be:
Direct – These are costs directly associated with the deployment and operation of your cloud, such as hardware purchases and maintenance, software licences, service subscriptions and more. They are relatively predictable and will allow you to budget quite easily ahead of time, but do allow a margin of +/- 10% when estimating, as the integration of components within the wider infrastructure can sometimes incur additional service costs.
Indirect – When it comes to indirect costs, the definition’s boundaries become more blurry. In general, an indirect cost is any cost that, when neglected or denied, significantly reduces the reliability, efficiency, or even mere availability of your cloud. For example, IT headcount is a significant indirect cost: it will cost you money to hire, train, retain, and grow a team of experts to manage your infrastructure, and these costs will only be augmented by the ongoing skill gap the market is currently experiencing. The opportunity costs of having people work on operations rather than innovation can range from negligible to severe, as time-to-market is an essential component for maintaining a competitive edge in any industry.
Indirect costs are highly unpredictable and involve a significant level of corporate responsibility should you choose to do everything yourself. Suppose you’ve hired your team and trained them: at any point, engineers can leave or require additional training; sometimes their talent will be needed to sustain other technical feats within your business; and sometimes things will simply go slower than expected. It’s not impossible to navigate these indirect costs. Just note that while doing so has some advantages – like full independence and more freedom to allocate resources – it carries increased risks of financial losses and slower time to market.
In light of these costs, a general observation (or unwritten market consensus) is that tech-centric companies will likely be able to self-manage their clouds successfully. Non-tech-centric companies are likely to encounter a point where managed cloud services would present a more feasible and competitive opportunity.
When to opt for managed cloud services
Before discussing when to opt for managed cloud services, let’s take a moment to clarify what they entail. Opting for Managed Cloud Services involves outsourcing your cloud infrastructure operations to an external expert, also known as a Managed Service Provider (MSP). You’ll ideally be able to relinquish all your operational concerns (along with responsibility for the efficiency of your operations) to the MSP, and focus on innovation or whatever else really matters for you.
There is a pervasive myth that managed cloud services are only a useful option when your company finds itself unable to manage anything by itself, or when you simply don’t have an IT team. Nothing could be further from the truth. There are several situations where choosing a managed cloud service provider can prove both helpful and lucrative:
Vertical growth – When you want to expand into a new territory, it is unlikely that you will have access to a well-established senior expertise within your IT team. This in turn can be expensive to acquire, and will need plenty of time to adjust to your company’s values and processes. Choosing an MSP to support you and enable you to grow vertically as soon as you want can help you accelerate your time to market and cut talent acquisition costs.
Re-focus – You probably already have an IT team, and you are probably very happy with it. But when it comes to their bandwidth, you may want to have them focus on sustaining technological innovation for your competitive advantage, rather than spending most of their time keeping the lights on in your cloud infrastructure. A managed cloud service will help offer your team enough headspace to concentrate on your primary business scope.
Cost predictability – Faced with a new project, it would be wise and appropriate to estimate your costs. But cloud infrastructure, as mentioned above, can incur a lot of unexpected costs, especially when it comes to covering a skill gap and mitigating for lost opportunities. A managed service provider should offer a stable and predictable price (usually per node per year), which can give you full control over your budgets and allow you to allocate resources more efficiently.
When venturing into unfamiliar territory, opting for managed services is advisable – especially for non-tech-centric enterprises. Cloud infrastructure operations is a perfect example: a highly complex and resource-intensive set of processes that is essential to your business success, but detrimental to your costs if improperly self-managed. For any non-tech-centric enterprise looking to enter, expand, or upgrade its open-source cloud infrastructure, Managed Cloud Services are an attractive opportunity that offers numerous advantages and can help you retain (or even augment) your competitive edge.
Recently, OS2ATC, an annual technology event in the field of open source operating systems, was held in Beijing. Many industry technologists and experts gathered to share the latest technical achievements and innovative ideas in the fields of AI and systems, hardware, kernels, RISC-V architecture, ARM architecture, Longxin architecture, programming technology, Rust, intelligent vehicles and robotics, AIoT, cloud native, virtualisation and more, focusing on the topic of "Open Source Intelligence". The Open Source Operating System Annual Technical Conference (OS2ATC) has now been held for ten consecutive years, and has been effective in promoting the development of OS-related teaching, research and industry.
Hello, Community! We haven't had clear guidelines on when and how to create suitable tasks in our Phorge development portal for a long time. As the community grows, we need such guidelines more and more to ensure that tasks get resolved. We are also changing our policy of never closing tasks for inactivity, although we will still close them only under particular and limited circumstances.
At Hannover Messe, the world’s leading industrial trade fair, companies from the mechanical engineering, electrical engineering and digital industries, as well as the energy sector, come together to present solutions for a high-performance but also sustainable industry. This year, Qualcomm brought its DX Summit to Hannover Messe, bringing together business and technology leaders to discuss the digital transformation solutions and experiences that are moving enterprises forward today, from manufacturing to logistics, transportation, energy and more.
Canonical will join the Qualcomm DX Summit at Hannover Messe on April 23rd, 2024, where industry experts will delve into the cutting-edge technologies driving Industry 4.0 forward. We’re looking forward to meeting our partners and customers on-site to discuss the latest in open-source innovation and solutions for edge AI. Fill in the form to get a free ticket for the Qualcomm DX Summit and Hannover Messe from Canonical.
Secure and scale your smart edge AI deployments with Ubuntu
During the event, Canonical will present a talk built around a real-world case study to showcase our joint offering with Qualcomm and illustrate how Canonical solutions help enterprise IoT customers bring digital transformation and AI to their latest IoT projects.
Presenter: Aniket Ponkshe, Director of Silicon Alliances, Canonical
Date and time: 2:20 pm – 2:40 pm, April 23rd, 2024
On April 15th, 2024, in its Top 10 Most Secure Mobile Phones to Buy in 2024, the cybersecurity-focused firm Efani ranked the Purism Librem 5 as the #1 most secure phone for 2024. From their article: Factors to Consider When Evaluating Smartphone Security When assessing the security of smartphones, several crucial aspects must […]
Ubuntu Budgie 24.04 LTS (Noble Numbat) is a Long Term Support release with 3 years of support by your distro maintainers, from April 2024 to May 2027. These release notes showcase the key takeaways for 22.04 upgraders to 24.04. In these release notes the areas covered are: Quarter & half tiling is pretty much self-explanatory. Dragging a window to the…
March 2024 was another eventful month for vulnerabilities and cybersecurity in general. It was the second consecutive month of lapsed Common Vulnerability Exposure (CVE) enrichment putting defenders in a precarious position with reduced risk visibility. The Linux kernel continued its elevated pace of vulnerability disclosures and was commissioned as a new CVE Numbering Authority (CNA). In addition, several critical vulnerabilities were added to CISA’s Known Exploited Vulnerabilities (KEV) list including Microsoft Windows, Fortinet FortiClientEMS, all the major browsers, and enterprise Continuous Integration And Delivery software vendor JetBrains.
Here’s a quick review of March 2024’s most impactful cybersecurity events.
The NIST NVD Disruption
NIST’s National Vulnerability Database (NVD) team largely abandoned CVE enrichment in February 2024 with no warning. The NVD slowed to a CVE enrichment rate of just over 5% during March, and it became obvious that the abrupt halt was not just a short-term outage. Disruption of CVE enrichment puts cybersecurity operations around the world at a significant disadvantage because the NVD is the largest centralized repository of vulnerability severity information. Without severity enrichment, cybersecurity admins are left with very little information for vulnerability prioritization and risk-management decision making.
Experts in the cybersecurity community traded public speculation until the VulnCon & Annual CNA Summit, where NIST’s Tanya Brewer announced that the non-regulatory US government agency would relinquish some aspects of NVD management to an industry consortium. Brewer did not explain the exact cause of the outage, but forecasted several additional goals for the NIST NVD moving forward:
Allowing more outside parties to submit enrichment data
Improving the NVD’s software identification capabilities
Adding new types of threat intelligence data such as EPSS and the NIST Bugs Framework
Improving the NVD data’s usability and supporting new use cases
Automating some aspects of CVE analysis
Plenty Going On “In The Linux Kernel”
A total of 259 CVEs were disclosed in March 2024 with a description that began with “In the Linux kernel”, marking the second most active month ever for Linux vulnerability disclosures. The all-time record was set one month prior, in February, with a total of 279 CVEs issued. March also marked a new milestone for kernel.org, the maintainer of the Linux kernel, as it was inducted as a CVE Numbering Authority (CNA). Kernel.org will now assume the role of assigning and enriching CVEs that impact the Linux kernel. Going forward, kernel.org asserts that CVEs will only be issued for discovered vulnerabilities after a fix is available, and only for versions of the Linux kernel that are actively supported.
Multiple High Severity Vulnerabilities In Fortinet Products
Several High severity vulnerabilities in Fortinet FortiOS and FortiClientEMS were disclosed. Of these, CVE-2023-48788 has been added to CISA’s KEV database. The risk imposed by CVE-2023-48788 is further compounded by the existence of a publicly available proof-of-concept (PoC) exploit. While CVE-2023-48788 is notably an SQL Injection [CWE-89] vulnerability, it can be exploited in tandem with the xp_cmdshell function of Microsoft SQL Server for remote code execution (RCE). Even when xp_cmdshell is not enabled by default, researchers have shown that it can be enabled via the SQL Injection weakness.
Greenbone has a network vulnerability test (NVT) that can identify systems affected by CVE-2023-48788, local security checks (LSCs) [1][2] that can identify systems affected by CVE-2023-42790 and CVE-2023-42789, and another LSC to identify systems affected by CVE-2023-36554. A proof-of-concept exploit for CVE-2023-3655 has been posted to GitHub.
CVE-2023-48788 (CVSS 9.8 Critical): A SQL Injection vulnerability allowing an attacker to execute unauthorized code or commands via specially crafted packets in Fortinet FortiClientEMS version 7.2.0 through 7.2.2.
CVE-2023-42789 (CVSS 9.8 Critical): An out-of-bounds write in Fortinet FortiOS allows an attacker to execute unauthorized code or commands via specially crafted HTTP requests. Affected products include FortiOS 7.4.0 through 7.4.1, 7.2.0 through 7.2.5, 7.0.0 through 7.0.12, 6.4.0 through 6.4.14, 6.2.0 through 6.2.15, FortiProxy 7.4.0, 7.2.0 through 7.2.6, 7.0.0 through 7.0.12, 2.0.0 through 2.0.13.
CVE-2023-42790 (CVSS 8.1 High): A stack-based buffer overflow in Fortinet FortiOS allows an attacker to execute unauthorized code or commands via specially crafted HTTP requests. Affected products include FortiOS 7.4.0 through 7.4.1, 7.2.0 through 7.2.5, 7.0.0 through 7.0.12, 6.4.0 through 6.4.14, 6.2.0 through 6.2.15, FortiProxy 7.4.0, 7.2.0 through 7.2.6, 7.0.0 through 7.0.12, 2.0.0 through 2.0.13.
CVE-2023-36554 (CVSS 9.8 Critical): FortiManager is prone to an improper access control vulnerability in backup and restore features that can allow attackers to execute unauthorized code or commands via specially crafted HTTP requests. Affected products are FortiManager version 7.4.0, version 7.2.0 through 7.2.3, version 7.0.0 through 7.0.10, version 6.4.0 through 6.4.13 and 6.2, all versions.
Zero Days In All Major Browsers
Pwn2Own, an exciting hacking competition, took place at the CanSecWest security conference on March 20th–22nd. At this year’s event, 29 distinct zero-days were discovered, and over one million dollars in prize money was awarded to security researchers. Independent entrant Manfred Paul earned a total of $202,500, including $100,000 for two zero-day sandbox escape vulnerabilities in Mozilla Firefox. Mozilla quickly issued updates to Firefox with version 124.0.1.
Manfred Paul also achieved remote code execution (RCE) in Apple’s Safari by combining Pointer Authentication Code (PAC) [D3-PAN] bypass and integer underflow [CWE-191] zero-days. PACs in Apple’s operating systems are cryptographic signatures for verifying the integrity of pointers to prevent the exploitation of memory corruption bugs. PAC has been bypassed before for RCE in Safari. Manfred defeated Google Chrome and Microsoft Edge via an Improper Validation of Specified Quantity in Input [CWE-1284] vulnerability to complete the browser exploit trifecta.
The fact that all major browsers were breached underscores the high risk of visiting untrusted Internet sites and the overall lack of security provided by major browser vendors. Greenbone includes tests to identify vulnerable versions of Firefox and Chrome.
CVE-2024-29943 (CVSS 10 Critical): An attacker was able to exploit Firefox via an out-of-bounds read or write on a JavaScript object by fooling range-based bounds check elimination. This vulnerability affects versions of Firefox before 124.0.1.
CVE-2024-29944 (CVSS 10 Critical): Firefox incorrectly handled Message Manager listeners allowing an attacker to inject an event handler into a privileged object to execute arbitrary code.
CVE-2024-2887 (High Severity): A type confusion [CWE-843] vulnerability in the Chromium browser’s implementation of WebAssembly (Wasm).
New Actively Exploited Microsoft Vulnerabilities
Microsoft’s March 2024 security advisory included a total of 61 vulnerabilities impacting many products. The Windows kernel had the most CVEs disclosed with a total of eight, five of which are rated high severity. Microsoft WDAC OLE DB provider for SQL, Windows ODBC Driver, SQL Server, and Microsoft WDAC ODBC Driver combined to account for ten high severity CVEs. There are no workarounds for any vulnerabilities in the group, meaning that updates must be applied to all affected products. Greenbone includes vulnerability tests to detect the newly disclosed vulnerabilities from Microsoft’s March 2024 security advisory.
Microsoft has so far tagged six of its new March 2024 vulnerabilities as “Exploitation More Likely”, while two vulnerabilities affecting Microsoft products were added to the CISA KEV list: CVE-2023-29360 (CVSS 8.4 High), affecting Microsoft Streaming Service and published in 2023, and CVE-2024-21338 (CVSS 7.8 High); both were assigned actively exploited status in March.
CVE-2024-27198: Critical Severity CVE In JetBrains TeamCity
TeamCity is a popular continuous integration and continuous delivery (CI/CD) server developed by JetBrains, the same company behind other widely-used development tools like IntelliJ IDEA, the leading Kotlin Integrated Development Environment (IDE), and PyCharm, an IDE for Python. TeamCity is designed to help software development teams automate and streamline their build, test, and deployment processes and competes with other CI/CD platforms such as Jenkins, GitLab CI/CD, Travis CI, and Azure DevOps, among others. TeamCity is estimated to hold almost 6% of the total Continuous Integration And Delivery market share and ranks third overall, while according to JetBrains, over 15.9 million developers use their products, including 90 of the Fortune Global Top 100 companies.
Given JetBrains’ market position, a critical severity vulnerability in one of their products will quickly attract the attention of threat actors. Within three days of CVE-2024-27198 being published, it was added to the CISA KEV catalog. The Greenbone Enterprise vulnerability feed includes tests to identify affected products, including a version check and an active check that sends a crafted HTTP GET request and analyzes the response.
When combined, CVE-2024-27198 (CVSS 9.8 Critical) and CVE-2024-27199 allow an attacker to bypass authentication using an alternative path or channel [CWE-288] to read protected files including those outside of the restricted directory [CWE-23] and perform limited admin actions.
Summary
March 2024 was another fever-pitched month for software vulnerabilities due to the NIST NVD outage and active exploitation of several vulnerabilities in enterprise and consumer software products. On the bright side, several zero-day vulnerabilities impacting all major browsers were identified and patched.
However, the fact that a single researcher was able to so quickly exploit all major browsers is a serious wake-up call for all organizations, since the browser plays such a fundamental role in modern enterprise operations. Vulnerability management remains a core element of cybersecurity strategy, and regularly scanning IT infrastructure for vulnerabilities ensures that the latest threats can be identified for remediation – closing the gaps that attackers seek to exploit for access to critical systems and data.
Have you ever faced the challenge of ensuring certain user properties, like usernames or email addresses, remain off-limits for future accounts after deleting a user? The new blocklist feature in Univention Corporate Server Version 5.0-6-erratum-974 is the solution. This article takes a closer look at UDM blocklists.
A Quick Look at the Basics
Blocklists are an essential tool for administrators, enabling them to proactively prevent the reuse of user or group properties. Imagine keeping previously used values like email addresses or usernames locked for a set duration. This function becomes a cornerstone in larger UCS environments, where the cycle of creating and deleting accounts is a regular affair.
So, what exactly are user or group properties? We’re talking about crucial details such as the username (username), first and last names (firstname, lastname), the password (password), and, importantly, the primary email address of a user account (mailPrimaryAddress), along with the email address associated with a group (mailAddress).
You can place any of these properties on one or more blocklists to prevent their reuse. Picture this scenario: in your organization, there’s an employee named Anna Alster with the email a.alster@organisation.de. When Anna leaves the company, her email address, along with her user account, is deleted. Fast forward a few weeks, and a new colleague, Anita Alster, joins the team. According to company policy, she’s assigned the same email address: a.alster@organisation.de. This could lead to an uncomfortable situation where Anita might access Anna’s “old” emails.
With the introduction of the new blocklists in the Univention Directory Manager (UDM), you can avert such scenarios with ease. Administrators have the power to specify in advance which properties are off-limits for reuse and for how long. Once set, the system seamlessly handles the rest.
This article presents the new feature in detail, guiding you through the steps to create, edit, and delete these blocklists. Whether you prefer the intuitive Univention Management Console (UMC) or the command-line agility of the udm tool, managing these lists is straightforward and efficient.
How to activate Blocklists and configure the Cron Job
To use the new blocklists, start by updating all UCS systems where you manage UDM objects. It’s crucial to have the latest UCS version, 5.0-6-erratum-974, running on all your machines. Don’t forget to install any available package updates for each computer too. Conveniently, both these tasks can be effortlessly completed through the Software Update module in the Univention Management Console.
Next, edit the necessary UCR variable. Navigate to the System / Univention Configuration Registry module and look for the directory/manager/blocklist/enabled entry. Change this variable to true and then save your changes.
After activating the blocklists, the next step is to set a duration for each. This duration determines how long each block remains effective. Once the specified period expires, the system automatically clears the entries from the blocklist. This removal process is managed by a script, triggered by a cron job every morning at 8 a.m. If you need to adjust this timing, simply edit the UCR variable directory/manager/blocklist/cleanup/cron and input the desired time in crontab syntax in the Value field.
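Both settings can also be made from the shell with the ucr tool. A sketch (the cron time shown is purely illustrative):

```shell
# Enable UDM blocklists, and move the daily cleanup job to 6 a.m.
ucr set directory/manager/blocklist/enabled='true'
ucr set directory/manager/blocklist/cleanup/cron='0 6 * * *'
```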
The next two sections will guide you through configuring the blocklists yourself. We’ll cover two methods—once via the Univention Management Console and once on the command line.
Configuring Blocklists via UMC
To manage your blocklists, start by accessing the Domain / Blocklists module. This is your hub for creating new blocklists, as well as editing or deleting existing ones. To initiate a new list, simply click on Add. For this new blocklist, you’ll need to make some key entries:
Name: Choose an easily identifiable name for your blocklist. A descriptive, unique name is best, especially if you’ll be managing multiple blocklists.
Retention time for objects in this blocklist: In this field, specify the length of time the block should remain in effect. This duration is critical; once it is surpassed, the system automatically removes the entry from the blocklist. Use time units like y (years), m (months), and d (days) to define this period. For example, entering 2y3m1d keeps an entry blocked for 2 years, 3 months, and 1 day.
In the Properties to block section, your task is to specify which properties need to be locked from reuse. This is where you identify the UDM modules and their corresponding properties. For instance, if you aim to block the reuse of primary email addresses for user accounts, simply enter users/user in the UDM module field and mailPrimaryAddress as the property.
If you need to block additional properties, simply click the plus sign located just below the input fields. This allows you to add more modules and their respective properties to the same blocklist. For example, to block an email address used by a group, add groups/group as the module and mailAddress as the property.
Once you’ve configured the blocklist to your needs, click Save to finalize your changes. Remember, the Domain / Blocklists module in UMC isn’t just for creating new lists. You can return to this module anytime to make adjustments or delete existing blocklists.
Configuring Blocklists via Command Line
For those who prefer working outside the web interface, the Univention Directory Manager (UDM) offers a powerful command-line alternative to manage blocklists. Known as univention-directory-manager, or simply udm, this tool requires root privileges for operation. One of the key advantages here is that both UMC modules and UDM provide access to the same domain administration modules. This means you get the same functionality through the command line as you would in the web interface. To explore the range of capabilities and options available, just type udm --help. This command brings up a comprehensive list of all supported parameters and options.
When managing blocklists via the command line, use the command udm blocklists/list along with its subcommands to efficiently handle different tasks. These subcommands include:
create: Creates a new blocklist.
modify: Make changes to an existing blocklist.
remove: Delete a blocklist.
list: View all the blocklists that currently exist.
To create a new blocklist that excludes a username from reuse for one year, you’ll need to define several parameters in your udm blocklists/list command. Start with a name for the list using --set name=, followed by the time period for the block with --set retentionTime=, and then specify the UDM module and property with --append blockingProperties=. Enclose any expressions with spaces and special characters in double quotation marks. Thus, the complete udm command to achieve this would look as follows:
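Sketched out (the list name here is illustrative, and the exact value syntax for blockingProperties should be verified against the UDM documentation):

```shell
udm blocklists/list create \
  --set name="blocked-usernames" \
  --set retentionTime="1y" \
  --append blockingProperties="users/user username"
```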
When you list the existing blocklists, you’ll see not only this newly created list but also all entries that have been made through the Univention Management Console.
To delete a blocklist on the command line, use the remove command, the --filter name= parameter, and enter the list’s name:
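For example (again with an illustrative list name):

```shell
udm blocklists/list remove --filter name="blocked-usernames"
```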
Keep in mind, if the list name contains special characters or spaces, it’s important to enclose it in double quotation marks.
Test Run: User Name Reuse Strictly Prohibited!
If you attempt to assign a user property that’s currently on a blocklist, the system will promptly notify you. The image below illustrates this: it shows an attempt to create an account with the name hej. However, this action is prevented by an existing blocklist that restricts the use of already assigned usernames for one year:
Effortless and Intelligent Administration Made Easy
The new UDM blocklists are an invaluable asset for user administration. They equip administrators with a robust tool to effectively manage the reuse of sensitive user properties, including email addresses and usernames. This feature plays a crucial role in minimizing potential mix-ups and enhancing security.
The latest version of the Sparky CLI Installer provides a few changes, such as an added autopartitioning option, making set-up of the target system a little faster:
– autopartitioning of a selected disk – removes all data on the chosen disk
– automatically creates and formats 3 partitions: root, swap and EFI, if required
– requires a minimum of 15 GB of disk space
– no root password (sudo in use only)
– installs GRUB to the MBR with a 5-second (default) delay
– autologin enabled by default if a desktop is installed
It is available for Sparky testing (8) only.
To get and test it, launch the latest Sparky ISO image 2024.02 and update the installer to version >= 202404014: sudo apt update
sudo apt install sparky-backup-core
Then launch the CLI installer as usual: sudo sparky-installer
Years ago, at what I think I remember was DebConf 15, I hacked for a while
on debhelper to
write build-ids to debian binary control files,
so that the build-id (more specifically, the ELF note
.note.gnu.build-id) wound up in the Debian apt archive metadata.
I’ve always thought this was super cool, and seeing as how Michael Stapelberg
blogged
some great pointers around the ecosystem, including the fancy new debuginfod
service, and the
find-dbgsym-packages
helper, which uses these same headers, I don’t think I’m the only one.
At work I’ve been using a lot of rust,
specifically, async rust using tokio. To try and work on
my style, and to dig deeper into the how and why of the decisions made in these
frameworks, I’ve decided to hack up a project that I’ve wanted to do ever
since 2015 – write a debug filesystem. Let’s get to it.
Back to the Future
Time to admit something. I really love Plan 9. It’s
just so good. So many ideas from Plan 9 are just so prescient, and everything
just feels right. Not just right like, feels good – like, correct. The
bit that I’ve always liked the most is 9p, the network protocol for serving
a filesystem over a network. This leads to all sorts of fun programs, like the
Plan 9 ftp client being a 9p server – you mount the ftp server and access
files like any other files. It’s kinda like if fuse were more fully a part
of how the operating system worked, but fuse is all running client-side. With
9p there’s a single client, and different servers that you can connect to,
which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.
The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9
in terms of adoption – 9p is in all sorts of places folks don’t usually expect.
For instance, the Windows Subsystem for Linux uses the 9p protocol to share
files between Windows and Linux. ChromeOS uses it to share files with Crostini,
and qemu uses 9p (virtio-p9) to share files between guest and host. If you’re
noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol
to exchange files between hypervisor and guest. Why? I have no idea, except maybe
that it’s well designed, simple to implement, and makes it a lot easier to validate
the data being shared and to enforce security boundaries. Simplicity has its value.
As a result, there’s a lot of lingering 9p support kicking around. Turns out
Linux can even handle mounting 9p filesystems out of the box. This means that I
can deploy a filesystem to my LAN or my localhost by running a process on top
of a computer that needs nothing special, and mount it over the network on an
unmodified machine – unlike fuse, where you’d need client-specific software
to run in order to mount the directory. For instance, let’s mount a 9p
filesystem running on my localhost machine, serving requests on 127.0.0.1:564
(tcp) that goes by the name “mountpointname” to /mnt.
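Using the 9p client built into Linux, such a mount can be sketched along these lines (option names per the kernel’s v9fs documentation; adjust the address and aname for your server):

```shell
sudo mount -t 9p -o trans=tcp,port=564,version=9p2000.u,aname=mountpointname 127.0.0.1 /mnt
```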
Linux will mount away, and attach to the filesystem as the root user, and by default,
attach to that mountpoint again for each local user that attempts to use
it. Nifty, right? I think so. The server is able
to keep track of per-user access and authorization
along with the host OS.
WHEREIN I STYX WITH IT
Since I wanted to push myself a bit more with rust and tokio specifically,
I opted to implement the whole stack myself, without third party libraries on
the critical path where I could avoid it. The 9p protocol (sometimes called
Styx, the original name for it) is incredibly simple. It’s a series of client
to server requests, which receive a server to client response. These are,
respectively, “T” messages, which transmit a request to the server, which
trigger an “R” message in response (Reply messages). These messages are
TLV payload
with a very straightforward structure – so straightforward, in fact, that I
was able to implement a working server off nothing more than a handful of man
pages.
Later on after the basics worked, I found a more complete
spec page
that contains more information about the
unix specific variant
that I opted to use (9P2000.u rather than 9P2000) due to the level
of Linux specific support for the 9P2000.u variant over the 9P2000
protocol.
MR ROBOTO
The backend stack over at zoo is rust and tokio
running i/o for an HTTP and WebRTC server. I figured I’d pick something
fairly similar to write my filesystem with, since 9P can be implemented
on basically anything with I/O. That means tokio tcp server bits, which
construct and use a 9p server, which has an idiomatic Rusty API that
partially abstracts the raw R and T messages, but not so much as to
cause issues with hiding implementation possibilities. At each abstraction
level, there’s an escape hatch – allowing someone to implement any of
the layers if required. I called this framework
arigato which can be found over on
docs.rs and
crates.io.
/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}
/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` at the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}
Thanks, decade ago paultag!
Let’s do it! Let’s use arigato to implement a 9p filesystem we’ll call
debugfs that will serve all the debug
files shipped according to the Packages metadata from the apt archive. We’ll
fetch the Packages file and construct a filesystem based on the reported
Build-Id entries. For those who don’t know much about how an apt repo
works, here’s the 2-second crash course on what we’re doing. The first is to
fetch the Packages file, which is specific to a binary architecture (such as
amd64, arm64 or riscv64). That architecture is specific to a
component (such as main, contrib or non-free). That component is
specific to a suite, such as stable, unstable or any of its aliases
(bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for
the unstable-debug suite, main component, for all amd64 binaries.
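One way to fetch and skim it, assuming the standard deb.debian.org mirror layout for the debian-debug archive:

```shell
curl -s http://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | xz -d | head -n 20
```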
This will return the Debian-style
rfc2822-like headers,
which is an export of the metadata contained inside each .deb file which
apt (or other tools that can use the apt repo format) use to fetch
information about debs. Let’s take a look at the debug headers for the
netlabel-tools package in unstable – which is a package named
netlabel-tools-dbgsym in unstable-debug.
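An abbreviated, illustrative stanza for that package might look like this (the Build-Ids and Filename values are consistent with the examples later in this post; the exact pool path is an assumption):

```
Package: netlabel-tools-dbgsym
Auto-Built-Package: debug-symbols
Version: 0.30.0-1+b1
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
```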
So here, we can parse the package headers in the Packages.xz file, and store,
for each Build-Id, the Filename where we can fetch the .deb at. Each
.deb contains a number of files – but we’re only really interested in the
files inside the .deb located at or under /usr/lib/debug/.build-id/,
which you can find in debugfs under
rfc822.rs. It’s
crude, and very single-purpose, but I’m feeling a bit lazy.
Who needs dpkg?!
For folks who haven’t seen it yet, a .deb file is a special type of
.ar file, that contains (usually)
three files inside – debian-binary, control.tar.xz and data.tar.xz.
The core of an .ar file is a fixed size (60 byte) entry header,
followed by the specified size number of bytes.
[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...
First up was to implement a basic ar parser in
ar.rs. Before we get
into using it to parse a deb, as a quick diversion, let’s break apart a .deb
file by hand – something that is a bit of a rite of passage (or at least it
used to be? I’m getting old) during the Debian nm (new member) process, to take
a look at where exactly the .debug file lives inside the .deb file.
$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
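For a flavor of what parsing that fixed 60-byte entry header involves, here’s a hypothetical, heavily simplified sketch (not the actual ar.rs): it validates the trailing header magic and pulls out the space-padded name and size fields.

```rust
/// A parsed ar entry header (simplified: only the fields we care about).
#[derive(Debug)]
struct ArHeader {
    name: String,
    size: u64,
}

/// Parse one 60-byte ar entry header. Field layout: name[0..16],
/// mtime[16..28], owner[28..34], group[34..40], mode[40..48],
/// size[48..58], and the two magic bytes 0x60 0x0A at [58..60].
fn parse_header(buf: &[u8; 60]) -> Option<ArHeader> {
    if &buf[58..60] != b"`\n" {
        return None; // every entry header must end with "`\n"
    }
    let name = std::str::from_utf8(&buf[0..16]).ok()?.trim_end().to_string();
    let size = std::str::from_utf8(&buf[48..58]).ok()?.trim_end().parse().ok()?;
    Some(ArHeader { name, size })
}

fn main() {
    // Build, by hand, the header for a 4-byte "debian-binary" member.
    let mut raw = [b' '; 60];
    raw[0..13].copy_from_slice(b"debian-binary");
    raw[48..49].copy_from_slice(b"4");
    raw[58..60].copy_from_slice(b"`\n");
    let hdr = parse_header(&raw).expect("valid header");
    println!("{} ({} bytes)", hdr.name, hdr.size); // → debian-binary (4 bytes)
}
```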
Since we know quite a bit about the structure of a .deb file, and I had to
implement support from scratch anyway, I opted to implement a (very!) basic
debfile parser using HTTP Range requests. HTTP Range requests, if supported by
the server (denoted by a accept-ranges: bytes HTTP header in response to an
HTTP HEAD request to that file) means that we can add a header such as
range: bytes=8-68 to specifically request that the returned GET body be the
byte range provided (in the above case, the bytes starting from byte offset 8
until byte offset 68). This means we can fetch just the ar file entry from
the .deb file until we get to the file inside the .deb we are interested in
(in our case, the data.tar.xz file) – at which point we can request the body
of that file with a final range request. I wound up writing a struct to
handle a read_at-style API surface in
hrange.rs, which
we can pair with ar.rs above and start to find our data in the .deb remotely
without downloading and unpacking the .deb at all.
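In curl terms, the probing looks something like this (with $DEB_URL standing in for a .deb’s full pool URL):

```shell
# Does the server support byte ranges? Look for "accept-ranges: bytes".
curl -sI "$DEB_URL" | grep -i accept-ranges
# Fetch just the first ar entry header, skipping the 8-byte file magic.
curl -s -H "range: bytes=8-68" "$DEB_URL" | xxd | head
```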
After we have the body of the data.tar.xz coming back through the HTTP
response, we get to pipe it through an xz decompressor (this kinda sucked in
Rust, since a tokio AsyncRead is not the same as an http Body response is
not the same as std::io::Read, is not the same as an async (or sync)
Iterator is not the same as what the xz2 crate expects; leading me to read
blocks of data to a buffer and stuff them through the decoder by looping over
the buffer for each lzma2 packet in a loop), and tarfile parser (similarly
troublesome). From there we get to iterate over all entries in the tarfile,
stopping when we reach our file of interest. Since we can’t seek, but gdb
needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory
and pass a handle to it back to the user.
I was originally hoping to avoid transferring the whole tar file over the
network (and therefore also reading the whole debug file into ram, which
objectively sucks), but quickly hit issues with figuring out a way around
seeking around an xz file. What’s interesting is xz has a great primitive
to solve this specific problem (specifically, use a block size that allows you
to seek to the block as close to your desired seek position just before it,
only discarding at most block size - 1 bytes), but data.tar.xz files
generated by dpkg appear to have a single mega-huge block for the whole file.
I don’t know why I would have expected any different, in retrospect. That means
that this now devolves into the base case of “How do I seek around an lzma2
compressed data stream”; which is a lot more complex of a question.
Thankfully, notoriously brilliant tianon was
nice enough to introduce me to Jon Johnson
who did something super similar – adapted a technique to seek inside a
compressed gzip file, which lets his service
oci.dag.dev
seek through Docker container images super fast based on some prior work
such as soci-snapshotter, gztool, and
zran.c.
He also pulled this party trick off for apk based distros
over at apk.dag.dev, which seems apropos.
Jon was nice enough to publish a lot of his work on this specifically in a
central place under the name “targz”
on his GitHub, which has been a ton of fun to read through.
The gist is that, by dumping the decompressor’s state (window of previous
bytes, in-memory data derived from the last N-1 bytes) at specific
“checkpoints” along with the compressed data stream offset in bytes and
decompressed offset in bytes, one can seek to that checkpoint in the compressed
stream and pick up where you left off – creating a similar “block” mechanism
against the wishes of gzip. It means you’d need to do an O(n) run over the
file, but every request after that will be sped up according to the number
of checkpoints you’ve taken.
Given the complexity of xz and lzma2, I don’t think this is possible
for me at the moment – especially given most of the files I’ll be requesting
will not be loaded from again – especially when I can “just” cache the debug
header by Build-Id. I want to implement this (because I’m generally curious
and Jon has a way of getting someone excited about compression schemes, which
is not a sentence I thought I’d ever say out loud), but for now I’m going to
move on without this optimization. Such a shame, since it kills a lot of the
work that went into seeking around the .deb file in the first place, given
the debian-binary and control.tar.gz members are so small.
The Good
First, the good news right? It works! That’s pretty cool. I’m positive
my younger self would be amused and happy to see this working; as is
current day paultag. Let’s take debugfs out for a spin! First, we need
to mount the filesystem. It even works on an entirely unmodified, stock
Debian box on my LAN, which is huge. Let’s take it for a spin:
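A mount invocation consistent with the options visible in the mount output below (sketched; substitute your own server’s address):

```shell
sudo mount -t 9p -o trans=tcp,port=564,version=9p2000.u,aname=unstable-debug,access=user \
  192.168.0.2 /usr/lib/debug/.build-id
```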
And, let’s prove to ourselves that this actually mounted before we go
trying to use it:
$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)
Slick. We’ve got an open connection to the server, where our host
will keep a connection alive as root, attached to the filesystem provided
in aname. Let’s take a look at it.
$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff
Outstanding. Let’s try using gdb to debug a binary that was provided by
the Debian archive, and see if it’ll load the ELF by build-id from the
right .deb in the unstable-debug suite:
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Yes! Yes it will!
$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped
The Bad
Linux’s support for 9p is mainline, which is great, but it’s not robust.
Network issues or server restarts will wedge the mountpoint (Linux can’t
reconnect when the tcp connection breaks), and things that work fine on local
filesystems get translated in a way that causes a lot of network chatter – for
instance, just due to the way the syscalls are translated, doing an ls will
result in a stat call for each file in the directory, even though Linux had
just received a stat entry for every file while it was resolving directory names.
On top of that, Linux will serialize all I/O with the server, so there’s no
concurrent requests for file information, writes, or reads pending at the same
time to the server; and read and write throughput will degrade as latency
increases due to increasing round-trip time, even though there are offsets
included in the read and write calls. It works well enough, but is
frustrating to run up against, since there’s not a lot you can do server-side
to help with this beyond implementing the 9P2000.L variant (which, maybe is
worth it).
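The cost of that serialization is easy to ballpark: with only one request in flight at a time, sequential throughput cannot exceed the payload carried per request divided by the network round-trip time. A rough sketch — the numbers here are illustrative, not measurements of any particular setup:

```rust
/// With the client keeping a single request outstanding, sequential
/// read throughput is bounded by payload-per-request / round-trip time.
/// Back-of-the-envelope model; it ignores server service time.
fn max_throughput_bytes_per_sec(payload_bytes: f64, rtt_seconds: f64) -> f64 {
    payload_bytes / rtt_seconds
}

fn main() {
    // e.g. 64 KiB of payload per read over a 10 ms round trip:
    let bps = max_throughput_bytes_per_sec(64.0 * 1024.0, 0.010);
    println!("~{:.1} MiB/s", bps / (1024.0 * 1024.0));
}
```

Doubling the latency halves the ceiling, which is why throughput falls off with round-trip time no matter how fast the server is.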
The Ugly
Unfortunately, we don’t know a file’s size until we’ve actually opened the
underlying tar file and found the correct member, so for most files we
don’t know the real size to report when answering a stat. We can’t parse the
tarfiles on every stat call either, since that would make ls even slower
(bummer). The hiccup is that when I report a file size of zero, gdb throws
a bit of a fit; let’s try with a size of 0 to start:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]
This obviously won’t work, since gdb throws away all our hard work because
of stat’s output, and loading the real size of the underlying file won’t
work either. That only leaves hardcoding a file size and hoping nothing else
breaks significantly as a result. Let’s try it again:
$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)
Much better. I mean, terrible but better. Better for now, anyway.
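The workaround above can be sketched like this: stat replies carry a fixed, generously large placeholder size, and the real end of file is signalled by a short read. The struct and the placeholder value here are hypothetical illustrations, not debugfs's actual code:

```rust
/// Hypothetical placeholder reported by stat before the tar member has
/// been parsed; ~954M in `ls -lah`, matching the output above (the
/// exact value used by debugfs is a guess).
const PLACEHOLDER_SIZE: u64 = 1_000_000_000;

struct LazyFile {
    /// Real bytes, only known once the tar member has been parsed.
    content: Vec<u8>,
}

impl LazyFile {
    /// What a Tgetattr/stat reply reports: a fixed fake size.
    fn stat_size(&self) -> u64 {
        PLACEHOLDER_SIZE
    }

    /// Serve a read at `offset`; a short (or empty) reply marks EOF,
    /// regardless of the size that stat claimed.
    fn read(&self, offset: usize, count: usize) -> &[u8] {
        let start = offset.min(self.content.len());
        let end = (offset + count).min(self.content.len());
        &self.content[start..end]
    }
}

fn main() {
    let f = LazyFile { content: b"ELF...debug_info...".to_vec() };
    println!("stat size: {}", f.stat_size());
    println!("first bytes: {:?}", f.read(0, 3));
}
```

Readers like gdb that consume the file sequentially stop at the short read; anything that trusts the stat size (a checksummer, say) would be in for a surprise, which is why this is "terrible but better".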
Kilroy was here
Do I think this is a particularly good idea? I mean; kinda. I’m probably going
to make some fun arigato-based filesystems for use around my LAN, but I
don’t think I’ll be moving to use debugfs until I can figure out how to
make the connection more resilient to changing networks and server restarts,
and fix the I/O performance. I think it was a useful exercise and a pretty
great hack, but I don’t think this’ll be shipping anywhere anytime soon.
Along with publishing this post, I’ve pushed up all my repos, so you
should be able to play along at home! There’s a lot more work to be done
on arigato, but it does handshake and successfully export a working
9P2000.u filesystem. Check it out on my GitHub at
arigato,
debugfs
and also on crates.io
and docs.rs.
At least I can say I was here and I got it working after all these years.
It has been a very busy couple of weeks as we worked through some major transitions and a security fix that required a rebuild of the $world. I am happy to report that, against all odds, we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze, I have already begun pushing our fixes for known issues today. A big one being our new branding! Very exciting times in the Kubuntu world.
In the snap world, I will be using my free time to start knocking out KDE applications (not covered by the project). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!
Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !
Test Kubuntu 24.04 Beta and Experience Innovation with KubuQA!
We’re thrilled to announce the availability of the Kubuntu 24.04 Beta! This release is packed with new features and enhancements, and we’re inviting you, our valued community, to join us in fine-tuning this exciting new version. Whether you’re a seasoned tester or new to software testing, your feedback is crucial to making Kubuntu 24.04 the best it can be.
To make your testing journey as easy as pie, we’re introducing a fantastic new tool: KubuQA. Designed with both new and experienced users in mind, KubuQA simplifies the testing process by automating the download, VirtualBox setup, and configuration steps. Now, everyone can participate in testing Kubuntu with ease!
This beta release also debuts our fresh new branding, artwork, and wallpapers—created and chosen by our own community through recent branding and wallpaper contests. These additions reflect the spirit and creativity of the Kubuntu family, and we can’t wait for you to see them.
Get Testing
By participating in the beta testing of Kubuntu 24.04, you’re not just helping improve the software; you’re becoming an integral part of a global community that values open collaboration and innovation. Your contributions help us identify and fix issues, ensuring Kubuntu remains a high-quality, stable, and user-friendly Linux distribution.
The benefits of joining our testing team extend beyond improving the software. You’ll gain valuable experience, meet like-minded individuals, and perhaps discover a new passion in the world of open-source software.
So why wait? Download the Kubuntu 24.04 Beta today, try out KubuQA, or follow our wiki to upgrade and help us make Kubuntu better than ever! Remember, your feedback is the key to our success.
Ready to make an impact?
Join us in this exciting phase of development and see your ideas come to life in Kubuntu. Plus, enjoy the satisfaction of knowing that you’ve contributed to a project used by millions around the world. Become a tester today and be part of something big!
Interested in more than testing?
By the way, have you thought about becoming a member of the Kubuntu Community? It’s a fantastic way to contribute more actively and help shape the future of Kubuntu. Learn more about joining the community.
The Ubuntu team is pleased to announce the Beta release of the Ubuntu 24.04 LTS Desktop, Server, and Cloud products.
Ubuntu 24.04 LTS, codenamed “Noble Numbat”, continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been very hard at work through this cycle, introducing new features and fixing bugs.
This Beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity and Xubuntu flavors.
The Beta images are known to be reasonably free of showstopper image build or installer bugs, while providing a very recent snapshot of Ubuntu 24.04 LTS that should be representative of the features intended to ship with the final release, expected on April 25, 2024.
Ubuntu, Ubuntu Server, Cloud Images:
Noble Beta includes updated versions of most of our core set of packages, including a current 6.8 kernel, and much more.
To upgrade to Ubuntu 24.04 LTS Beta from Ubuntu 23.10 or Ubuntu 22.04 LTS, follow these instructions:
As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20240411 or higher) should be considered a Beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.
The full release notes for Ubuntu 24.04 LTS Beta can be found at:
Lubuntu is a flavor of Ubuntu which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu base.
Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.
Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit
The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 24.04 LTS, codenamed “Noble Numbat”.
While this beta is reasonably free of any showstopper installer bugs, you will find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 24.04 is released on April 25, 2024.
Special Notes
The Ubuntu Studio 24.04 LTS disk image (ISO) exceeds 4 GB, so it cannot be stored on some file systems such as FAT32 and will not fit on a single-layer DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick from the ISO image or burning it to a dual-layer DVD.
Full updated information, including Upgrade Instructions, is available in the Release Notes.
Please note that upgrading before the release of 24.04.1, due in August 2024, is unsupported.
New Features This Release
PipeWire continues to improve with every release and is now robust enough for professional and prosumer use. Version 1.0.4.
Ubuntu Studio Installer‘s included Ubuntu Studio Audio Configuration utility, for fine-tuning the PipeWire setup or changing the configuration altogether, now includes the ability to create or remove a dummy audio device. Version 1.9.
Major Package Upgrades
Ardour version 8.4.0
Qtractor version 0.9.39
OBS Studio version 30.0.2
Audacity version 3.4.2
digiKam version 8.2.0
Kdenlive version 23.08.5
Krita version 5.2.2
There are many other improvements, too numerous to list here. We encourage you to look around the freely-downloadable ISO image.
Known Issues
Ubuntu Studio’s classic PulseAudio-JACK configuration cannot be used on Ubuntu Desktop (GNOME) due to a known issue with the ubuntu-desktop metapackage. (LP: #2033440)
We now discourage the use of the aforementioned classic PulseAudio-JACK configuration, as PulseAudio is gradually being deprecated in favor of PipeWire. Advanced users can disable PipeWire’s JACK configuration and use JACK2 via QJackCTL instead.
Due to the Ubuntu repositories being in-flux following the time_t transition and xz-utils security issue resolution, some items in the repository are uninstallable or causing other packaging conflicts. The Ubuntu Release Team is working around the clock to help resolve these issues, so patience is required.
Additionally, we need financial contributions. Our project lead, Erich Eickmeyer, is working long hours on this project and trying to generate a part-time income. See this post as to the reasons why and go here to see how you can contribute financially (options are also in the sidebar).
Frequently Asked Questions
Q: Does Ubuntu Studio contain snaps? A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to no longer distribute Firefox as a native .deb package. We have found that, after numerous improvements, the Firefox snap now performs just as well as the native .deb package did.
Thunderbird has become a snap this cycle in order for the maintainers to get security patches delivered faster.
Additionally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system that can create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.
Q: If I install this Beta release, will I have to reinstall when the final release comes out? A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.
Q: Will you make an ISO with {my favorite desktop environment}? A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio – which does *not* convert that flavor to Ubuntu Studio but adds its benefits.
Q: What if I don’t want all these packages installed on my machine? A: Simply use the Ubuntu Studio Installer to remove the features of Ubuntu Studio you don’t want or need!
We are happy to announce the Beta release for Lubuntu Noble (what will become 24.04 LTS)! What makes this cycle unique? Lubuntu is a lightweight flavor of Ubuntu, based on LXQt and built for you. As an official flavor, we benefit from Canonical’s infrastructure and assistance, in addition to the support and enthusiasm from the […]
In March, we made many minor improvements to existing features. Still, there are some significant ones: many new services are now available for configuration sync, drive encryption via LUKS with TPM support, and a new command to trigger the commit archive manually.
After experts noticed a rapid increase in cyberattacks on local authorities and government agencies in 2023, the horror stories don’t stop in 2024. The pressure to act is enormous, as the EU’s NIS2 Directive will come into force in October and makes risk and vulnerability management mandatory.
“The threat level is higher than ever,” said Claudia Plattner, President of the German Federal Office for Information Security (BSI), at Bitkom in early March. The question is not whether an attack will be successful, but only when. The BSI’s annual reports, for example the most recent report from 2023, also speak volumes in this regard. However, according to Plattner, it is striking how often local authorities, hospitals and other public institutions are at the centre of attacks. There is “not a problem with measures but with implementation in companies and authorities”, said Plattner. One thing is clear: vulnerability management such as Greenbone’s can provide protection and help to avoid the worst.
US authorities infiltrated by Chinese hackers
In view of the numerous serious security incidents, vulnerability management is becoming more important every year. Almost 70 new security vulnerabilities have been disclosed every day in recent months. Some of them opened the door for attackers deep inside US authorities, as reported in the Greenbone Enterprise Blog:
According to the media, US authorities have been infiltrated by Chinese hacker groups such as the probably state-sponsored “Volt Typhoon” for years via serious security gaps. The fact that Volt Typhoon and similar groups are a major problem was even confirmed by Microsoft itself in a blog back in May 2023. But that’s not all: German media reported that Volt Typhoon is taking advantage of the abundant vulnerabilities in VPN gateways and routers from FortiNet, Ivanti, Netgear, Citrix and Cisco. These are currently considered to be particularly vulnerable.
The fact that the quasi-monopolist in office software, groupware, operating systems and various cloud services also had to admit in 2023 that the master key for large parts of its Microsoft cloud had been stolen destroyed trust in the Redmond software manufacturer in many places. Anyone who has this key no longer needs a backdoor into Microsoft systems. Chinese hackers are also suspected in this case.
Software manufacturers and suppliers
The software supply chain has been under particular scrutiny by manufacturers and users, and not only since Log4j or the European Cyber Resilience Act. The recent attack on the XZ compression utilities in Linux also shows how vulnerable manufacturers are. In the case of the “#xzbackdoor”, a combination of pure coincidence and the work of Andres Freund, a German open source developer at Microsoft with a strong focus on performance, prevented the worst from happening.
An abyss opened up here: it was only thanks to open source development and a joint effort by the community that it came to light that the actors had, for years, been using changing fake names across various accounts, with considerable criminal intent and with methods otherwise more typical of intelligence services. With little or no user history, they used sophisticated social engineering, exploited the notorious overload of maintainers, and gained the trust of freelance developers. This enabled them to introduce malicious code into the software almost unnoticed. In the end, it was only thanks to Freund’s interest in performance that the attack was discovered and the attempt to insert a backdoor into a tool failed.
US officials also see authorities and institutions as being particularly threatened in this case, even if the attack appears to be rather untargeted and designed for mass use. The issue is complex and far from over, let alone fully understood. One thing is certain: the usernames of the accounts used by the attackers were deliberately falsified. We will continue to report on this in the Greenbone blog.
European legislators react
Vulnerability management cannot prevent such attacks, but it provides indispensable services by proactively warning and alerting administrators as soon as such an attack becomes known – usually before an attacker has been able to compromise systems. In view of all the difficulties and dramatic incidents, it is not surprising that legislators have also recognised the magnitude of the problem and are declaring vulnerability management to be standard and best practice in more and more scenarios.
Laws and regulations such as the EU’s new NIS2 Directive make the use of vulnerability management mandatory, including in the software supply chain. Even if NIS2 actually applies only to around 180,000 organisations and companies in the critical infrastructure (KRITIS), or to “particularly important” or “significant” companies in Europe, the regulations are fundamentally sensible – and will be mandatory from October. The EU Commission emphasises that “operators of essential services” must “take appropriate security measures and inform the competent national authorities of serious incidents”. Important providers of digital services such as search engines, cloud computing services and online marketplaces must fulfil the security and notification requirements of the directive.
Mandatory from October: A “minimum set of cyber security measures”
The “Directive on measures for a high common level of cybersecurity across the Union (NIS2)” forces companies in the European Union to “implement a benchmark of minimum cybersecurity measures”, including risk management, training, policies and procedures, also and especially in cooperation with software suppliers. In Germany, the federal states are to define the exact implementation of the NIS2 regulations.
Do you have any questions about NIS2, the Cyber Resilience Act (CRA), vulnerability management in general or the security incidents described? Write to us! We look forward to working with you to find the right compliance solution and give your IT infrastructure the protection it needs in the face of today’s serious attacks.
To make our ecological progress even more sustainable, we keep up to date with regular internal training courses on energy efficiency. In this way, we are helping to make the world even “greener” outside of Greenbone.
We’re thrilled to announce the launch of something special for our beloved Volumio community: the Volumio Rivo Black Edition. This release is more than just a product variant; it’s a testament to our commitment to listening to our customers and pushing the boundaries of craftsmanship.
Crafted with meticulous attention to detail, the Volumio Rivo Black Edition is a result of our dedication to creating a product that not only meets but exceeds the expectations of our users. Many of you have expressed a desire for a Volumio Rivo streamer with a sleek black front panel, and we’ve taken your feedback to heart.
But this edition is more than just a color change. It’s a labor of love, carefully designed and handcrafted in the heart of Florence, Italy. We wanted to leverage the rich tradition of Italian craftsmanship to bring you a product that not only sounds exceptional but also looks stunning in any environment.
The front panel of the Volumio Rivo Black Edition is made of upcycled black leather, boasting a unique and captivating texture that sets it apart from any other streamer on the market. This choice not only adds a touch of luxury but also aligns with our commitment to sustainability.
In celebration of Volumio’s recent iF Design Award win, we wanted the Rivo Black Edition to represent a fusion of design and environmental sustainability.
Collaboration with Apellelab for Eco-Friendly Design
The Volumio Rivo Black Edition was created in partnership with ApelleLab, an artisanal leather workshop based in Florence, which incorporated upcycled leather into the design of the Rivo Black Edition.
This collaboration elevates the aesthetic appeal of our bestselling streamer, and it also reduces environmental impact by reusing premium materials. ApelleLab is a hub of leather craftsmanship operating within a circular economy framework committed to upcycling and recycling. It serves as a creative space where artisanal connections are forged, breathing new life into traditional leatherworking techniques. We could not have found a better partner to bring this vision to life!
The Rivo Black Edition inherits its predecessor’s unparalleled performance and versatility, featuring pure digital transport and an extensive array of digital outputs for seamless integration with a wide range of audio setups. With support for high-resolution audio playback and intuitive user interface options, it delivers an exceptional listening experience for audiophiles and music enthusiasts.
Volumio Rivo Black Edition is a limited edition offered at the same price as the Classic version. With its sleek design, uncompromising performance, and environmentally conscious ethos, it stands as a symbol of Volumio’s dedication to pushing boundaries and shaping the future of audio technology.
This is a limited edition with a very limited number of pieces available, so get yours while they last. Once they are gone, they are gone!