July 11, 2025

VyOS

VyOS Stream 1.5-2025-Q2 is available for download

Hello, Community!

VyOS Stream 1.5-2025-Q2 and its corresponding source tarball are now available for download. This is the second VyOS Stream release on the way to the upcoming VyOS 1.5 LTS. It includes multiple bug fixes and improvements, among them a new implementation of WAN load balancing, a general mechanism for allowing conntrack-unfriendly protocols in transparent bridge firewalls, a fix for CVE-2025-30095 (active MitM in console server SSH connections) that was already delivered in VyOS 1.4.2, and more.

 

11 July, 2025 06:55PM by Daniil Baturin (daniil@sentrium.io)

Ubuntu developers

Ubuntu Blog: From sales development to renewals: Mariam Tawakol’s career progression at Canonical

Career progression doesn’t follow a single path – and at Canonical, we embrace that. Our culture encourages individuals to explore roles aligned with their evolving skills and interests, even if it means stepping into a completely new technical space. Internal mobility is more than just a policy here;  it’s something we actively support and celebrate.

In this edition, we’re excited to spotlight the journey of one of our team members who took that leap and shaped a fulfilling new chapter at Canonical. Meet Mariam Tawakol. 

Mariam is a Renewals Account Executive at Canonical, where she helps drive customer retention and supports Canonical’s global customer base.

Mariam’s journey with Canonical

When did you start with Canonical and what was your original role?

I joined Canonical just over two years ago, in July 2023, as a Sales Development Representative (SDR) for the Middle East and Africa (MEA) region. SDRs are the first point of contact for our prospects at Canonical, with a focus on inbound leads. Their goal is to qualify prospects and secure meetings. As an SDR, you work closely with our Marketing and Sales teams, bridging departments and building pipeline.

The role is demanding but rewarding, as you’re often the first to connect with prospects and identify potential customers. At the same time, you play a pivotal role in Canonical’s growth. What attracted me most to join Canonical as an SDR was the chance to be part of an important mission and contribute to the company’s growth both globally and in the MEA region.

What did you switch to and when?

When I first joined Canonical as an SDR, I didn’t want to rush into deciding my next step. Being new to both the role and the industry, I wanted to take the time to truly understand the job, the company, and the different paths available to me. About six months in, I began thinking more seriously about what I wanted to do after completing my SDR and Business Development Representative (BDR) terms. I spoke with different teams, explored the options, and found myself really interested in the Renewals team.

After finishing my SDR term, which focused on inbound leads, I moved to the BDR team, focusing on outbound lead generation. While in that role, I also began training for Renewals. After one quarter as a BDR, I officially joined the Renewals team full-time. Overall, the transition from SDR to BDR to Renewals took about a year and a half.

What was the reason behind your decision?

I was really drawn to the Renewals team because it offered a great opportunity to build relationships with customers. As an SDR and BDR, the relationship-building part of the job was always my favorite, so moving to Renewals felt like a natural way to keep doing what I loved the most. It’s especially rewarding working as a Renewals rep at a company like Canonical, where many users have a long history with Ubuntu. In this role, I get to hear their stories firsthand and understand how much the product truly means to them.

Also, I may be biased, but the Renewals team is pretty great. I enjoy working alongside people I can learn from, and having such a supportive and encouraging manager definitely makes the job even better!

Mariam enjoying Cape Town’s waterfront during a team outing at a work sprint in South Africa.

What was the process and how long did it take? 

Since I had been joining in-person work sprints as an SDR and already knew a few people on the Renewals team, I ended up having a casual chat with two team members to learn more about the role. At the time, I still had to complete my SDR term, and there wasn’t an open position, but expressing my interest early on really helped. A few months later, when a spot did open up, the people I had spoken with recommended me for the role.

From there, I had an interview with my current manager where we discussed my interest in the role, expectations, and next steps. I’d say the application process was pretty straightforward. The actual move took about three months, and after completing a full quarter as a BDR, I officially joined the Renewals team full-time.

What advice would you give to readers considering a career at Canonical? 

Canonical is a fantastic place to work, and the SDR team is an excellent place to start your journey. As an SDR, you’re perfectly positioned within the organization because you work across multiple teams, serve as the first point of contact for customers, and gain a great opportunity to learn about the company. The SDR team at Canonical is truly incredible!

Join the team 

At Canonical, growth is encouraged through a mix of personal initiative, supportive leadership, and a wide range of opportunities.  The story you’ve just read is one of many. We look forward to sharing more journeys like this in future articles as we continue highlighting the diverse paths our team members take.

If you’re thinking about your next move, or even your first one, take a look at our open roles. There’s no telling where that first step might lead.

11 July, 2025 04:30AM

Qubes

XSAs released on 2025-07-08

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • XSA-471

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • (none)

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

11 July, 2025 12:00AM

QSB-108: Transient Scheduler Attacks (XSA-471)

We have published Qubes Security Bulletin (QSB) 108: Transient Scheduler Attacks (XSA-471). The text of this QSB and its accompanying cryptographic signatures are reproduced below, followed by a general explanation of this announcement and authentication instructions.

Qubes Security Bulletin 108


             ---===[ Qubes Security Bulletin 108 ]===---

                              2025-07-08

                Transient Scheduler Attacks (XSA-471)

Changelog
----------

2025-07-08: Original QSB
2025-07-11: Revise language

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

On 2025-07-08, the Xen Project published XSA-471, "x86: Transient
Scheduler Attacks" (CVE-2024-36350, CVE-2024-36357) [3]:
| Researchers from Microsoft and ETH Zurich have discovered several new
| speculative sidechannel attacks which bypass current protections.
| They are detailed in a paper titled "Enter, Exit, Page Fault, Leak:
| Testing Isolation Boundaries for Microarchitectural Leaks".
| 
| Two issues, which AMD have named Transient Scheduler Attacks, utilise
| timing information from instruction execution.  These are:
| 
|   * CVE-2024-36350: TSA-SQ (TSA in the Store Queues)
|   * CVE-2024-36357: TSA-L1 (TSA in the L1 data cache)

For more information, see also [4], [5] and [6].

Impact
-------

On affected systems, an attacker who manages to compromise a qube may be
able to use it to infer the contents of arbitrary system memory,
including memory assigned to other qubes.

As noted in XSA-471, the paper [6] also describes two Rogue System
Register Read (sometimes called Spectre-v3a) attacks, namely
CVE-2024-36348 and CVE-2024-36349. However, these are not believed to
affect the security of Qubes OS.

Affected systems
-----------------

Only AMD CPUs with Zen 3 or Zen 4 cores are believed to be affected
(CPUID family 0x19). For a more detailed list, see [5].

Patching
---------

As of this writing, AMD has published only non-server CPU microcode
updates via the linux-firmware repository. [7] They have not yet
published microcode updates for server CPUs. When this happens, we will
provide an updated amd-ucode-firmware package. Users with server CPUs
may be able to obtain the relevant microcode update via a motherboard
firmware (BIOS/UEFI) update, but this depends on the motherboard vendor
making such an update available. The appendix of [4] (page 5) contains a
table showing the minimum microcode version required for mitigating
transient scheduler attacks for different CPUs. The required microcode
version (not to be confused with the amd-ucode-firmware package version)
depends on the CPUID family/model/stepping. Users can compare the
values from the table with their own system's family/model/stepping and
current microcode version, which can be viewed by executing the command
`cat /proc/cpuinfo` in a dom0 terminal.

On affected systems with non-server CPUs, the following packages contain
security updates that address the vulnerabilities described in this
bulletin:

  For Qubes 4.2, in dom0:
  - Xen packages, version 4.17.5-10
  - amd-ucode-firmware version 20250708-1

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages should be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new Xen
binaries.

Credits
--------

See the original Xen Security Advisory and linked publications.

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-471.html
[4] https://www.amd.com/content/dam/amd/en/documents/resources/bulletin/technical-guidance-for-mitigating-transient-scheduler-attacks.pdf
[5] https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7029.html
[6] https://www.microsoft.com/en-us/research/publication/enter-exit-page-fault-leak-testing-isolation-boundaries-for-microarchitectural-leaks/
[7] https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/amd-ucode/README

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: qsb-108-2025.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmhxKTUACgkQ1lWk8hgw
4Go/kg//X4Vtf3hJSfyKRZLSf9Hd336j+BUk07hOQT/KsTo9ETuIWp0acUlvrTgy
TJcZ+L6vFyqoBRGFDj4I3x2f3OQ4V1gOCyha6XJqeMnxPh/hJ39tAFq8f7eEgha4
URi0jP9x6v22/UjBz3OHmrlVWOP3UnLq3jO4Hnsdcf+ark/nasX76YQ4+gnjuLwQ
8uMOpyt8eBs0/kfQS4yrqvxbLUTkIqQ3Nb8fx/I316xyc1s4mIwSNBNdG583Ql/r
sXyREUN3E8NnwUsnXLDLRg068aFCRvkmtpRPsfRgcLGsAhHro5Vo+m2Hv6HJticW
IjOGVpFW5TxS+coZP8/dSs5b6pfa/0MBlIwUfJuvPhmH1lGOnrAigPrjCCS6xRWA
AI/Idqyb/aRO1vPpj7Z+qe5swHJZ3UjzslizNxqtKcC3O2ZX75XoCRfVz75mZ41s
mE2HfIFHAk2axgAoNS8+xo4AkAlbTHeQxCpL6aGRkFOVCVHCiHEvb4B7KgRoBGiD
xdsgzoTOk+nxWrKl6AJiKjQtDn8I0oiyVDeeH4xXr5dg13TqmiDnVvTQJYISD7Uk
q7mBgZeR0u3pJm+GHgPq2wzyHsXdoqHWIfiVFy+3WkPmhrsIfrGnjUX+dnWRJ41V
vVwscsSAYR6/xgXHHu7mf9yP3B+xqaXGoCKzZN8zu7PTedTrERw=
=Inl1
-----END PGP SIGNATURE-----

Source: qsb-108-2025.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmhxJ3oACgkQSsGN4REu
FJBPxQ//VhaBX7JK4hiVSIARXOcGmgEq/5Oh1LHwpOgY9xMeDfA7rfcLJ8voXpnL
Tkjqglxj5uETUQpnJ4etmnWx4BggUWq7ePEJhTLpdoT6wkwRjpQb68O00eTr/Wta
Ib/GuDeUbKX0r17AgkcXlEnyXcjFamBzq44uWUy10c6AnmdR+CBFuMC7+aHUARju
tDQQRm/4JJLClgUPsvslcyxvp7pQ9CsVzjrZAEOe3znUVAUGU6hfNwbm7VOMaDBi
kXT3LjcEGDvrSBiwgDgrOdykon/h5dTd1XYAf22i7KCjUO6+KBrwp/kV0j1LvvJq
iH/WM+P1fp+FKy52KuAr/O07FtC3RHyNGyuj/5rfhyKF6YNIXU3UUus5e1kMI9oZ
go6A3v8mJAR4ewfwmqSeIEDO0bMYE79uH8yd+0tH0iwAdVux53Mwgjf55uw2oBgf
BM5jcV0CFBNiOiGisowA3OJT5P5faOVYc65ZjliU6icU08Ysw4hhe3CZKtKbL9VR
4V+SIbo4VZY3OrhC9GJa0meW/HLo4nskBlTu/GwglPDlKaddkniou7BtVUFelsxv
Rw7IQ2Yr95g9TJWxjYuW9Qfqo53YEvgYvF3QttOnR2Uvf1b3bTfATJ5RedUV7YG4
mzbfpBFNdZwMBSv9mumlwe0I5Hq4xeQ6qaq3URmLTOJlh/VpqzA=
=JLcN
-----END PGP SIGNATURE-----

Source: qsb-108-2025.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

Why should I care about QSBs?

QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.

What are the PGP signatures that accompany QSBs?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.

Why should I care whether a QSB is authentic?

A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a QSB?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (QSB-108), the commands are:

$ gpg --verify qsb-108-2025.txt.sig.marmarek qsb-108-2025.txt
$ gpg --verify qsb-108-2025.txt.sig.simon qsb-108-2025.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-108 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

11 July, 2025 12:00AM

July 10, 2025

Ubuntu developers

The Fridge: Ubuntu 24.10 (Oracular Oriole) reached End of Life on 10th July 2025

This is a follow-up to the End of Life warning sent earlier to confirm that as of 10th July 2025, Ubuntu 24.10 is no longer supported. No more package updates will be accepted to 24.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

Additionally, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 24.10.

The supported upgrade path from Ubuntu 24.10 is to Ubuntu 25.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/PluckyUpgrades
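
For readers who prefer to upgrade from the command line, the flow typically looks like the sketch below; the page linked above remains the authoritative reference, and the exact prompts may differ on your system.

# Bring 24.10 fully up to date first
sudo apt update && sudo apt full-upgrade

# Make sure the release upgrader is installed and set to follow normal releases
sudo apt install update-manager-core
# (Prompt=normal in /etc/update-manager/release-upgrades)

# Start the interactive upgrade to Ubuntu 25.04
sudo do-release-upgrade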

Ubuntu 25.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 10 20:46:06 UTC 2025 by Utkarsh Gupta on behalf of the Ubuntu Release Team

10 July, 2025 10:50PM

Ubuntu Blog: In pursuit of quality: UX for documentation authors

Canonical’s Platform Engineering team has been hard at work crafting documentation in Rockcraft and Charmcraft around native support for web app frameworks like Flask and Django. It’s all part of Canonical’s aim to write high quality documentation and continuously improve it over time through design and development processes. One way we improve our documentation is by engaging with our team members and the external community. Their perspectives and feedback provide valuable insight into our product design, clarify any confusing explanations, and enhance the user experience (UX) of the tooling.

We’ve focused on making this documentation user-friendly – but how do we ensure that our documentation truly benefits our readers?

Since last November, we’ve been testing tutorials for the various frameworks we support, conducting a total of 24 UX sessions (so far!). The participants spent their valuable time and energy working their way through our tutorials, allowing us to observe their attempts and collect their feedback on the instructions and explanations.

How we chose participants

We created the web app framework support as an approachable introduction to Canonical products through a familiar entry point for most users: web app development. Our goal was to attract a wide variety of users, from seasoned engineers to newcomers. To do so, we collaborated with our internal teams, like Web, who use Canonical products every day, and reached out to external developers through online communities and conferences. To make sure our documentation met real-world needs, we actively sought feedback from those who were unfamiliar with Canonical. We even tested the experience with university students to confirm it would be accessible across all skill levels.

The sessions

After recruiting each participant, we began the most important phase: the sessions themselves. We carefully crafted these sessions to provide a consistent, comfortable experience for the participant, encouraging their honest feedback about anything – and everything! – in the tutorial.

A typical session begins with a few quick questions to understand each participant’s background, so we can contextualize their experiences. Then we begin the tutorial. We observe what the participant notices, how they interpret the instructions, and what obstacles they run into. After they complete the tutorial, we ask a set of post-session questions to collect their overall feedback and explore whether the tooling meets their expectations of the upstream framework.

What we learned about documentation UX

I’ve felt the full spectrum of human emotions over the course of the 24 sessions. First, there’s a great deal of helplessness that comes from writing and publishing documentation – as soon as the documentation is out in the world, I’m powerless to help my readers! I found it surprisingly difficult to watch users run into problems that I couldn’t help them solve. Thankfully, the engineers were there to provide some aid, although even that wasn’t enough at some points. The sessions have been a learning opportunity for me to accept the helplessness that comes with the author role.

Along with helplessness, there were also plenty of moments where I felt panic. There’s an element of risk associated with documentation: Sometimes, I would argue for documentation changes, thinking that they would provide better UX or mitigate confusion, only for those changes to blow up in my face in real time. I’ve learned to keep a straight face, and I accept any criticism or feedback directed at the changes I pushed for. New ideas (at least in documentation) are definitely worth trying, but they only become quality ideas once proven through UX.

Most of the time, the sessions were silent, and I struggled to keep my attention on the participants and their actions. There are many points in the tutorials where the user has to wait – for software to download, for rocks and charms to pack, for their apps to deploy, and so on. It’s very tempting to look away in those moments and focus on other activities, but as I learned, important observations and details can emerge at any time and stage. Paying attention, even in the most innocuous moments, is a vital part of understanding the participant’s experience and their feedback.

The participants provided insightful feedback about both the tooling and the documentation. Here are some of the most common themes we noticed:

  • When testing with university students, we found that these participants became stuck when they were asked to create a new text file from the terminal. This session marked their very first time using a terminal text editor, and we hadn’t accounted for this momentous occasion in our instructions. (I felt quite a bit of panic in these moments, too!)
  • Participants working on ARM64 machines commented on the incomplete experience, as later parts of the tutorial were only compatible with AMD64 machines.
  • We found some common places where participants would miss an instruction, causing them to experience issues down the line. The participants noted that the instructions felt “buried” in the text and wished the tutorial better highlighted their significance and impact.
  • External participants asked for more explanations of Canonical products and how the tooling works – they were curious and interested in digging into the “why” behind the tutorial.

Prioritizing and acting on feedback

For each of the sessions, we consolidated all observations into individual documents. Then we collected all the direct feedback and suggestions into a main document; for the Flask tutorial, the main feedback document spans 16 pages. From there, the project lead, UX designer, technical author (myself!), and the engineers discuss the feedback to determine how we will incorporate it. While prioritizing feedback, we account for the following considerations:

  • Blocking issues: Prioritize feedback pointing out major issues.
  • Isolated incidents: Identify feedback where more research is needed.
  • Design trade-offs: Respond to feedback based on specific design choices made.

We incorporate feedback in small batches over time, prioritizing major blockers and typos. This way, we can resolve issues quicker, meaning our readers reap the benefits right away!

We’ve found that the changes proposed by earlier UX sessions have improved the quality and outcome of later sessions. Common pitfalls in the first couple of sessions are no longer an issue. Questions about how the tooling works come up less. And – some of you will be glad to hear – users with ARM64 machines can go through the entire tutorial.

Get involved: help us improve

There are always improvements to make in our documentation, and these UX sessions are a great way for us to include our community members and make our documentation more accessible. If you’re interested in getting involved, please reach out to us on our public Matrix channel!

10 July, 2025 03:01PM

Salih Emin: How to use an AI in terminal with free models

How would you feel if you could have access to AI models, many of which are free, directly from the comfort of your terminal?

In this article, we will explore how you can achieve this using the OpenRouter.ai platform and a simple script I’ve prepared, called ai.sh (… very original… I know).

Free AI in your Terminal with OpenRouter and ai.sh

ai.sh is a simple terminal app to use OpenRouter.ai with your personal API keys. Ask questions directly from your terminal and get AI responses instantly.
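
To give a sense of what such a script does under the hood, here is a minimal sketch of the core request, assuming OpenRouter’s OpenAI-compatible chat completions endpoint and the .env variables described later in this post; the actual ai.sh may differ in its details.

#!/usr/bin/env bash
# Minimal sketch: send a single question to OpenRouter and print only the reply text.
set -euo pipefail

# Load OPENROUTER_API_KEY (and optionally OPENROUTER_MODEL) from .env
source .env
MODEL="${OPENROUTER_MODEL:-mistralai/mistral-small-3.2-24b-instruct:free}"

# Everything passed on the command line becomes the question
PROMPT="$*"

curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer ${OPENROUTER_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg model "$MODEL" --arg prompt "$PROMPT" \
        '{model: $model, messages: [{role: "user", content: $prompt}]}')" \
  | jq -r '.choices[0].message.content'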

Features of ai.sh

  • 🚀 Simple command-line interface
  • 🔑 Secure API key and model management with .env file
  • 🎯 Clean output (only the AI response, no JSON clutter)
  • 💬 Natural language queries
  • 🆓 Uses free Mistral model by default
  • ⚡ Stateless design – each query is independent (no conversation history)

Important Note

This tool is designed for single-question interactions. Each time you run the script, it sends an independent query to the AI model without any memory of previous conversations. If you need multi-turn conversations with context retention, consider using the OpenRouter web interface or building a more advanced wrapper that maintains conversation history.
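
As a rough illustration of what “maintaining conversation history” could look like (this is not part of ai.sh, just a sketch), a wrapper might store the message list in a JSON file and send the whole list with every request:

# Keep a running message list in a JSON file
HISTORY_FILE="$HOME/.ai_history.json"
[ -f "$HISTORY_FILE" ] || echo '[]' > "$HISTORY_FILE"

# Append the new user question to the stored list
jq --arg q "$*" '. + [{role: "user", content: $q}]' "$HISTORY_FILE" > "$HISTORY_FILE.tmp" \
  && mv "$HISTORY_FILE.tmp" "$HISTORY_FILE"

# The request body would then send the full list instead of a single message, e.g.:
#   jq -n --arg model "$MODEL" --slurpfile msgs "$HISTORY_FILE" \
#     '{model: $model, messages: $msgs[0]}'
# and the assistant's reply would be appended to the file in the same way.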

Prerequisites

  • curl (usually pre-installed on most systems)
  • jq for JSON parsing

Installing jq

Ubuntu/Debian:

sudo apt install jq

CentOS/RHEL/Fedora:

sudo yum install jq
# or for newer versions:
sudo dnf install jq

macOS:

brew install jq

Windows (WSL):

sudo apt install jq

Installation

  1. Clone or download this repository:
   git clone https://github.com/SynergOps/openrouter.ai
   cd openrouter.ai
  2. Ensure the script is executable:
   chmod +x ai.sh
  3. Get your OpenRouter API key:
  • Go to OpenRouter.ai
  • Sign up for a free account using your GitHub account
  • Navigate to the models page and find Mistral Small 3.2 24B (free)
  • Open the API keys section
  • Generate a new API key
  4. Configure your API key and model:
  • While you are in the openrouter.ai folder, edit the .env file template and add your API key:
   vim .env
  • Add your API key and optionally configure the model:
   # OpenRouter API Configuration
   OPENROUTER_API_KEY=sk-or-v1-your-actual-api-key-here
   # OpenRouter Model Configuration (optional - leave empty for default)
   OPENROUTER_MODEL=

Usage

Basic Usage

./ai.sh your question here

Examples

# Ask a simple question
./ai.sh what is the meaning of life

# Ask for coding help
./ai.sh how do I create a function in Python

# Ask for a definition
./ai.sh define recursion

# Ask for a summary
./ai.sh summarize the plot of "The Hitchhiker's Guide to the Galaxy"

Sample Output

$ ./ai.sh what is the meaning of 42
The number 42 is famously known as "The Answer to the Ultimate Question of Life, the Universe, and Everything" from Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy."

Creating a Terminal Alias (Recommended)

For easier access, you can create an alias so you can use the script from anywhere without typing the full path:

Option 1: Temporary alias (current session only)

alias ai='/path/to/your/openrouter.ai/ai.sh'

Option 2: Permanent alias (recommended)

  1. For Bash users – Add to your ~/.bashrc or ~/.bash_profile:
   cd /path/to/your/openrouter.ai
   echo "alias ai='$(pwd)/ai.sh'" >> ~/.bashrc
   source ~/.bashrc
  2. For Zsh users – Add to your ~/.zshrc:
   cd /path/to/your/openrouter.ai
   echo "alias ai='$(pwd)/ai.sh'" >> ~/.zshrc
   source ~/.zshrc
  3. Manual method – Edit your shell config file:
   # Open your shell config file
   nano ~/.bashrc  # or ~/.zshrc for Zsh users

   # Add this line (replace with your actual path):
   alias ai='/full/path/to/openrouter.ai/ai.sh'

   # Reload your shell config
   source ~/.bashrc  # or source ~/.zshrc

After setting up the alias, you can use it from anywhere:

# Instead of ./ai.sh question
# Works from any directory
cd ~/Documents
ai explain machine learning

Configuration

Changing the AI Model

You can change the AI model by editing the OPENROUTER_MODEL variable in your .env file:

# Leave empty or unset to use the default model
OPENROUTER_MODEL=

# Or specify a different model
OPENROUTER_MODEL=qwen/qwq-32b:free

Popular free models on OpenRouter include:

  • mistralai/mistral-small-3.2-24b-instruct:free (default)
  • qwen/qwq-32b:free
  • deepseek/deepseek-r1-0528:free
  • google/gemini-2.0-flash-exp:free

Note: If OPENROUTER_MODEL is not set or left empty, the script will use the default Mistral model.

License

This project is licensed under the Apache-2.0 license – see the LICENSE file for details.

Support

If you encounter any issues or have questions, please open an issue on GitHub.

Check also, my other projects in the download page

The post How to use an AI in terminal with free models appeared first on Utappia.

10 July, 2025 12:21PM

Ubuntu Studio: Ubuntu Studio 24.10 Has Reached End-Of-Life (EOL)

As of July 10, 2025, all flavors of Ubuntu 24.10, including Ubuntu Studio 24.10, codenamed “Oracular Oriole”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 25.04 via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that are between the Long-Term Support releases, are supported for 9 months and users are expected to upgrade after every release with a 3-month buffer following each release.

Long-Term Support releases are identified by an even-numbered year of release and a release month of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases of official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five years), meaning LTS users are expected to upgrade after every LTS release with a one-year buffer.

10 July, 2025 12:00PM

Ubuntu Blog: Canonical announces Charmed Feast: A production-grade feature store for your open source MLOps stack

July 10, 2025: Today, Canonical announced the release of Charmed Feast, an enterprise solution for feature management that integrates seamlessly with Charmed Kubeflow, Canonical’s distribution of the popular open source MLOps platform. Charmed Feast provides the full breadth of upstream Feast capabilities, adding multi-cloud deployment options and comprehensive support.

Feast is an open source operational data system for managing and serving machine learning (ML) features to models during both training and inference. It acts as a bridge between data engineering and machine learning, enabling consistent access to feature data in real-time and batch environments. Feast simplifies the process of building, versioning, and deploying features so teams can reuse features across workflows and reduce duplication of effort, which helps them scale their machine learning initiatives more efficiently and deliver intelligent applications faster and with greater consistency.

Feature stores play a critical role in the AI/ML lifecycle. From model development and fine tuning to serving data in production for inference—a Feature store is a critical tool for production AI. Feast bridges the gap between Data Scientists, Data Engineers, MLOps Engineers, and Software Engineers to give users tooling to take their software to production.  Building on its deep integration with Kubeflow, Feast is now charting an even brighter future with investments in generative AI and retrieval-augmented generation (RAG).

Francisco Javier Arceo
Feast Maintainer & Kubeflow Steering Committee member

Charmed Feast is a fully supported, production-grade distribution of Feast. Designed to seamlessly integrate with Charmed Kubeflow and the rest of Canonical’s MLOps and big data ecosystem (which also includes MLFlow, OpenSearch, and Spark), Charmed Feast simplifies feature engineering and delivery, enabling teams to reproducibly manage features and serve them to models in production with ease. With Charmed Feast, organizations get the power of open source, backed by Canonical’s operational excellence and security guarantees, with long-term support, regular security updates, and optional 24/7 enterprise-grade SLAs.

Built-in integration with Charmed Kubeflow and the Canonical MLOps ecosystem

Effortless deployment and seamless integration with Charmed Kubeflow enable teams to serve consistent features to both training and production environments within Kubeflow pipelines, minimize retraining inefficiencies, and prevent performance drops. Charmed Feast can run on the same Kubernetes cluster as Charmed Kubeflow, serving as a unified source of feature data for pipelines and model serving. Data teams can manage, version, and serve features from a single platform, reducing training-serving skew and accelerating deployment cycles while maintaining reliability.

Operationalizing ML pipelines at scale

As a new addition to Canonical’s growing MLOps and big data portfolio, Charmed Feast helps unify and operationalize the full machine learning lifecycle. Charmed Feast is delivered as a Juju charm (charmed operator), making installation and management highly automated and portable. With a single Juju command, teams can deploy Charmed Feast on any infrastructure – from the public cloud to an on-premises Kubernetes cluster. This model-driven approach simplifies scaling and lifecycle management: the charm encapsulates best practices for configuring Feast, handling updates, and integrating with other services. As a result, data engineers get a consistent deployment experience across environments, and can easily move feature store workloads between clouds or data centers without retooling.
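
As an illustration of that model-driven workflow (the charm and model names below are hypothetical placeholders; see the Charmed Feast documentation for the actual deployment commands):

# Create a Juju model for the feature store on an existing Kubernetes cloud
juju add-model feature-store

# Deploy the Charmed Feast application (charm name used here is a placeholder)
juju deploy feast-k8s --trust

# Check that the application settles into an active state
juju status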

Strengthening the open source community

Feast has long been a first-class add-on in the Kubeflow ecosystem, with both communities working closely to ensure a smooth and robust integration experience for users. Canonical actively contributes to the Kubeflow project and maintains a strong relationship with the Feast community. By packaging Feast as a charm and including it in our supported MLOps platform, we help amplify upstream innovation while making it more accessible and enterprise-ready. This approach enables users to adopt open standards confidently, with the peace of mind that comes from Canonical’s engineering support.

Additional benefits of Charmed Feast

  • Simple, per-node pricing: Charmed Feast is part of Canonical’s data and AI portfolio. Customers can subscribe to enterprise support through Ubuntu Pro + Support on a predictable per-node basis. This subscription covers the full suite of Canonical’s charmed applications – including Charmed Kubeflow, MLFlow, Spark, OpenSearch, PostgreSQL, MongoDB, and Kafka – without additional software license fees. This makes budgeting and financial planning straightforward, while giving teams the freedom to deploy integrated solutions like Charmed Feast at no extra cost.
  • Long-term security and support: Canonical provides up to 10 years of security maintenance and CVE patching for Charmed Feast. Optional 24/7 enterprise support ensures high availability and stability for production workloads, so you can confidently operate mission-critical feature management services.
  • Fully managed option available: Canonical’s Managed MLOps service offers automation, scalability, availability, observability integration, and hands-on support from trusted experts. This option helps teams reduce operational complexity and concentrate on building data-driven applications instead of maintaining infrastructure.

Start building with Charmed Feast today

With Charmed Feast, you can bring order to your feature engineering workflows, reduce inconsistencies across environments, and deploy ML pipelines with greater speed and reliability. Whether you’re just getting started or scaling enterprise workloads, Charmed Feast offers the right balance of open source flexibility and production-grade assurance.

To get started with Charmed Feast, refer to the documentation. For more information, visit https://canonical.com/mlops/feast.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone. 

Learn more at https://canonical.com/ 

10 July, 2025 09:00AM

Ubuntu developers

Podcast Ubuntu Portugal: E355 Mil E Uma Receitas

Diogo lost his voice because of the codfish, Miguel killed a transformer, Canonical killed the Oriole (and has plenty of recipes), and Multipass is now fully open. In this episode, Diogo got his hands dirty using Chato Gê-Petas, we learned where the islands of Malta and Gozo are, how Jogos Sem Fronteiras could save video games, and also how not to be afraid of going to speak at the Festa do Software Livre. Are you using Ubuntu 24.10? We have bad news and good news - which do you want to hear first?…

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the MIT License (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. The terrible-quality interludes were played live by Miguel, for which we apologise for any inconvenience caused. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation. The episode art was commissioned from Shizamura - artist, illustrator and comic author. You can get to know Shizamura better on Ciberlândia and on her website.

10 July, 2025 12:00AM

July 09, 2025

Purism PureOS

Google to Pay Texas $1.4 Billion to End Privacy Cases

Google’s $1.375 billion settlement with Texas is a wake-up call for digital privacy (Bloomberg Law).

The post Google to Pay Texas $1.4 Billion to End Privacy Cases appeared first on Purism.

09 July, 2025 08:09PM by Purism

Ubuntu developers

Ubuntu Blog: Raising the bar for automotive cybersecurity in open source – Canonical’s ISO/SAE 21434 certification

Cybersecurity in the automotive world isn’t just a best practice anymore – it’s a regulatory imperative. With vehicles becoming software-defined platforms, connected to everything from mobile phones to cloud services, the attack surface has expanded dramatically. The cybersecurity risk is serious and concrete. And with regulations like UNECE R155 making cybersecurity compliance mandatory, the automotive industry needs suppliers it can trust.

Canonical’s processes are now officially ISO/SAE 21434 certified. That’s a big deal for us, and for the broader ecosystem of automakers, Tier 1s, and software developers building the vehicles of tomorrow. Let’s break down what this means, why it matters, and what comes next.

What the certification covers

ISO/SAE 21434 is the international gold standard for cybersecurity risk management across a vehicle’s lifecycle. Our certification covers the development of Ubuntu and related tooling, including the packaging and maintenance of open source software.

ISO/SAE 21434 is a rigorous review of our processes, supply chain security, documentation, tooling, and development practices. The certification required a review of everything from how we handle upstream patches to how we respond to CVEs – checking that everything is designed to ensure that our software can be safely used in production automotive environments.

This achievement was years in the making, and represents a major investment in aligning our development lifecycle with the needs of regulated industries.

Why it matters

This answers a basic question for OEMs and Tier 1 suppliers: Is open source software capable of meeting cybersecurity requirements for use in automobiles? With Canonical’s ISO/SAE 21434 certification, the answer is clear: yes.

You get the velocity, transparency, and flexibility of open source – backed by processes that meet the strictest standards in the industry.

In particular, the certification reinforces that open source software can meet the same high standards of cybersecurity as proprietary alternatives. With ISO/SAE 21434 certification in place, there’s no structural reason preventing open source from being used in modern automotive systems – especially in the context of software-defined vehicles (SDVs), where ease of modification, modularity, and freedom from dependency are essential. Canonical’s approach proves that open source can deliver the same level of assurance required by the industry’s most demanding security frameworks.

Consolidated Vehicle Server Architecture illustration

What it unlocks

This certification clears the road ahead for automotive-grade open source.

  • Teams evaluating Ubuntu for in-vehicle systems or automotive tooling no longer need to audit our processes from scratch, enabling faster integration.
  • Canonical now formally meets the cybersecurity expectations of OEMs operating under UNECE R155, offering assurance in procurement.
  • We support threat modeling, vulnerability handling, and supply chain traceability aligned with ISO/SAE 21434 – giving you a standardized approach to risk management.

What’s next?

Canonical’s certification is a major step in our broader journey to deliver automotive-grade open source solutions. As the industry increasingly moves toward SDV architectures, we are continuing to invest in initiatives around software quality, process maturity, and functional safety readiness.

Our next efforts will further support OEMs and Tier 1s in their compliance and product quality goals – including areas like qualification, code analysis, and robust testing strategies.

With ISO/SAE 21434 now in place, we’re doubling down on our commitment to make open source the most trusted option for next-generation vehicles. For more insight, read our blog on why Canonical has decided to join various consortiums.

Stay tuned, or reach out to our team to talk more about what Canonical can do for your vehicle programs.

Contact Us

Curious about Automotive at Canonical? Check out our webpage!

Want to learn more about software-defined vehicles? Download our guide!

09 July, 2025 09:36AM

July 08, 2025

ARMBIAN

Armbian Development Highlights

Armbian Development Report: Continued Progress and Community Momentum

Over the past two weeks, the Armbian project has made steady and meaningful progress across core infrastructure, board support, and kernel development. From bootloader improvements to expanded hardware compatibility, our contributors continue to push the platform forward. This update highlights recent technical advancements, bug fixes, and community contributions that help power the Armbian ecosystem.

 

Highlights

  • Pcduino2/3 Gain HDMI and Display Fixes
    HDMI output is now supported, and a regression affecting display output on Pcduino2 and Pcduino3 boards has been resolved.
    #8341

  • Key Bootloader and Memory Enhancements
    Updates include a boot fix for Inovato Quadra, u-boot bumps for Banana Pi Zero3 and 2W, and the addition of 1.5GB memory support.
    #8334

  • Enhanced Repository Security
    Improvements include a new signing key, dual signing support, and better GPG key handling via APA.
    #8323, #8320, #8316

  • Improved TI Board Support
    Texas Instruments boards now benefit from a custom Debian repo, pre-installed packages, and a Real-Time (RT) kernel config option.
    #8305, #8280

  • Meson64 Security Boost
    Kernel Address Space Layout Randomization (KASLR) is now enabled by default to improve runtime security.
    #8354

📅 Stay Connected with the Community

Looking to join live chats with Armbian developers and users? The Armbian Community Calendar lists upcoming voice chats, planning sessions, and community events. Stay informed and be part of the conversation!

 


The post Armbian Development Highlights first appeared on Armbian.

08 July, 2025 11:15PM by Michael Robinson

Ubuntu developers

Scarlett Gately Moore: KDE Applications snaps 25.04.3 released, plus new snaps and fixes!

I have released 25.04.3. I have also upgraded the Qt 6 content snap to 6.9 and fixed a bug in the kde-neon* extensions with the CMake prefix path.

New snaps!

Audex: A CD ripping application.

GCompris – An excellent children’s education application

Labplot – Scientific plotting

Digikam – 8.7.0 with exiftool bug fixed https://bugs.kde.org/show_bug.cgi?id=501424

Krita – 5.2.11 – Excellent graphic art platform (comparable to Photoshop)

kgraphviewer – Graphviz .dot file viewer

I am happy to report my arm is mostly functional! Unfortunately, maintaining all these snaps is an enormous amount of work, and it takes time I don’t have! Please consider a donation for the time I should be spending job hunting / getting a website business off the ground. Thank you for your consideration!

08 July, 2025 03:25PM

Volumio

NOS Mode Now Available on Volumio Preciso

Digital music playback often involves layers of processing, such as upsampling, filtering, and error correction. These steps are designed to improve accuracy from a technical perspective. However, some listeners prefer a more direct path.

With Volumio Preciso, our high-precision dual mono DAC, you can activate NOS (Non-OverSampling) mode. This is more than just a feature. It is a different approach to audio reproduction that emphasizes timing, texture, and signal integrity.

What Happens in NOS Mode?

In standard DAC operation, incoming digital signals (such as a 44.1kHz file) are typically upsampled to higher rates before conversion to analog. This process helps reduce aliasing and simplifies analog filter requirements. However, it can also introduce digital interpolation and pre- or post-ringing effects that subtly alter the music’s character.

NOS mode avoids this process entirely:

  • No digital oversampling

  • No FIR or IIR filters

  • No mathematical reconstruction of waveforms

Instead, the original digital samples are passed directly to the DAC’s output stage. This preserves the natural timing and structure of the audio. While some high-frequency artifacts may be present, many listeners find that NOS mode produces a more natural, coherent, and emotionally engaging sound.
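To make the contrast concrete, here is a purely illustrative sketch (a toy example, not Volumio's actual signal path) of the difference between a NOS-style zero-order hold, which passes each sample through unchanged, and a naive oversampler that computes interpolated values between samples:

<?php
// Illustrative only, not Volumio's DSP: contrast a NOS-style zero-order
// hold with a naive 4x linear-interpolation "oversampler" on a few samples.
$samples = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7];
$factor  = 4;

// NOS / zero-order hold: every original sample is passed on unchanged;
// nothing new is computed between samples.
$nos = [];
foreach ($samples as $s) {
    for ($i = 0; $i < $factor; $i++) {
        $nos[] = $s;
    }
}

// Oversampling: intermediate values are reconstructed mathematically
// (linearly here; real DACs use FIR/IIR filters), smoothing the waveform.
$oversampled = [];
for ($n = 0; $n < count($samples) - 1; $n++) {
    for ($i = 0; $i < $factor; $i++) {
        $t = $i / $factor;
        $oversampled[] = (1 - $t) * $samples[$n] + $t * $samples[$n + 1];
    }
}
$oversampled[] = end($samples);

echo "NOS:         " . implode(", ", $nos) . "\n";
echo "Oversampled: " . implode(", ", $oversampled) . "\n";

The point is simply that NOS never invents new sample values, while an oversampling filter does; that mathematical reconstruction is where the interpolation and ringing effects described above come from.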

Precision Hardware to Match

NOS mode on the Preciso is supported by an advanced hardware platform:

  • Dual SABRE ES9039Q2M DACs, configured in true dual mono

  • Ultra-low phase-noise MEMS clock for accurate timing

  • OPA1612A op-amp based I/V output stage for low distortion and high linearity

  • Balanced and unbalanced analog outputs, supporting up to PCM 768kHz/32bit and DSD512

  • Linear power regulation, LC filtering, and isolated low-noise regulation for each section

For those who prefer a more traditional sound, the Preciso also includes eight selectable digital filters. These allow users to fine-tune their playback experience when NOS mode is not in use.

Why It Matters

NOS mode is not designed to appeal to everyone, and that is intentional. It is for those who want to hear their music without the influence of digital reconstruction algorithms. This mode focuses on preserving signal integrity and delivering a more direct connection to the original recording.

Whether you are rediscovering an old favorite or auditioning a high-resolution master, NOS mode offers a new way to experience music. If you already own a Preciso, be sure to update the firmware to the latest version, available here.

🛒 Want to try it yourself? Learn more about Volumio Preciso and order yours

Shop Now

The post NOS Mode Now Available on Volumio Preciso appeared first on Volumio.

08 July, 2025 03:13PM by Alia Elsaady

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What our users make with Ubuntu Pro – Episode 1

Secure homelabs – and more – for the entire family

Ubuntu Pro isn’t just for enterprises – it’s for the passionate community that powers and supports open source every day. From secure remote access to homelab hardening, Ubuntu Pro helps users get more from their systems, whether at work or at home. In this series, we talk to real users about how they use Ubuntu Pro in their personal and professional lives. We begin with Marc Grondin, a longtime Linux user and Ubuntu Pro subscriber based in Quebec, Canada.

Tell us about your projects and how you came to use Ubuntu Pro.

I started using Ubuntu Pro because my employer required support and licensing assurances for Linux systems. That got me into the ecosystem and I quickly found value beyond work. My homelab now runs entirely on Ubuntu, with four PCs and three Raspberry Pis on either Ubuntu Server or Desktop.

I use services like Livepatch and Landscape to manage everything. Even my sisters’ computers run Ubuntu now, and I support them remotely using DWService.

Are you involved in the Ubuntu or open source community?

I’ve always loved the peer-to-peer nature of it. Over the years, I’ve promoted Linux by showing people how it works, installing it for friends and family, and offering support. That’s been my way of giving back.

What first pushed you toward Linux?

Back in the early 2000s, we got a family computer running a different operating system and it had constant problems. I couldn’t print and browse the internet at the same time. Nothing worked until I tried a Linux distro from a magazine and everything just worked. That moment changed everything.

Since then, I’ve leaned into Linux fully. I joined a local user group in Quebec City, where I first encountered Ubuntu CDs being handed out, started using Ubuntu, and eventually began choosing jobs that allowed me to use Linux as my daily driver.

Do you use both the free and paid versions of Ubuntu?

Yes, I use both. I’ve activated Ubuntu Pro on personal emails and work accounts – it helps keep things clear with employers. It’s funny how many people still don’t realize you can use Linux on the desktop. When they find out I run Linux at work and home, it usually sparks some great conversations.

What made you choose Ubuntu Pro?

Livepatch and Landscape sealed the deal. They give me detailed control and visibility over all my machines  and peace of mind when it comes to Linux security, updates, and system health. Right now I’m looking at my main server. On the hardware page, you have the serial number, you have monitoring, you can launch scripts remotely, ensure packages are up to date.

There’s even an overview, like a little number three that tells me: one machine needs a reboot, three haven’t contacted Landscape within five minutes. It’s very detailed, granular monitoring and control.

I’m finding that amazing. It’s everything I need in one place.

How has Ubuntu Pro improved your workflow?

It makes everything more efficient. I use virtual desktops instead of multiple monitors, organized by task: work, personal browsing, documents, and terminals. It’s a smooth setup that helps me stay focused and productive.

With Ubuntu, I don’t deal with forced reboots or performance hiccups, which is critical when you’re in a meeting or deep in a task. It just works, and that reliability saves time and reduces frustration.

If you could sum it up, what’s Ubuntu Pro’s biggest value to you?

Stability, control, and peace of mind. I use it every day, and I’m always discovering new ways it makes my setup better. It’s been a winning formula for me. No reason to change.

A look at Marc’s estate

Marc’s LAN and WAN setup overview
Machines set up by Marc in his homelab
His Landscape setup
The remote support utility: accessed via desktop or command line (DWService)

Ubuntu Pro is always free for personal use, whether you’re running a home lab, learning new skills, or just want peace of mind with extended security and tools like Livepatch. 

Get Ubuntu Pro

Join a global community of Linux enthusiasts who believe in open source, reliability, and sharing knowledge. Do you have an Ubuntu Pro story you want to share with us? Fill out this form and we will get in touch with you!

08 July, 2025 09:05AM

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for June 2025

I. May Community Data Overview II. Community Products 1. deepin 25 Officially Released In June, the highly anticipated official release of deepin 25 made its dazzling debut! Centered around the core concept of "Innovation for All," this release deeply integrates numerous innovative features, delivering a new operating system experience that is reliable, smooth, and free. The brand-new DDE desktop environment achieves visual consistency and silky-smooth interactions through QML refactoring. The Control Center adopts an intuitive "sidebar + content area" layout, significantly improving operational efficiency. The Launcher now supports alphabetical sorting of applications. The File Manager enhances "type-to-search" functionality with keyword ...Read more

08 July, 2025 08:07AM by xiaofei

hackergotchi for Ubuntu developers

Ubuntu developers

Stuart Langridge: Making a Discord activity with PHP

Another post in what is slowly becoming a series, after describing how to make a Discord bot with PHP; today we're looking at how to make a Discord activity the same way.

An activity is simpler than a bot; Discord activities are basically a web page which loads in an iframe, and can do what it likes in there. You're supposed to use them for games and the like, but I suspect that it might be useful to do quite a few bot-like tasks with activities instead; they take up more of your screen while you're using them, but it's much, much easier to create a user-friendly experience with an activity than it is with a bot. The user interface for bots tends to look a lot like the command line, which appeals to nerds, but having to type !mybot -opt 1 -opt 2 is incomprehensible gibberish to real people. Build a little web UI, you know it makes sense.

Anyway, I have not yet actually published one of these activities, and I suspect that there is a whole bunch of complexity around that which I'm not going to get into yet. So this will get you up and running with a Discord activity that you can test, yourself. Making it available to others is step 2: keep an eye out for a post on that.

There are lots of "frameworks" out there for building Discord activities, most of which are all about "use React!" and "have this complicated build environment!" and "deploy a node.js server!", when all you actually need is an SPA web page [1], a JS library, a small PHP file, and that's it. No build step required, no deploying a node.js server, just host it in any web space that does PHP (i.e., all of them). Keep it simple, folks. Much nicer.

Step 1: set up a Discord app

To have an activity, it's gotta be tied to a Discord app. Get one of these as follows:

  • Create an application at discord.com/developers/applications. Call it whatever you want
  • Copy the "Application ID" from "General Information" and make a secrets.php file; add the application ID as $clientid = "whatever";
  • In "OAuth2", "Reset Secret" under Client Secret and store it in secrets.php as $clientsecret
  • In "OAuth2", "Add Redirect": this URL doesn't get used but there has to be one, so fill it in as some URL you like (http://127.0.0.1 works fine)
  • Get the URL of your activity web app (let's say it's https://myserver/myapp/). Under URL Mappings, add myserver/myapp (no https://) as the Root Mapping. This tells Discord where your activity is
  • Under Settings, tick Enable Activities. (Also tick "iOS" and "Android" if you want it to work in the phone app)
  • Under Installation > Install Link, copy the Discord Provided Link. Open it in a browser. This will switch to the Discord desktop app. Add this app to the server of your choice (not to everywhere), and choose the server you want to add it to
  • In the Discord desktop client, click the Activities button (it looks like a playstation controller, at the end of the message entry textbox). Your app should now be in "Apps in this Server". Choose it and say Launch. Confirm that you're happy to trust it because you're running it for the first time

And this will then launch your activity in a window in your Discord app. It won't do anything yet because you haven't written it, but it's now loading.
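For reference, the secrets.php file created in the steps above only needs to define the two values you copied from the developer portal; the placeholders below are obviously not real credentials:

<?php
// secrets.php -- fill in the values from the Discord developer portal.
$clientid     = "YOUR_APPLICATION_ID";  // "Application ID" from General Information
$clientsecret = "YOUR_CLIENT_SECRET";   // from OAuth2 > Client Secret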

Step 2: write an activity

  • You'll need the Discord Embedded SDK JS library. Go off to jsdelivr and see the URL it wants you to use (at time of writing this is https://cdn.jsdelivr.net/npm/@discord/embedded-app-sdk@2.0.0/+esm but check). Download this URL to get a JS file, which you should call discordsdk.js. (Note: do not link to this directly. Discord activities can't download external resources without some semi-complex setup. Just download the JS file)
  • Now write the home page for your app -- index.php is likely to be ideal for this, because you need the client ID that you put in secrets.php. A very basic one, which works out who the user is, looks something like this:
<?php require_once("secrets.php"); // makes $clientid available below ?>
<html>
<body>
I am an activity! You are <output id="username">...?</output>
<script type="module">
import {DiscordSDK} from './discordsdk.js';
const clientid = '<?php echo $clientid; ?>';
async function setup() {
  const discordSdk = new DiscordSDK(clientid);
  // Wait for READY payload from the discord client
  await discordSdk.ready();
  // Pop open the OAuth permission modal and request for access to scopes listed in scope array below
  const {code} = await discordSdk.commands.authorize({
    client_id: clientid,
    response_type: 'code',
    state: '',
    prompt: 'none',
    scope: ['identify'],
  });
  const response = await fetch('/.proxy/token.php?code=' + code);
  const {access_token} = await response.json();
  const auth = await discordSdk.commands.authenticate({access_token});

  document.getElementById("username").textContent = auth.user.username;
  /* other properties you may find useful:
     server ID: discordSdk.guildId
     user ID: auth.user.id
     channel ID: discordSdk.channelId */
}
setup();
</script>
</body>
</html>

You will see that in the middle of this, we call token.php to get an access token from the code that discordSdk.commands.authorize gives you. While the URL is /.proxy/token.php, that's just a token.php file right next to index.php; the .proxy stuff is because Discord puts all your requests through their proxy, which is OK. So you need this file to exist. Following the Discord instructions for authenticating users with OAuth, it should look something like this:

<?php
require_once("secrets.php");

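// Build the OAuth2 request that exchanges the one-time code from the activity for an access token.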
$postdata = http_build_query(
    array(
        "client_id" => $clientid,
        "client_secret" => $clientsecret,
        "grant_type" => "authorization_code",
        "code" => $_GET["code"]
    )
);

$opts = array('http' =>
    array(
        'method'  => 'POST',
        'header'  => [
            'Content-Type: application/x-www-form-urlencoded',
            'User-Agent: mybot/1.00'
        ],
        'content' => $postdata,
        'ignore_errors' => true
    )
);

$context  = stream_context_create($opts);

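// POST the request to Discord's OAuth2 token endpoint.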
$result_json = file_get_contents('https://discord.com/api/oauth2/token', false, $context);
if ($result_json == FALSE) {
    echo json_encode(array("error"=>"no response"));
    die();
}

$result = json_decode($result_json, true);
if (!array_key_exists("access_token", $result)) {
    error_log("Got JSON response from /token without access_token $result_json");
    echo json_encode(array("error"=>"no token"));
    die();
}
$access_token = $result["access_token"];
echo json_encode(array("access_token" => $access_token));

And... that's all. At this point, if you Launch your activity from Discord, it should load, and should work out who the running user is (and which channel and server they're in) and that's pretty much all you need. Hopefully that's a relatively simple way to get started.

  1. it's gotta be an SPA. Discord does not like it when the page navigates around

08 July, 2025 07:11AM

hackergotchi for Qubes

Qubes

Qubes OS Summit 2025: Call for sponsors

The Qubes OS Project and 3mdeb are excited to announce the upcoming Qubes OS Summit 2025! This event will be an incredible opportunity for the community to come together, share knowledge, and discuss the future of secure computing.

Event Details:

  • Date: Fri, Sep 26, 2025 10:00 AM – Sun, Sep 28, 2025 3:00 PM GMT+2
  • Location: The Social Hub Berlin (Alexanderstraße 40, Berlin, 10179, DE)
  • Format: In-person event with online participation option featuring talks, workshops, and networking opportunities

To make this summit a success, we’re seeking sponsors who are interested in supporting our mission and engaging with our vibrant community. Sponsorship offers a unique chance to showcase your commitment to security and privacy while gaining visibility among a diverse audience of developers, researchers, and enthusiasts.

For detailed information about sponsorship opportunities, please refer to our Qubes OS Summit 2025 Sponsorship Prospectus.

We look forward to your support in making Qubes OS Summit 2025 a remarkable event!

08 July, 2025 12:00AM

July 07, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 899

Welcome to the Ubuntu Weekly Newsletter, Issue 899 for the week of June 29 – July 5, 2025. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 24.10 (Oracular Oriole) reaches End of Life on 10th July 2025
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #401
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • Ubuntu Nepal : UbuCon Asia 2025 Meetup
  • LoCo Events
  • Introducing Debcrafters
  • Ubucon Latin America 2025, Call for papers
  • Listening to contributors (code, documentation, translation, testing, etc.): participate in a feedback session
  • Call for Testing: Multipass 1.16.0 Release Candidate
  • How to get a job at Canonical
  • Other Community News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


07 July, 2025 10:08PM

hackergotchi for Purism PureOS

Purism PureOS

Trump T1 Phone Android OS vs. PureOS

Is the Trump T1 Phone Secure, Private, and Truly Made in America? The newly launched Trump T1 Phone is being marketed as a secure, privacy-respecting smartphone made in America. But while the hardware may be assembled in the U.S., the operating system—Android 15—raises significant concerns for anyone who values digital privacy, sovereignty, and freedom from surveillance capitalism.

The post Trump T1 Phone Android OS vs. PureOS appeared first on Purism.

07 July, 2025 04:59PM by Rex M. Lee

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The State of Silicon and Devices – Q2 2025 roundup

Welcome to the Q2 2025 edition of the State of Silicon and Devices by Canonical. In this quarter, we have seen momentum accelerate in edge computing, as well as growing interest in hardware platforms designed for AI, automation, and long-term maintainability. From Ubuntu Desktop arriving on Qualcomm’s Dragonwing processors, to demonstrations of RISC-V silicon running open-source LLMs, it’s clear the ecosystem is evolving toward performance, efficiency, and openness at the edge. 

In this quarter’s roundup, we will provide a brief summary of some of the major announcements from the past quarter and help you understand the underlying market trends they represent. Let’s begin by assessing how partnerships between silicon and OS vendors help downstream users within the context of cybersecurity mandates.

Silicon partnerships for mature compliance posture 

As the implementation timeline for the EU Cyber Resilience Act (CRA) progresses, device makers across the embedded and IoT ecosystem are under increased pressure to ensure long-term software maintenance, security patching, and compliance reporting across their fleets. According to the Linux Foundation’s 2025 CRA Readiness Report, the majority of device manufacturers are still uncertain about how they will meet CRA obligations, with 63% mentioning they “do not yet plan to contribute security fixes once CRA goes into effect.” This is concerning, as neglecting this responsibility risks creating a fragmented, insecure device landscape that undermines user trust and regulatory compliance. As the CRA makes clear, software maintenance is not an optional activity. OEMs and ODMs who don’t treat it as a priority risk paying a later price in legal, commercial, and reputational terms.

In this context, partnerships that bring together silicon, software, and lifecycle services are gaining traction. These partnerships bring together the different aspects of the device lifecycle, and given that the CRA mandates that manufacturers are responsible for the entire lifecycle of connected products, these partnerships are a welcome development. 

What do these partnerships entail? As an example, earlier in April, Renesas joined the Canonical Silicon Partner Program, delivering optimized Ubuntu images for the RZ family of MPUs. This integration gives OEMs/ODMs streamlined access to the production-grade Ubuntu ecosystem, from snaps and cloud-native tools to a robust stream of security updates, on RZ-based platforms. By bundling Ubuntu Pro with Renesas’ AI-accelerated hardware, developers can accelerate time-to-market for robotics, smart cameras, and industrial automation, while strengthening their compliance posture in light of the EU Cyber Resilience Act.

Despite the move towards integrated partnerships, many established device manufacturers, startups and individuals alike still rely on in-house teams to maintain their Linux builds. This is particularly concerning given the CRA’s stringent requirements. Let’s learn more about this in the context of Yocto.

Yocto Project 5.2 “Walnascar” arrives 

The Yocto Project shipped its 5.2.0 release in April 2025, featuring key upstream components like Linux kernel 6.12, GCC 14.2, glibc 2.41, LLVM 19.1.7, and over 300 additional recipe upgrades. Yocto is widely adopted in the embedded Linux ecosystem, as it offers unparalleled flexibility for developers who need full control over every aspect of their system, from bootloaders to userland packages. Its recipe-based approach allows for deep customization, making it a popular choice for highly specialized or resource-constrained devices.

However, this flexibility comes at a cost: Yocto-based systems require substantial in-house expertise to maintain, particularly over the long lifespans typical of industrial or embedded deployments. Managing updates across a bespoke Linux stack can be error-prone. A single failed update may result in device instability, necessitating costly site visits or product recalls. In the context of the CRA, these challenges become even more pronounced.

Yocto users should ensure all components are regularly patched, dependencies are secure, and update mechanisms are robust, which adds significant operational overhead. As CRA compliance drives greater scrutiny around software maintenance and security, organizations will need to reevaluate the balance between customization and maintainability, and potentially explore alternatives with long-term support and simplified update models to ensure product viability over time.

Let’s now move closer to the hardware, and see how the open ISA RISC-V keeps making waves in the ecosystem.

DeepSeek demo on ESWIN’s EIC77 RISC-V series

Global interest in RISC-V is accelerating, as the open-standard architecture holds the promise of revolutionising computing by enabling developers to customize, scale, and innovate faster. According to recent projections, RISC-V shipments will surpass 16 billion chips by 2030. That said, some challenges remain. Among those, the RISC-V ecosystem must be able to rely on a commercial-grade OS, tooling, and lifecycle support, just like mainstream architectures. 

Canonical is deeply invested in making RISC-V commercially viable, from upstream enablement and reference board support to OTA updates for security. We’ve been supporting RISC-V for a few years now – for example, by contributing upstream and enabling reference boards – but this quarter we wanted to take advantage of recent market developments in the LLM space. 

At RISC-V Summit Europe, we recently demonstrated the DeepSeek LLM 7B model running on ESWIN Computing’s EIC77 series SoC, featuring SiFive P550 cores alongside integrated NPU, GPU, and DSP accelerators. The live demo, which achieved approximately 7 tokens per second on the EIC7700X EVB, highlighted the ability of modern RISC-V platforms to support advanced NLP inference using open-source software stacks. The achieved throughput is relevant for embedded and edge environments, where resources are constrained and power efficiency is at a premium.

We believe Ubuntu’s availability on RISC-V hardware underlines its role as the go-to OS for next-gen ISA adoption. By bringing production-quality Ubuntu images to platforms like the EIC77, we want to close the gap between development and deployable products, accelerating RISC-V’s shipments to production.

Moving away from RISC-V, mature ecosystem players like Qualcomm recently made announcements over the course of the past quarter. Let’s unpack them in more detail.

Canonical’s first Ubuntu Desktop image for Qualcomm Dragonwing

This quarter, we released a public beta of Ubuntu 24.04 Desktop for Qualcomm’s Dragonwing QCS6490 and QCS5430 processors, marking the first official Ubuntu Desktop experience on this IoT-focused platform. The QCS5430 processors are mid-tier IoT solutions that combine premium connectivity, high-level performance, and edge AI-powered camera capabilities specifically for robotics, industrial handhelds, retail, cameras, and drone applications. On the other hand, the QCS6490 processors focus on 5G connectivity, scalable performance, multi-camera support, and versatile I/O for transportation and logistics, smart warehousing, retail, and manufacturing. The Ubuntu image brings GPU-accelerated graphics, multimedia support, sensor integration, and on-device ML into a unified desktop environment for vision kits and industrial displays. By enabling broader hardware reach, we are extending Ubuntu Desktop beyond PCs and into Qualcomm’s embedded and industrial IoT devices, empowering developers with a familiar UI and trusted update channels. 

Credit: Qualcomm

The move reflects a broader trend in the embedded market, one which goes beyond Canonical: the convergence of traditional GUI workloads with AI inference at the edge. This is driven by a rapid growth of edge computing and AI deployments. For instance, IDC estimates that global spending on edge computing solutions will grow by 13.8%, reaching nearly $380 billion by 2028. The convergence of traditional GUI workloads with AI inference reflects a need to keep up this pace, as developers work better when they have a single platform from prototyping to deployment. Taking the example of Ubuntu images for Dragonwing, developers can work with the same underlying operating system, packages, tooling and libraries on their desktops, servers, in the cloud, and edge computing devices.

AMD adaptive and embedded computing – new Ubuntu images

This quarter also saw the release of new Ubuntu 24.04 LTS Desktop and Server images for the full range of AMD Kria System-On-Module (SOM) boards, which are Adaptive Computing systems that include an AMD FPGA alongside the CPU.  AMD has a range of systems for different purposes, aimed at markets such as IoT, automotive, industrial control, medical devices, and other embedded-system applications.  You can easily download Ubuntu images for AMD SOM boards at our download hub.

FPGAs (Field Programmable Gate Arrays) are interesting because they operate like a “programmable CPU” – hence the “adaptive” part of adaptive computing.  They can be programmed for specific tasks like video or signal processing, cryptography, networking, or many other tasks that require high-speed parallel processing in a much more efficient and speedy fashion than doing it on a normal CPU.  They’re a flexible way to achieve optimal performance in specific tasks that might otherwise require a custom-designed chip to accomplish.

In the IoT space though, it’s never quite as simple as that.  Manufacturers like AMD often provide a reference design or developer board that ODMs use for R&D and prototyping, or a SOM (System On Module) like the Kria that can be used as a development board or integrated directly into a finished product as-is.  

However after prototyping with such a system, many ODMs opt to design their own boards that are based on the same chip sets as the reference design, but customized to their specific needs with added or removed hardware, different form factors, and so forth.  In these cases, we often engage with those customers to integrate any unique drivers or features for their designs in a customized and fully-security-maintained version of Ubuntu that can be used for production deployments, using the reference platform image as a starting point.  You’ve probably run across many devices with embedded computers running Ubuntu and didn’t even know it!  With regulatory changes such as the EU Cyber Resilience Act, which is entering into full enforcement in September 2026, long-term security maintenance of any internet-connected device will be mandatory, and that’s driving a lot of interest in commercially-supported solutions like Ubuntu. 

Concluding remarks

This quarter reinforced ongoing trends, as edge computing is maturing rapidly, and hardware vendors are moving toward platforms that support on-device intelligence with streamlined development. From embedded RISC-V chips running LLMs to automotive SoCs designed for over-the-air updates and regulatory compliance, we believe the foundation of future compute is open-source. Canonical remains committed to enabling this shift with trusted, production-grade Linux that spans from cloud to edge, across all major silicon ecosystems.

07 July, 2025 01:43PM

hackergotchi for GreenboneOS

GreenboneOS

LEV: Demystifying the New Vulnerability Metrics in NIST CSWP 41

In 2025, IT security teams are overwhelmed with a deluge of new security risks. The need to prioritize vulnerability remediation is an ongoing theme among IT security and risk analysts. In a haystack of tasks, finding the needles is imperative. Factors compounding this problem include a cybersecurity talent shortage, novel attack techniques, and the increasing […]

07 July, 2025 11:30AM by Joseph Lee

hackergotchi for VyOS

VyOS

VyOS Ansible Collection 6.0.0 release

We are happy to announce the next major 6.0.0 release of the VyOS Ansible Collection. It is now available from Ansible Galaxy and is also a certified collection for the Red Hat Ansible Automation Platform.

If you are an active Ansible user, you surely noticed that the Ansible collection for VyOS lost its momentum at some point and remained stagnant for quite some time. Earlier this year, we had the repositories transferred to our organization on GitHub, took over the development, and formed a small team of dedicated maintainers — thanks to Gaige Paulsen and Evgeny Molotkov who joined us and took up the hard work!

Now, after over two years of community work and our own improvements and fixes, a new release of the vyos.vyos Ansible collection is finally available to all users!

This release brings proper support for the current 1.4 LTS release and the upcoming VyOS 1.5.

It also still fully supports VyOS 1.3.x but will be the last release to officially support it since VyOS 1.3.x reached its end of life this year. This release may still work for VyOS 1.2.x and older, but if you run into problems, you should switch to version 5.0.0, since it was the last version to officially support those legacy versions.

07 July, 2025 09:41AM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Year two of freelancing

Introduction

It was exactly two years ago today that I left my day job as Engineering Manager of LXD at Canonical and went freelance. I wrote about the one year experience last year, so here’s another update for what happened since!

Zabbly

As a reminder, Zabbly is the company I created for my freelance work. Most of it is Incus related these days, though I also make and publish some mainline kernel builds, ZFS packages and OVS/OVN packages!

On top of that, Zabbly also owns my various ARIN resources (ASN, allocations, …) as well as my hosting/datacenter contracts.

Through Zabbly I offer a mix of by-the-hour consultation with varying prices depending on the urgency of the work (basic consultation, support, emergency support) as well as fixed-cost services, mostly related to Incus (infrastructure review, migration from LXD, remote or on-site trainings, …).

Zabbly is also the legal entity for donations related to my open source work, currently supporting:

And lastly, Zabbly also runs a Youtube channel covering the various projects I’m involved with.
That part grew quite a bit over the past year, with subscriber count up 75%, frequent live streams and release videos. The channel is now part of the YouTube Partner program.

FuturFusion

In addition to the work I’m doing through Zabbly, I’m also the CTO and co-founder of FuturFusion.

FuturFusion is focused on providing a full private cloud solution to enterprise customers, primarily those looking for an alternative to VMware. The solution is comprised of:

  • Incus clusters
  • Hypervisor OS (based on Incus OS)
  • Operations Center (provisioning, global inventory, update management, …)
  • Migration Manager (seamless VMware to Incus migrations)

While Zabbly is just a one person show, FuturFusion has a global team and offers 24/7 support.

All components of the FuturFusion Cloud suite are fully open-source (Apache 2.0).
FuturFusion customers get access to fully tested and supported builds of the software stack.

Incus

A lot has been going on with Incus over the past year!

Some of the main feature highlights are:

  • OCI application containers support
  • Automatic cluster re-balancing
  • Windows support for the VM agent
  • Linstor storage driver
  • Network address sets
  • A lot of OVN improvements (native client, ECMP for interconnect, load-balancer monitoring, ability to run isolated networks, inclusion of physical interfaces into OVN, …)
  • A lot of VM improvements (OS reporting, baseline CPU calculation, console history, import of existing QCOW2/VMDK/OVA images, live-migration of VM storage, screenshot API, IOMMU support, USB virtual devices, memory hotplug, …)

We also acquired (through Zabbly) our own MAC address prefix and transitioned all our projects over to that!

The University of Texas at Austin once again decided to actively contribute to Incus, leading to dozens of contributions by students, clearing quite a bit of our feature request backlog.

And I can’t talk about recent Incus work without talking about Incus OS. This is a recent initiative to build our own immutable OS image, just to run Incus. It’s designed to be as safe as possible and easy to operate at large scale. I recently traveled to the Linux Security Summit to talk about it.

Two more things also happened that are definitely worth mentioning, the first is the decision by TrueNAS Scale to use Incus as the built-in virtualization solution. This has introduced Incus to a LOT of new people and we’re looking forward to some exciting integration work coming very soon!

The other is a significant investment from the Sovereign Tech Fund, funding quite a bit of Incus work this year, from our work on LTS bugfix releases to the aforementioned Windows agent and a major refresh of our development lab!

NorthSec

NorthSec is a yearly cybersecurity conference, CTF and training provider, usually happening in late May in Montreal, Canada. It’s been operating since 2013 and is now one of the largest on-site CTF events in the world along with having a pretty sizable conference too.

There are two main Incus-related highlights for NorthSec this year.

First, all the on-site routing and compute was running on Incus OS.
It was still extremely early days, with this being (as far as I know) the first deployment of Incus OS on real server hardware, but it all went off without a hitch!

The second is that we leaned very hard on Infrastructure As Code this year, especially on the CTF part of the event. All challenges this year were published through a combination of Terraform and Ansible, using their respective providers/plugins for Incus. The entire CTF could be re-deployed from scratch in less than an hour and we got to also benefit from pretty extensive CI through Github Actions.

For the next edition we’re looking at moving more of the infrastructure over to Incus OS and making sure that all our Incus cluster configuration and objects are tracked in Terraform.

Conferences

Similar to last year, I’ve been keeping conference travel to a lower amount than I was once used to 🙂

But I still managed to make it to:

  • Linux Plumbers Conference 2024 (in Vienna, Austria)
    • Ran the containers & checkpoint/restore micro-conference and talked about immutable process tags
  • FOSDEM 2025 (in Brussels, Belgium)
  • Linux Storage, Filesystem, Memory Management & BPF Summit (in Montreal, Canada)
  • Linux Security Summit 2025 (in Denver, Colorado)

This will likely be it as far as conference travel for 2025 as I don’t expect to make it in person to Linux Plumbers this year, though I intend to still handle the CFP for the containers/checkpoint-restore micro-conference and attend the event remotely.

What’s next

I expect the coming year to be just as busy as this past year!

Incus OS is getting close to its first beta, opening it up to wider usage and with it, more feature requests and tweaks! We’ve been focusing on its use for large customers that get centrally provisioned and managed, but the intent is for Incus OS to also be a great fit for the homelab environment and we have exciting plans to make that as seamless as possible!

Incus itself also keeps getting better. We have some larger new features coming up, like the ability to run OCI images in virtual machines, the aforementioned TrueNAS storage driver, a variety of OVN improvements and more!

And of course, working with my customers, both through Zabbly and at FuturFusion to support their needs and to plan for the future!

07 July, 2025 05:11AM

July 04, 2025

Podcast Ubuntu Portugal: E354 Emíl.IA

We didn’t have time to record this week, but… this is the episode that WILL REVOLUTIONIZE THE STOCHASTIC PARROT INDUSTRY! A first-hand revelation and an explosive exclusive.

You know the drill: listen, subscribe and share!

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the CC0 1.0 Universal License. The sound effects in this episode carry the following licenses: Exclusivo Explosivo: Countdown-Boom.mp3 by Russintheus – https://freesound.org/s/165089/ – License: Creative Commons 0; Estática: Computer noise, VHF Ham radio 146.67 MHZ.wav by kb7clx – https://freesound.org/s/347524/ – License: Creative Commons 0; Geringonça: Sci-Fi Computer Ambience - Pure Data Patch by cryanrautha – https://freesound.org/s/333777/ – License: Creative Commons 0; Processamento da Emília: mechanical_calculator_looped_01 by joedeshon – https://freesound.org/s/714228/ – License: Creative Commons 0. This episode and the artwork used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization. The episode art was commissioned from Shizamura, an artist, illustrator and comic-book author. You can get to know Shizamura better on Ciberlândia and on her website.

04 July, 2025 12:00AM

July 03, 2025

hackergotchi for Univention Corporate Server

Univention Corporate Server

Getting IAM Right: Why Identity & Access Management Matters More Than Ever

Not sure who has access to what in your IT environment? You’re not alone. This post breaks down how Identity & Access Management (IAM) puts you back in control. With open solutions like Nubus, you manage access, protect data—and take back your digital sovereignty.

Access rights are like keys—and in many IT environments, the whole keychain’s just lying there on the counter. Disconnected user lists, manual updates, missing oversight? It’s a recipe for chaos. If you want to stay secure, keep things clear, and make life easier for your users, you need to ditch scattered account lists and clunky manual processes.

What you need is a smart, centralized Identity & Access Management (IAM) system. One that gives you full control over digital identities, roles, and permissions. In this article, we’ll show you why IAM is the backbone of any modern IT setup—and how open, privacy-compliant solutions like Nubus make all the difference, especially in the public sector.

What Is Identity & Access Management—and Why Your Organization Can't Do Without It

Identity & Access Management—aka IAM—might sound like a purely technical issue at first. But really, it comes down to a simple question: “Who gets to do what in your digital world?” Think of IAM as both the bouncer at the door and the building manager inside—it controls who gets in and makes sure everyone only enters the rooms they’re allowed to.

In practical terms, IAM manages your users’ digital identities—their accounts, roles, and permissions. It handles secure logins (that’s authentication) and decides what each person can access (that’s authorization). And yes, that distinction matters: authentication asks “Who are you?”, while authorization asks “What are you allowed to do?”

Instead of juggling scattered user lists across different systems, IAM gives you one central, automated place to manage it all. It becomes the control center for your organization—whether you’re running a public agency, a school, or a business. Once it’s up and running, IAM takes care of access to everything from email and learning platforms to cloud storage—securely, consistently, and transparently. That means less stress for your IT team—and more protection for everyone involved.

IAM as Your Digital Command Center: Authentication, Authorization & More

Most IT environments today are a patchwork of systems—email, file storage, business apps, learning platforms, cloud tools—and usually from a range of vendors. So who gets access to what? And what happens when someone changes teams, transfers schools, leaves the organization, or just needs temporary access for a project?

Without centralized control, things get messy fast. That’s where Identity & Access Management steps in—and puts an end to the digital equivalent of sticky notes and guesswork. IAM becomes your command center, giving you one place to manage all identities, roles, and access rights.

This isn’t just a “nice to have”—it’s mission-critical. Reliable access control is the backbone of your entire IT setup. It determines who can log in (authentication), what they’re allowed to see or do (authorization), and how quickly you can adjust permissions when something changes. Without that kind of control, security slips, things get chaotic, and your team ends up chasing problems instead of moving forward.

Open Standards in IAM: One Platform for All Applications

Let’s be real—your IAM system shouldn’t just manage logins. Its real power kicks in when it becomes the control center of your entire digital ecosystem. One place to handle users and plug in every app your team needs.

Take UCS@school, for example. It’s built specifically for the education sector, where things can get messy fast: You’ve got thousands of students and teachers, spread across multiple schools, switching classes, logging in from shared devices, and navigating schedules that change every semester. UCS@school keeps up: it manages user accounts and permissions in one place—and even automates school year rollovers, class groupings, and access to learning environments. No spreadsheet juggling required. And here’s where it gets even better: third-party tools like LMS platforms, cloud services, or tablet management systems can be seamlessly integrated using standard protocols.

The same goes for public sector environments, where services like intranet portals, document management systems, or VPNs all need to work together. That’s where Nubus steps in—giving you one IAM platform to replace a patchwork of access rules and outdated workflows.

The secret? Open standards like SAML, OpenID Connect, SCIM, and LDAP. These make it easy to plug in whatever you need—without vendor lock-in or licensing nightmares. Proprietary IAM systems often come with limited or expensive interfaces, turning what should be simple into a headache.

Nubus takes a different path: it’s built with openness and future-proofing in mind. That means more freedom, less stress for your IT team—and no more duct-taping your infrastructure together.

Understanding and Automating Role-Based Access Control (RBAC)

Not every user needs access to everything—and that’s exactly how it should be. Teachers need class rosters and access to computer labs. Students need their learning platforms. Admin staff handle sensitive data and workflows. And your IT team? They need an all-access pass.

A good IAM system makes this easy by letting you define roles—each one tied to a specific set of permissions. Assign someone a role, and bam: they get the right access, automatically. No need to manage it all by hand. Even better? Role assignments can be dynamic. That means they can be triggered by user attributes like department, group membership, or job title. It’s smart, flexible, and saves a ton of admin time.

In schools, for example: when a student moves to a new class, their access to digital materials, folders, and class systems updates automatically. Same for teachers —when they pick up a new class, the IAM system gives them what they need, no IT tickets required.

In public administration, it’s just as useful: new department? New role. New tools. New rights—all handled automatically. No manual updates. No security gaps. Just clean, consistent access that follows your people, not the other way around.
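To make that concrete, here is a hedged illustration (the role names, attributes, and permissions below are invented, not Nubus’s actual data model or API): attribute-driven RBAC boils down to deriving a role from user attributes and mapping that role to a set of permissions.

<?php
// Hypothetical sketch of attribute-driven RBAC; all names are illustrative,
// not Nubus's actual data model or API.
$rolePermissions = [
    'teacher' => ['read:class-rosters', 'write:grades', 'use:computer-lab'],
    'student' => ['use:learning-platform'],
    'clerk'   => ['read:case-files', 'write:case-files'],
    'it'      => ['*'],
];

// Derive a role from user attributes (e.g. as synced from a directory).
function assignRole(array $user): string
{
    if ($user['department'] === 'IT') {
        return 'it';
    }
    if (in_array('teaching-staff', $user['groups'], true)) {
        return 'teacher';
    }
    if (in_array('students', $user['groups'], true)) {
        return 'student';
    }
    return 'clerk';
}

$user = ['department' => 'School A', 'groups' => ['teaching-staff']];
$permissions = $rolePermissions[assignRole($user)];
print_r($permissions); // the role, and therefore the permissions, follow the attributes

Swap the hard-coded attributes for values synced from your directory via LDAP or SCIM, and a change of class or department automatically changes what the user can reach.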

IAM as the Strategic Backbone of a Secure IT Infrastructure

With a central Identity & Access Management system, you stay in control. User accounts, roles, and access rights all managed in one place—automatically, securely, and with everything traceable and logged. That not only saves time day to day but also prevents the kind of permission errors that can lead to real trouble.

At the same time, IAM sets the stage for user-friendly features like Single sign-on (SSO), two-factor authentication (2FA), and self-service portals. No more juggling logins—users sign in once and access everything they need. They can reset their passwords or update personal info themselves, without bothering IT.

But IAM is more than just a management tool—it’s a key to digital sovereignty. When you control identities and access, you can also decide where and how your IT runs: in a GDPR-compliant EU cloud, in your own municipal data center, or as part of a federated platform. Especially for public sector organizations, that flexibility is critical—for data protection, transparency, and long-term independence.

That’s exactly what Nubus delivers. As an Open Source IAM platform built by Univention, Nubus was designed from the ground up with integration, automation, and security in mind. It gives you full control over your identities, works with open standards, and scales with your needs. Ready for an IAM solution made in Germany—transparent, privacy-compliant, and future-ready? Nubus puts you back in charge.

Digital sovereignty starts with the right IAM. Let’s talk about what your organization needs.

Der Beitrag Getting IAM Right: Why Identity & Access Management Matters More Than Ever erschien zuerst auf Univention.

03 July, 2025 11:03AM by Yvonne Ruge

hackergotchi for Volumio

Volumio

Breaking the Algorithm: How CORRD is Revolutionizing Music Discovery

When did you last discover a song that gave you goosebumps? If you’re like most music lovers today, it’s probably been longer than you’d care to admit.

The Modern Music Discovery Paradox

We live in an unprecedented era of musical abundance. With over 100 million songs available across multiple streaming platforms, we have virtually every piece of recorded music at our fingertips. Yet paradoxically, many of us feel musically starved, trapped in endless loops of familiar tracks, struggling to break free from the algorithmic echo chambers that define our listening experience.

This contradiction sparked us at Volumio to create CORRD, our latest innovation that represents a bold new direction for music discovery.

 

When Infinite Choice Becomes No Choice

The problem isn’t a lack of music, it’s the way we discover it. Modern recommendation algorithms, despite their sophistication, operate as impenetrable black boxes. They analyze our listening habits and serve up “more of the same,” creating comfortable but creatively limiting feedback loops.

After we spoke with hundreds of music enthusiasts over the past year, a clear pattern emerged: despite having access to more music than any generation in history, people consistently reported that their most memorable musical discoveries happened years ago, often through human recommendation or serendipitous encounters.

Introducing CORRD: Discovery by Design

At Volumio, we decided to tackle this challenge head-on. CORRD represents our fundamental shift in how we approach music discovery. Rather than accepting algorithmic limitations, we’ve created the first platform that puts you in control of how recommendations work.

What Makes Our CORRD Different

Moment Driven Discovery: We designed CORRD to organize music around the moments in your life. Whether you’re hitting the gym, cooking dinner, driving to work, or winding down for the evening, CORRD crafts soundtracks that enhance these experiences with fresh music tailored to each moment.

Algorithmic Transparency: For the first time, we’re giving you the power to fine-tune how recommendations work. Want to explore underground artists? Prefer upbeat over melancholic? Simply tell CORRD what you’re looking for, and it will surface music that matches your exact preferences while prioritizing discovery over familiarity.

Quality Without Compromise: As the team behind Volumio, we couldn’t compromise on audio excellence. When streaming to Volumio devices at home, you enjoy bit-perfect playback quality that honors your music’s full sonic potential.


What Our CORRD Isn’t

We intentionally designed CORRD to break away from traditional music app conventions. There’s no endless browsing or searching; instead, CORRD guides you down a discovery path. It’s not a playlist generator requiring external apps for playback; everything happens within CORRD itself. Most importantly, we designed it for music lovers of all technical backgrounds, not just audiophiles.

The Volumio Connection

CORRD isn’t just a standalone app; it’s the foundation of a broader technological platform that will enhance the entire Volumio ecosystem. This mobile-first approach extends our mission of delivering exceptional musical experiences beyond the home.

The synergy is purposeful: CORRD excels at discovery and mobile playback, while Volumio provides the ultimate home listening experience. Together, they create a seamless journey from discovery to high fidelity enjoyment.


Join Us in the Discovery Revolution

CORRD represents months of dedicated development from our team at Volumio, but we recognize this is just the beginning. The future of music discovery will be shaped by the community of music lovers who embrace and help us refine this new approach.

We’re inviting you to be part of this evolution with us. Explore CORRD, push its boundaries, and share your feedback with us. Your insights will directly influence how we develop this platform to serve the global community of music enthusiasts.

Special Launch Offer: As a thank you to our Volumio community, we’re giving all users (both free and premium) exclusive access to CORRD with a special discount valid through July 31st.

Ready to Rediscover Music?

The question isn’t whether great new music exists, it’s whether you have the right tools to find it. CORRD represents our answer: a discovery platform that respects your agency, enhances your moments, and rekindles the joy of musical exploration.
Install CORRD Now

What will you discover?


The Team at Volumio

The post Breaking the Algorithm: How CORRD is Revolutionizing Music Discovery appeared first on Volumio.

03 July, 2025 08:53AM by Alia Elsaady

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: JetPack 4 EOL – how to keep your userspace secure during migration

NVIDIA JetPack 4 reached its end-of-life (EOL) in November 2024, marking the end of security updates for this widely deployed stack. JetPack 4 has driven innovation in countless devices powered by NVIDIA Jetson, serving as the foundation of edge AI production deployments across multiple sectors.

But now, the absence of security maintenance creates risk for businesses with deployed JetPack 4-based deployments. Devices running unmaintained software can quickly become vulnerable to known exploits. This blog will walk through your options for maintaining security with Ubuntu Pro, and outline the path toward modernizing your stack with the certified NVIDIA Jetson Orin series.

What does JetPack 4’s EOL mean for you?

As of November 2024, JetPack 4 is no longer maintained by NVIDIA. This means the end of security updates, bug fixes, and support for all JetPack 4.x versions and corresponding Jetson Linux releases. 

This marks an important shift. JetPack 4 users are now fully responsible for maintaining the security of their own deployments or upgrading to newer JetPack versions. Without vendor-backed updates, maintaining system integrity becomes increasingly difficult, especially for production fleets.

For organizations that want to continue using JetPack 4 in deployment, this raises two pressing questions:

  • What’s the best migration path to a supported, future-proof platform?
  • How do you keep existing devices secure in the short term? 

Long term solution: Ubuntu certified with +10 LTS 

Your best long-term path is migration. The Jetson Orin series is the future of AI at the edge – and now comes with official Ubuntu certification.

Canonical and NVIDIA have collaborated to deliver out-of-the-box support for NVIDIA Jetson Orin platforms. Whether you’re using Jetson Orin Nano, Jetson Orin NX, or Jetson AGX Orin, certified Ubuntu brings:

  • Unparalleled performance: Harness the full potential of the NVIDIA Jetson platform’s AI capabilities with optimized Ubuntu images.
  • Enterprise-ready security: Benefit from Ubuntu’s robust security features and long-term support.
  • Seamless development to deployment: Experience a unified environment from cloud to edge, streamlining the AI development process.
  • Reliable stability on certified hardware: Canonical’s QA team performs an extensive set of over 500 OS compatibility-focused hardware tests to ensure that every aspect of the system is checked and verified for the best Ubuntu experience.

With Ubuntu on Jetson Orin, developers gain a consistent and modern Linux experience – the same tools used in cloud and desktop environments are now available on the edge. This continuity simplifies development, accelerates time to market, and maximises device lifespan.

Learn more: Ubuntu now officially supports NVIDIA Jetson – powering the future of AI at the edge

Short term solution: Ubuntu Pro and ESM

Immediate migration to Jetson Orin may not be feasible for every team. Legacy deployments, supply constraints, or integration timelines can slow things down.  

JetPack 4 is built upon Ubuntu 18.04 LTS. This Ubuntu userspace includes more than 23,000 packages – such as Python, OpenSSL, systemd, and bash – that are integral to the system’s functionality and to the applications you have developed. The standard support window for these packages ended in 2023, so they are no longer receiving security maintenance. 

Canonical’s Ubuntu Pro offers Expanded Security Maintenance (ESM) for Ubuntu LTS releases, ensuring continued security updates beyond the standard support window. For JetPack 4 users, enabling Ubuntu Pro gives access to updates from the ESM PPA until 2028, so that packages from the Ubuntu userspace remain secure. 

These updates provide critical security patches for the Ubuntu 18.04 LTS userspace only. The JetPack 4 kernel (based on the Linux kernel 4.9) remains outside of the scope of the service – so migration should still be the long term goal – but ESM can help you maintain operational security and system stability while you transition to newer platforms.

Ubuntu Pro also includes popular applications such as the Robot Operating System (ROS), where Canonical provides services such as ROS ESM for the upcoming EOL of ROS 1 Noetic.

| Remember to always test your updates before deploying them to end devices. 

Small number of devices: purchase ESM through the Ubuntu Pro store

For a few devices, purchasing ESM directly through the Ubuntu Pro Store is straightforward. Simply go to the Ubuntu Pro Store and complete your purchase. 

Go to the Store

Larger fleets: purchase ESM through Canonical’s Ubuntu Pro for Devices

Ubuntu Pro for Devices grants you access to the Ubuntu Pro subscription (and so to ESM), applying a beneficial discount-based model depending on your compute module.

Get in touch with a sales representative to get Ubuntu Pro through our Devices plan.

How to activate ESM

Security updates during ESM are accessed via a dedicated repository. This requires a subscription token, which you can get through your Ubuntu Pro account after subscribing.

To enable ESM, follow the instructions in your welcome email:

  1. Install the Ubuntu Pro client
  2. Attach your token to your JetPack 4 machine
  3. Activate ESM
  4. Run apt upgrade to install the available updates (see the example commands below)

For more detailed instructions, please visit the official documentation of Ubuntu Pro.
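As a concrete illustration, the flow on a JetPack 4 device typically looks like the minimal sketch below. The token is a placeholder from your Ubuntu Pro account, and on older versions of the client the command is ua rather than pro:

sudo apt update && sudo apt install -y ubuntu-advantage-tools   # install or update the Ubuntu Pro client
sudo pro attach <YOUR_PRO_TOKEN>                                # attach your subscription token
sudo pro enable esm-infra                                       # ESM updates for packages in main
sudo pro enable esm-apps                                        # ESM updates for universe packages, if included in your plan
sudo apt update && sudo apt upgrade                             # install the available ESM security updates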

Summary

With the JetPack 4 EOL, businesses must proactively manage their devices. Running an unsupported release introduces security risks no organization can afford. Although migrating to a newer, Ubuntu Certified Jetson Orin remains our primary recommendation, we acknowledge the challenges involved. When migration isn’t immediately feasible, activating ESM keeps the Ubuntu 18.04 userspace under security maintenance while you plan the transition.

Get in touch for tailored advice on the best migration or support options for your organization. 

03 July, 2025 08:30AM

hackergotchi for Deepin

Deepin

Next-Gen Compatibility | Linyaps: Universal Adaptability, Deep Ecosystem Roots

In the evolution of the Linux ecosystem, the compatibility and distribution efficiency of software package management have long been critical challenges. After years of deep technical investment, deepin introduces Linyaps—an independent package management toolkit incubated by the OpenAtom Foundation. Centered on "cross-distro compatibility, sandbox-based security, and minimal dependencies", it leverages cutting-edge technology and ecosystem integration to resolve long-standing Linux software challenges—like dependency conflicts and fragmented distribution—revolutionizing the experience of software development, management, and deployment. This article delves into the historical challenges of traditional package management and explores Linyaps' core technical architecture, ecosystem achievements, and usage guide. Solving the Historical ...Read more

03 July, 2025 06:27AM by xiaofei

hackergotchi for Ubuntu developers

Ubuntu developers

Stuart Langridge: A (limited) defence of footnotes

So, Jake Archibald wrote that we should "give footnotes the boot", and... I do not wholly agree. So, here are some arguments against, or at least perpendicular to. Whether this is in grateful thanks of or cold-eyed revenge about him making me drink a limoncello and Red Bull last week can remain a mystery.

Commentary about footnotes on the web tends to boil down into two categories: that they're foot, and that they're notes. Everybody[1] agrees that being foot is a problem. Having a meaningless little symbol in some text which you then have to scroll down to the end of a document to understand is stupid. But, and here's the point, nobody does this. Unless a document on the web was straight up machine-converted from its prior life as a printed thing, any "footnotes" therein will have had some effort made to conceptually locate the content of the footnote inline with the text that it's footnoting. That might be a link which jumps you down to the bottom, or it might be placed at the side, or it might appear inline when clicked on, or it might appear in a popover, but the content of a "footnote" can be reached without your thread of attention being diverted from the point where you were previously[2].

He's right about the numbers[3] being meaningless, though, and that they're bad link text; the number "3" gives no indication of what's hidden behind it, and the analogy with "click here" as link text is a good one. We'll come back to this, but it is a correct objection.

What is a footnote, anyway?

The issue with footnotes being set off this way (that is: that they're notes) isn't, though, that it's bad (which it is), it's that the alternatives are worse, at least in some situations. A footnote is an extra bit of information which is relevant to what you're reading, but not important enough that you need to read it right now. That might be because it's explanatory (that is: it expands and enlarges on the main point being made, but isn't directly required), or because it's a reference (a citation, or a link out to where this information was found so it can be looked up later and to prove that the author didn't just make this up), or because it's commentary (where you don't want to disrupt the text that's written with additions inline, maybe because you didn't write it). Or, and this is important, because it's funnier to set it off like this. A footnote used this way is like the voice of the narrator in The Perils of Penelope Pitstop being funny about the situation. Look, I'll choose a random book from my bookshelf[4], Reaper Man by Terry Pratchett.

A photograph of a book page. Most of the text is a little blurred to distract attention from it. Midway down the page, unblurred text reads: 'Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge.¹ She could have a revelation in a panful of frying bacon.' At the bottom of the page is the text referenced by the footnote marker, which reads: '¹ It would say, for example, that you would shortly undergo a painful bowel movement.'

This is done because it's funny. Alternatives... would not be funny.[5]

If this read:

Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge. (It would say, for example, that you would shortly undergo a painful bowel movement.) She could have a revelation in a panful of frying bacon.

then it's too distracting, isn't it? That's giving the thing too much prominence; it derails the point and then you have to get back on board after reading it. Similarly with making it a long note via <details> or via making it <section role="aside">, and Jake does make the point that that's for longer notes.

Even the industrial-grade crystal ball was only there as a sop to her customers. Mrs Cake could actually read the future in a bowl of porridge.

Note: It would say, for example, that you would shortly undergo a painful bowel movement.

She could have a revelation in a panful of frying bacon.

Now, admittedly, half the reason Pratchett's footnotes are funny is because they're imitating the academic use. But the other half is that there is a place for that "voice of the narrator" to make snarky asides, and we don't really have a better way to do it.

Sometimes the parenthesis is the best way to do it. Look at the explanations of "explanatory", "reference", and "commentary" in the paragraph above about what a footnote is. They needed to be inline; the definition of what I mean by "explanatory" should be read along with the word, and you need to understand my definition to understand why I think it's important. It's directly relevant. So it's inline; you must not proceed without having read it. It's not a footnote. But that's not always the case; sometimes you want to expand on what's been written without requiring the reader to read that expansion in order to proceed. It's a help; an addition; something relevant but not too relevant. (I think this is behind the convention that footnotes are in smaller text, personally; it's a typographic convention that this represents the niggling or snarky or helpful "voice in your head", annotating the ongoing conversation. But I haven't backed this up with research or anything.)

What's the alternative?

See, this is the point. Assume for the moment that I'm right[6] and that there is some need for this type of annotation -- something which is important enough to be referenced here but not important enough that you must read it to proceed. How do we represent that in a document?

Jake's approaches are all reasonable in some situations. A note section (a "sidebar", I think newspaper people would call it?) works well for long clarifying paragraphs, little biographies of a person you've referenced, or whatever. If that content is less obviously relevant then hiding it behind a collapsed revealer triangle is even better. Short stuff which is that smidge more relevant gets promoted to be entirely inline and put in brackets. Stuff which is entirely reference material (citations, for example) doesn't really need to be in text in the document at all; don't footnote your point and then make a citation which links to the source, just link the text you wrote directly to the source. That certainly is a legacy of print media. There are annoying problems with most of the alternatives (a <details> can't go in a <p> even if inline, which is massively infuriating; sidenotes are great on wide screens but you still need to solve this problem on narrow, so they can't be the answer alone.) You can even put the footnote text in a tooltip as well, which helps people with mouse pointers or (maybe) keyboard navigation, and is what I do right here on this site.

But... if you've got a point which isn't important enough to be inline and isn't long enough to get its own box off to the side, then it's gotta go somewhere, and if that somewhere isn't "right there inline" then it's gotta be somewhere else, and... that's what a footnote is, right? Some text elsewhere that you link to.

We can certainly take advantage of being a digital document to display the annotation inline if the user chooses to (by clicking on it or similar), or to show a popover (which paper can't do). But if the text isn't displayed to you up front, then you have to click on something to show it, and that thing you click on must not itself be distracting. That means the thing you click on must be small, and not contentful. Numbers (or little symbols) are not too bad an approach, in that light. The technical issues here are dispensed with easily enough, as Lea Verou points out: yes, put a bigger hit target around your little undistracting numbers so they're not too hard to tap on, that's important.

But as Lea goes on to say, and Jake mentioned above... how do we deal with the idea that "3" needs to be both "small and undistracting" but also "give context so it's not just a meaningless number"? This is a real problem; pretty much by definition, if your "here is something which will show you extra information" marker gives you context about what that extra information is, then it's long enough that you actually have to read it to understand the context, and therefore it's distracting.[7] This isn't really a circle that can be squared: these two requirements are in opposition, and so a compromise is needed.

Lea makes the same point with "How to provide context without increasing prominence? Wrapping part of the text with a link could be a better anchor, but then how to distinguish from actual links? Perhaps we need a convention." And I agree. I think we need a convention for this. But... I think we've already got a convention, no? A little superscript number or symbol means "this is a marker for additional information, which you need to interact with[8] to get that additional information". Is it a perfect convention? No: the numbers are semantically meaningless. Is there a better convention? I'm not sure there is.

An end on't

So, Jake's right: a whole bunch of things that are currently presented on the web as "here's a little (maybe clickable) number, click it to jump to the end of the document to read a thing" could be presented much better with a little thought. We web authors could do better at this. But should footnotes go away? I don't think so. Once all the cases of things that should be done better are done better, there'll still be some left. I don't hate footnotes. I do hate limoncello and Red Bull, though.

  1. sensible
  2. for good implementations, anyway; if you make your footnotes a link down to the end of the document, and then don't provide a link back via either the footnote marker or by adding it to the end, then you are a bad web author and I condemn you to constantly find unpaired socks, forever
  3. or, ye gods and little fishes, a selection of mad typographic symbols which most people can't even type and need to be copied from the web search results for "that little squiggly section thingy"
  4. alright, I chose a random Terry Pratchett book to make the point, I admit; I'm not stupid. But it really was the closest one to hand; I didn't spend extra time looking for particularly good examples
  5. This is basically "explaining the joke", something which squashes all the humour out of it like grapes in a press. Sorry, PTerry.
  6. I always do
  7. I've seen people do footnote markers which are little words rather than numbers, and it's dead annoying. I get what they're trying to do, which is to solve this context problem, but it's worse
  8. you might 'interact' with this marker by clicking on it in a digital document, or by looking it up at the bottom of the page in a print doc, but it's all interaction

03 July, 2025 06:12AM

July 02, 2025

Ubuntu Blog: Source to production: Spring Boot containers made easy

This blog is contributed by Pushkar Kulkarni, a Software Engineer at Canonical.

Building on the rise in popularity of Spring Boot and the 12-factor paradigm, our Java offering also includes a way to package Spring workloads in production-grade, minimal, well-organized containers with a single command. This way, any developer can generate production-grade container images without intricate knowledge of software packaging.

This is possible thanks to Rockcraft, a command-line tool for building container images, and its related Spring Boot extension – a set of pre-packaged configurations that encapsulate common needs for specific application types or technologies.

Creating containers becomes as simple as running the rockcraft init --profile spring-boot-framework command and pointing the resulting configuration file to your project folder. This makes building containers in CI or on developer machines easy, fast, and predictable.

The foundation: Rockcraft, Profiles, and Pebble

Under the hood, the aforementioned command leverages the following Canonical open source tools:

  • Rockcraft is an open source tool developed by Canonical for building secure, stable, and OCI-compliant container images based on Ubuntu. It is designed to simplify and standardize the process of creating production-grade container images, thanks to a declarative configuration and predictable image structure. By default all containers have Pebble as the entrypoint.
  • Profile is a configuration option for Rockcraft that tailors the project structure and configuration files to a specific framework or use case. The spring-boot-framework extension dynamically determines which plugin to use to build the rock: depending on whether a pom.xml or a build.gradle file is present, it uses the maven or the gradle plugin, respectively.
  • Pebble is the default entrypoint for all generated containers. Pebble is a lightweight Linux service manager that helps you orchestrate a set of local processes as an organized set. It resembles popular tools such as supervisord, runit, or s6, in that it can easily manage non-system processes independently from the system services. However, it was designed with unique features such as layered configuration and an HTTP API that help with more specific use cases like log forwarding and health checks.

By default, the spring-boot-framework extension uses the Ubuntu default-jdk package to build the rock, which means that a different Java JDK version is used depending on the build base. To provide an efficient runtime for Java, the extension calls the Jlink plugin to trim out any unused parts of the JDK. This reduces the size of the rock and improves performance.

You can read more about Rockcraft and Pebble in the respective official product documentation.

Setting up your development environment

The first step is to install and initialize Rockcraft and LXD. The latter is used by Rockcraft to provide isolated and reproducible build environments, without the need to pollute the host.

sudo snap install lxd
lxd init --auto
sudo snap install rockcraft --classic --channel latest/edge

If you already have a fully tested jar, that’s all you need to get started packaging your application as a production-grade container.

If not, thanks to the OpenJDK packages in the Ubuntu archive and the newly released Devpack for Spring snap, you get a fully functional Spring Boot development environment simply by running the following two commands:

sudo snap install devpack-for-spring --classic
sudo apt update && sudo apt install -y openjdk-21-jdk
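As a quick sanity check of the resulting environment (a minimal sketch; exact version strings will differ):

java -version                  # should report an OpenJDK 21 runtime
snap list devpack-for-spring   # confirms the snap is installed and which channel it tracks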

Building production grade containers in one command

Start by creating a project file. Rockcraft will automate its creation and tailor it for a Spring Boot application when we tell it to use the spring-boot-framework profile:

rockcraft init --profile spring-boot-framework

This command generates the following rockcraft.yaml file, where the only thing left to do is to set the name field and uncomment your host architecture.

name: spring
# see https://documentation.ubuntu.com/rockcraft/en/latest/explanation/bases/
# for more information about bases and using 'bare' bases for chiselled rocks
base: bare # as an alternative, a ubuntu base can be used
build-base: ubuntu@24.04 # build-base is required when the base is bare
version: '0.1' # just for humans. Semantic versioning is recommended
summary: A summary of your application # 79 char long summary
description: |
  This is spring's description. You have a paragraph or two to tell the
  most important story about it. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the
  container registries out there.
# the platforms this rock should be built on and run on.
# you can check your architecture with `dpkg --print-architecture`
platforms:
  amd64:
  # arm64:
  # ppc64el:
  # s390x:

# to ensure the spring-boot-framework extension functions properly, your
# Spring Boot project should have either a mvnw or a gradlew file.
# see https://documentation.ubuntu.com/rockcraft/en/latest/reference/extensions/spring-boot-framework
# for more information.
# +-- spring
# |  |-- gradlew # either one of the two files should be present
# |  |-- mvnw    # either one of the two files should be present

extensions:
  - spring-boot-framework

# uncomment the sections you need and adjust according to your requirements.
# parts:

#  spring-boot-framework/gradle-init-script:
#    override-build: |
#      cp  ${CRAFT_STAGE}

#  spring-boot-framework/install-app:
#    # select a specific Java version to build the application. Otherwise the
#    # default-jdk will be used.
#    build-packages:
#    - default-jdk

#  spring-boot-framework/runtime:
#    # select a specific Java version to run the application. Otherwise the
#    # default-jdk will be used. Note that the JDK should be used so that Jlink
#    # tool can be used to create a custom runtime image.
#    build-packages:
#    - default-jdk 

We are now ready to pack the rock, which can be done with:

rockcraft pack

Once Rockcraft has finished packing the Spring Boot application, we’ll find a new file in the working directory (an OCI image) with the .rock extension. You are now able to deploy the newly created container image on the platform of your choice.
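As a rough sketch of that last step, you can load the OCI archive into a local Docker daemon with the skopeo tool shipped inside the rockcraft snap and run it. The file and image names below assume the default name and version from the rockcraft.yaml above and an amd64 build, and port 8080 is the Spring Boot default; adjust these to match your project:

sudo rockcraft.skopeo --insecure-policy copy \
  oci-archive:spring_0.1_amd64.rock docker-daemon:spring:0.1   # copy the rock into Docker
docker run --rm -p 8080:8080 spring:0.1                        # run it; Pebble starts the Spring Boot service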

Learn more about Canonical builds of OpenJDK

02 July, 2025 02:09PM

hackergotchi for Purism PureOS

Purism PureOS

After a week, Trump Mobile drops claim that the T1 Phone is “Made in the USA”

Trump Phone’s “Made in USA” claims raised eyebrows a week ago; today it has been confirmed that the device is not made in the USA, according to Ars Technica (link to article).

The post After a week, Trump Mobile drops claim that the T1 Phone is “Made in the USA” appeared first on Purism.

02 July, 2025 11:23AM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Spring support available on Ubuntu

This blog is contributed by Vladimir Petko, a Software Engineer at Canonical.

The release of Plucky Puffin earlier this year introduced the availability of the devpack for Spring, a new snap that streamlines the setup of developer environments for Spring on Ubuntu.

In this blog, we’ll explain what devpacks are and provide an overview of what the devpack for Spring brings to users. Whether you’re a seasoned user of Spring, or want to explore what it can do, in this blog we’ll provide you with the overview you need to unlock the powerful functionality that the devpack for Spring offers.

Introducing devpacks

Devpacks are snaps with collections of tools that assist in setting up, maintaining, and publishing software projects. They provide tooling such as linters and formatters to maintain source code, as well as integrations with packaging tools such as Rockcraft to deploy applications to containers and create a self-contained build environment for them. The intention of devpacks is to lower the barrier of entry for developers looking to build software according to best practices.

What is devpack for Spring?

For those who aren’t familiar, Spring is a popular open-source Java framework designed to help developers build robust enterprise Java applications. According to a 2023 report by JetBrains, 72% of respondents reported using Spring as their framework of choice. 

Experienced users may already be familiar with Spring CLI, a command line productivity tool that lowers the barrier to entry for starting a new Spring project, as well as for adding new features and performing day-to-day maintenance. It provides:

  • A boot start command to bootstrap a new Spring project.
  • A boot new command to clone an existing project. 
  • A boot add to merge the current project with an existing one.
  • The ability to declare user-specific commands to perform tasks such as controller creation, dependency addition, and file configuration.

The devpack for Spring is a classic confinement snap that packages Spring CLI and brings it to Ubuntu. In addition to the upstream functionality, devpack for Spring offers the following features:

  • Offline installation of Spring project libraries: these pre-installed Spring libraries reduce the initial build time, provide a consistent set of dependencies and simplify setting up offline builds.
  • Pre-configured plugins for Maven and Gradle projects: this provides a consistent set of defaults across the organisation for source code formatting, static analysis and other best practices and policies. This functionality also reduces the boilerplate in the projects, and allows central upgrades of defaults without updating every project.

A special use case we’d like to highlight is that you can export build and runtime images for your projects. Whilst the current version of this devpack only supports exporting build and runtime Ubuntu rocks, we will also look into integrating JIB and Spring Boot Cloud Native Buildpack plugins to generate runtime container images for Spring applications. 

Let’s walk through the devpack for Spring, focusing on the added features. Or if you prefer, there is a video walkthrough:

Getting started with devpack for Spring

Installation

At each stage of this demo, we’ve provided a timestamped video link, for you to see the features in action. However, you can still follow the demo without the videos, if you wish. Let’s dive in.

Devpack for Spring is a snap with classic confinement and can be installed with 

snap install devpack-for-spring --classic --edge

We will bootstrap a new project for the demo:

devpack-for-spring boot start

This command uses Spring Initializr to generate a new Spring Boot project. For this demo we will use Gradle, the default Java 21, and Spring Boot 3.4.4.

Installed and available offline libraries

Devpack for Spring provides prebuilt binaries of Spring Project libraries as snaps. They are configured to be used with Gradle and Maven builds.

This command lists the installed and available offline libraries:

devpack-for-spring snap list

The Spring Boot project that was created uses Spring Boot 3.4.4, which depends on Spring Framework 6.2.5.

This command installs available libraries:

devpack-for-spring snap install

The command removes the installed libraries:

devpack-for-spring snap remove

Building the project

Let’s build the project and make some changes to the source code: 

We will add the dependency on spring-boot-starter-web and implement a REST controller. 

With devpack for Spring, we can run pre-configured build plugins without making build configuration changes to Gradle projects. In the current version, the list is limited to the io.spring.javaformat and io.github.rockcrafters.rockcraft plugins.

The io.spring.javaformat plugin formats the project source code according to the Spring coding standards. 

The io.github.rockcrafters.rockcraft plugin provides the ability to create OCI images for building and running the Spring application using Ubuntu Rocks.

Running

devpack-for-spring list-plugins

lists the plugins configured in devpack-for-spring:

Run

devpack-for-spring plugin format

to format the source code of the project:

Creating a build container

The Rockcraft plugin allows you to create a build container for the project – an OCI image that contains the build toolchain and project dependencies. The plugin needs the rockcraft snap installed:

snap install rockcraft --classic

devpack-for-spring includes Rockcraft plugin functionality to store dependencies offline:

devpack-for-spring plugin dependencies

The output is stored in target/build-rock/dependencies/ for Maven and build/build-rock/dependencies for Gradle.

The Rockcraft plugin generates a chiseled build image – an OCI image that includes a headless JDK, the build system (Gradle or Maven), and project dependencies.

The image can be uploaded to the local Docker daemon with:

devpack-for-spring plugin rockcraft push-build-rock

The image name is build-<your-project-name>

The build image is a chiselled Ubuntu by default with slices for openjdk, busybox, and git.

It provides a minimal set of dependencies to build the project:

docker run -v `pwd`:`pwd` \
  --user $(id -u):$(id -g) \
  --env PEBBLE=/tmp build-<your-project-name>:latest \
  exec build `pwd` --no-daemon

Visual Studio Code provides an extension to develop inside dev containers – images that contain all the tools necessary to develop the application. The extension adds a Visual Studio Code server to the image and runs it.

To run the Visual Studio Code server, we will need to use an Ubuntu base for the rock. The Rockcraft plugin allows you to do this by adding a build-rock/rockcraft.yaml file that contains the overrides for the default build container:

name: build-demo-dev
base: ubuntu@24.04

environment:
  JAVA_HOME: /usr/lib/jvm/java-21-openjdk-${CRAFT_ARCH_BUILD_FOR}

parts:
  dependencies:
    plugin: nil
    stage-packages:
        - openjdk-21-jdk
    override-build: craftctl default

To use the buildRockcraft extension in build.gradle, we will also need to add the plugin:

plugins {
  id("io.github.rockcrafters.rockcraft") version "1.1.0"
...
}

buildRockcraft {
  rockcraftYaml = "build-rock/rockcraft.yaml"
}

A future version of devpack-for-spring will allow you to do it with a single command. 

To use the build rock as a devcontainer in VS Code, add the following .devcontainer.json to your project:

{
  "image" : "<your-build-rock-image>:latest",
  "containerUser": "ubuntu"
}

devpack-for-spring can also deploy your application in a chiselled rock container.

To build the chiselled runtime image of the Spring Boot application, run:

devpack-for-spring plugin rockcraft build-rock

To push the image to the local docker daemon, execute:

devpack-for-spring plugin rockcraft push-rock

The image is tagged as <your-project-name>:latest and <your-project-name>:<your-project-version>.
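As a small example of running the pushed image locally (the port and endpoint are assumptions based on the Spring Boot default and the REST controller added earlier):

docker run --rm -p 8080:8080 <your-project-name>:latest   # Pebble, the default entrypoint, starts the Spring Boot service
curl http://localhost:8080/                                # exercise the application from another terminal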

See video of building and running the demo application in a chiseled container:

Future developments

The functionality above is just the beginning. We’re excited to bring the benefits of Spring to developers on Ubuntu, and we hope that this new feature empowers our users to kickstart some exciting new projects.

Our next step will be to expand plugin capabilities. We’ll provide refactorings to enable and configure build plugins using openrewrite, and select and pre-configure a set of plugins to improve the quality of life. This means that devpack for Spring will provide an opinionated choice for formatting, code analysis, dependency checking, and reproducible builds. The configuration will also be able to accept pre-configured file snippets to define additional configuration such as checkstyle checks and Rockcraft image overrides files. Ultimately, the goal of this preconfiguration is to make the Spring environment on Ubuntu enjoyable and easy to use.

Another major area we are exploring is development environment setup and validation. With devpack for Spring we intend to offer a choice of setting up the JDK, build system, IDE, and container environment.

As an organization dedicated to open source values, we rely on our community to help us improve and develop our projects. We’d like to invite all of our readers to try out the devpack for Spring and to provide feedback by adding a new issue at https://github.com/canonical/devpack-for-spring/issues.

02 July, 2025 09:53AM

hackergotchi for Deepin

Deepin

Next-Gen Guardian | deepin 25 Solid System: Unshakable Core Defense!

In digital office and development scenarios, system stability and security have become core user demands. Therefore, deepin 25 introduces the Solid System (deepin Immutable System) — built around three core technologies: "Read-Only Protection, Snapshot Management, and Reassuring Restore." It constructs a comprehensive stability protection system spanning from the kernel layer to the application layer, dedicated to providing users with "rock-solid" system assurance. This article delves into common operating system stability pain points and details how the deepin 25 Solid System leverages innovative technologies to fundamentally reinforce the reliability of system operations. I. Common OS Stability Issues & Root Causes ...Read more

02 July, 2025 05:45AM by xiaofei

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Live Linux kernel patching with progressive timestamped rollouts

The apt package manager is responsible for installing .deb packages on Ubuntu LTS (long-term support) and interim releases, including the .deb package for the Linux kernel. Updating the kernel package requires a system restart, leaving systems vulnerable between the moment the Linux kernel package is installed and when the machine is rebooted. In many cases, this exploit window is expanded by scheduled maintenance windows and delays associated with testing and validating security patches in staging environments.

Canonical Livepatch shrinks this exploit window by surgically modifying vulnerable kernel code in memory, and redirecting function calls to patched versions while the system continues operating. However, if the apt package manager has not also installed the security update from a newer .deb Linux kernel package, the in-memory security patches will be lost on reboot. This means that if the machine starts up in a vulnerable state, then Livepatch Client will have to reapply the Livepatch update. Ideally, system administrators will install security updates for the Linux kernel by upgrading the kernel .deb package, and rely on Canonical Livepatch service to secure the machine before the next reboot.

It is best practice to progressively roll out updates in test environments, before updating production environments. Until now, the only way to stagger Livepatch updates was to self-host a Livepatch Server, and control which machines received which Livepatch updates. Now it is even simpler to enable the Canonical Livepatch security patching automation with testing and validation in staging environments, before production. In internet connected environments, where Ubuntu instances can reach livepatch.canonical.com, Livepatch Client supports timestamp-based rollout configurations. Organizations can implement controlled and predictable update pipelines from staging to production environments, without the hassle of deploying a self-hosted Livepatch Server, and managing the distribution of Livepatch updates through Livepatch Server.

No Livepatch updates beyond this timestamp, please

The Livepatch cut-off date feature is enterprise-focused, and is not available to users of the free Ubuntu Pro token. Configuring Livepatch Client with a specific timestamp in the past forces an Ubuntu machine to remain in a known, deterministic state. This can be achieved with one command, using “2024-10-01T12:00:00Z” as a hypothetical timestamp:

$ canonical-livepatch config cutoff-date="2024-10-01T12:00:00Z"

Even in tightly regulated production environments, system administrators can now move from a reactive to a proactive patching posture. Time-based control enables straightforward and rigorous testing workflows.

Progressing from testing, to staging, to production

  1. In the development or testing environment, configure Livepatch without cut-off restrictions. This allows the latest patches to be applied immediately. If a Livepatch cut-off date has been set, setting it to a blank value will remove it:

    $ canonical-livepatch config cutoff-date=""

  2. The staging environment should mirror production as closely as possible. Set a cut-off date that is ahead of the date set in production. This allows updates that have been withheld from production to arrive in the staging environment.
  3. Once testing, development, and staging environments have received Livepatch updates, the updates can be promoted to production with a high degree of confidence. Match the cut-off date in the production environment with what has been applied in staging.

It is possible to identify which Livepatch updates have been applied by tracking the patched CVEs in the Livepatch Client status output:

$ canonical-livepatch status --verbose
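Putting this together, a graduated rollout might look like the following sketch, where the dates are placeholders that should follow your own release cadence and each command is run on the machines in the corresponding environment:

canonical-livepatch config cutoff-date=""                        # testing: always take the latest patches
canonical-livepatch config cutoff-date="2025-06-15T00:00:00Z"    # staging: pin to a recent, validated timestamp
canonical-livepatch config cutoff-date="2025-06-01T00:00:00Z"    # production: lag behind staging until promotion
canonical-livepatch status --verbose                             # confirm which CVEs are patched on each machine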

Conclusion

The timestamp based rollout capability introduced in Livepatch Client provides a predictable and controlled pipeline of updates, without the complexity of managing your own Livepatch Server. Using graduated cut-off dates across environments enables the Livepatch security patching automation solution to conform with most enterprise security update protocols.

This powerful and now extremely convenient feature is not included in the Ubuntu Pro free tier. Virtual machines launched on major public cloud providers such as AWS, Azure, Google, or Oracle using an Ubuntu Pro image will have access to the cut-off date feature in Livepatch Client. Take control over your system reliability and operational confidence by enabling Canonical Livepatch on your Ubuntu instances today.

Ready to security patch the Linux kernel without downtime?

Zero downtime patching is even better with zero surprises. Chat with experts at Canonical to determine how Livepatch can improve your security posture.

Contact Us

02 July, 2025 12:01AM

July 01, 2025

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2025/06

The 6th monthly Sparky project and donate report of 2025: – Linux kernel updated up to 6.15.2, 6.12.33-LTS, 6.6.93-LTS – added new and updated existing German translations of Sparky tools; thanks to Roy. Note that some applications in the Sparky repos have not been updated in the last week, including kernels, and will not be updated until next week. The SourceForge based package repos is also not…

Source

01 July, 2025 06:47PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Chiseled Ubuntu containers for OpenJRE 8, 17 and 21

Today we are announcing chiseled containers for OpenJRE 8, 17 and 21 (Open Java Runtime Environment), coming from the OpenJDK project. These images are highly optimized for size and security, containing only the dependencies that are strictly necessary. They are available for both AMD64 and ARM64 architectures and benefit from 12 years of security support.

You can download the images from the following links:

We also completed a set of benchmarks comparing Chiseled Ubuntu containers to similar JRE images, and you can find the links to the GitHub repos further down the page.

What are Chiseled JRE containers

Chiseled containers represent Canonical’s take on distroless container images. They are created using Chisel, an open source tool developed by Canonical that allows users to extract well-defined portions (aka slices) of Debian packages into a filesystem. Just like ordinary Debian packages, slices carry their own content and set of dependencies to other internal and external slices.

The Chiseled Ubuntu container for JRE packs the OpenJRE (Open Java Runtime Environment) from the OpenJDK project, a free and open source implementation of the Java Platform, Standard Edition (Java SE). In the context of JRE, Chisel allows the creation of super small images that can be used as a runtime, final-stage base image for compatible Java applications. Chiseled images not only remove the package manager, bash, or superfluous components of OpenJDK, but also all unused dependencies or portions of dependencies.

Improved size and developer experience

We have benchmarked the Chiseled Ubuntu JRE images against the relevant alternatives from Eclipse Adoptium (Temurin), Amazon Corretto and Azul Zulu. The results have highlighted that:

  • The Chiseled Ubuntu JRE image of OpenJDK 17 provides approximately a 51% reduction in the size of the compressed image compared to the Temurin runtime image, a 65% reduction compared to the Amazon Corretto runtime image, and a 30% reduction compared to the Azul Zulu runtime image.
  • The Chiseled Ubuntu JRE image of OpenJDK 8 provides a 52% reduction in the size of the compressed image compared to Temurin and is 1% smaller than the Amazon Corretto image. Azul Zulu does not provide a JRE image, so it was not evaluated.

In both instances we did not identify any statistically significant degradation of startup performance or throughput compared to the other images analyzed. The detailed benchmark results will be published in subsequent blogs.

As a result, Chiseled Ubuntu JRE images lower resource and bandwidth utilization, which in turn means significant cost savings when used at scale in a cloud environment. You can build and package your Java application simply by creating a new build stage from one of these base images.
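For instance, you can pull an image and compare local sizes. The repository and tag below are assumptions shown for illustration only; check the registry listings referenced on this page for the exact names:

docker pull ubuntu/jre:17-24.04_stable                 # tag name is an assumption; verify it on the registry
docker images | grep -Ei 'jre|temurin|corretto|zulu'   # compare on-disk sizes of the candidates side by side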

You can read more about Chisel in the product documentation and you can access the Github repositories from the following links to find an in-depth performance analysis:

Trusted provenance and strong security guarantees

Chiseled Ubuntu images inherit Ubuntu’s long-term support guarantees and are updated within the same release cycle, using the very same packages as other LTS components. They are fully supported by Canonical:

  • Up to 12-year security patching for Ubuntu Pro customers on all Ubuntu packages
  • Optional weekday or 24/7 customer support
  • 100% library and release cycle alignment with Ubuntu LTS

According to the 2023 Sysdig report, Java packages are the riskiest, representing over 60% of vulnerabilities exposed at runtime. With Java 8 and 17 still representing the dominant Java versions in large enterprises according to the latest JetBrains and New Relic surveys, Canonical’s LTS guarantees mean that developer teams can extend the life of their existing applications, without needing to trade off between costly upgrades and security risk exposure.

Extend support to your workload with Everything LTS

Canonical’s Everything LTS offers 12-year LTS for any open source Docker image, on any CNCF compliant Kubernetes distribution, which is ideal for organizations looking for extended support for their workloads. With Everything LTS, Canonical’s expert engineering team will build distroless Docker images to customer specifications (including upstream components not packaged in Ubuntu) and fix critical CVEs within 24 hours, supported on RHEL, Ubuntu, VMware or public cloud K8s for 12+ years.

You can read more about Chiseled Ubuntu and Everything LTS in our blog:

01 July, 2025 03:48PM

hackergotchi for ZEVENET

ZEVENET

Network Traffic Inspection in Application Delivery

When a company evaluates an Application Delivery Controller, one of the key questions that often comes up is: Does it allow for traffic inspection?

But this question is more complex than it seems. Traffic inspection is a critical capability in many Application Delivery Controllers (ADCs), but its meaning can vary depending on the context, the depth of analysis, and the expected functionalities.

In this article, we explain what traffic inspection actually means in an ADC, why it matters, and what capabilities SKUDONET offers in this area.

Not All Inspection Is the Same

When an ADC inspects traffic, it can do so at different layers of the OSI model and with different purposes. The most common types include:

Layer 4 (L4) Inspection

At this level, the ADC analyzes network connections (TCP/UDP). It examines basic headers such as IP addresses, ports, and protocol type. This enables fast and efficient load balancing without needing to parse the content of the request.

  • Advantages: Very high performance, minimal latency
  • Limitations: No visibility into the request content

Layer 7 (L7) Inspection

Here, the ADC analyzes the actual content of the request at the application layer (HTTP, HTTPS, etc.). It can inspect HTTP headers, paths, cookies, URL parameters, or even the payload.

  • Advantages: Enables smarter decisions based on real content
  • Limitations: Requires more processing power; may affect performance if not optimized

Layer | What It Analyzes | What It Enables | Pros | Cons
L4 | IP addresses, source/destination ports, transport protocols (TCP/UDP), connection states | Basic load balancing, connection persistence, firewall rules, IP rate limiting, early bot/malicious traffic filtering | Fast processing, low latency, efficient for bulk traffic, first-level security filtering | No visibility into application data, limited contextual awareness
L7 | HTTP/HTTPS headers, cookies, URL paths, query parameters, request methods (GET, POST…), full payload, user agents | Content-based routing, API rate limiting, WAF rules, bot detection, advanced access control | Granular visibility, intelligent traffic shaping, deep application-layer filtering | Higher CPU usage, deeper inspection may impact performance if not optimized

Why Is Traffic Inspection Useful?

Traffic inspection capabilities in an ADC enable logic-based decisions according to the content and behavior of the traffic:

  • Content-based routing (content switching): Direct traffic based on headers, paths, or cookies.
  • Filtering and control: Block specific patterns, handle exceptions, or restrict access based on custom criteria.
  • Security: Detect suspicious traffic, block dangerous requests, and protect against application-level threats like SQL injection or XSS.
  • Response optimization: Adapt behavior depending on the client type, browser, or geographic location.

Additionally, in environments where proxies or other intermediaries are used, ADCs can detect and mitigate header spoofing (e.g., X-Forwarded-For), which is crucial when tracing the actual origin of a request.

What Traffic Inspection Capabilities Does SKUDONET Offer?

SKUDONET ADC provides a flexible traffic inspection system with a wide range of capabilities to adapt to various environments:

  • Bot and malicious traffic filtering at Layer 4 for high-speed preemptive protection, enabling early blocking of suspicious connections before deeper inspection is needed.
  • Header, cookie, and HTTP parameter inspection for routing, filtering, or blocking decisions at Layer 7.
  • API rate limiting based on URL patterns, request methods, or IP behavior — essential for protecting exposed endpoints.
  • Header rewriting and modification based on defined rules.
  • Custom responses for specific conditions, such as blocking suspicious or malformed requests.
  • Advanced certificate management, including wildcard support and granular SNI-based configuration.
  • Access control logic based on origin, methods, paths, or geolocation.
  • Intuitive graphical interface to define rules easily — or direct config access for manual or automated setups.
[Image: SKUDONET Network Traffic Inspection]

These features enable deep control of HTTP/HTTPS traffic while combining speed and visibility — adaptable to both simple environments and complex architectures.

Traffic inspection in an ADC isn’t just about watching the data stream — it’s about understanding it and responding accordingly. The ability to inspect headers, analyze content, and take intelligent, real-time actions is what sets advanced solutions apart from limited ones.

SKUDONET Enterprise Edition enables efficient Layer 7 traffic inspection, offering detailed rules, deep visibility, and flexible configuration — all with a transparent, architecture-agnostic approach.

01 July, 2025 02:36PM by Nieves Álvarez

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Introducing Canonical builds of OpenJDK

Java has long been the most popular language for software development in large enterprises, with 90% of Fortune 500 companies using it for backend development, particularly in industries like finance, healthcare, and government. 

Java developers, more than most, are tasked with balancing the implementation of new features against the critical requirements of legacy application security, stability, and performance. The management of distinct Java versions, security updates, and deployment artifacts introduces considerable complexity.

For these reasons we have decided to double down on our investment in the toolchain by providing an even more comprehensive offering that both enterprises and community members can benefit from. Canonical’s OpenJDK support offering is centered around the following core tenets:

  • Industry leading security maintenance with Ubuntu Pro, providing security support to OpenJDK 8 until 2034 and at least 12 years on every other OpenJDK LTS release.
  • Timely access to new Java releases by including the latest OpenJDK release in the subsequent Ubuntu release. This also extends to the LTS releases.
  • Performance optimization for containers, combining the size reduction of Chiseled OpenJRE with novel technologies like CRaC (Coordinated Restore at Checkpoint).
  • Verified correctness, testing OpenJDK releases against the Technology Compatibility Kit (TCK) using the Eclipse AQAvit testing framework from Adoptium.
  • Broad architecture support including AMD64, ARM64, S390x, RISC-V and ppc64el.

Let’s explore each of these elements in greater detail.

Enhanced security assurance: long term security and stability

An Ubuntu Pro subscription extends a minimum 10-year security support guarantee for all OpenJDK LTS builds, thereby reducing the necessity for frequent application modernization efforts. This means that developers can simultaneously extend the life of their existing legacy applications and prioritize the development of features that directly benefit and enhance the user experience. 

This is especially crucial for legacy workloads running on Java 8, which according to a recent New Relic report still accounts for as much as 33% of production deployments. For workloads running on 24.04 LTS, Ubuntu Pro will extend security maintenance to at least 2034, 8 years longer than Red Hat and 4 years longer than Azul Zulu.

Supported OpenJDK LTS versions within Ubuntu LTS releases 

OpenJDK LTS Version | General Availability Date | Ubuntu LTS Availability | Support End Date (via Ubuntu Pro)
8 | 2014 | 18.04, 20.04, 22.04, 24.04 | At least 2034
11 | 2018 | 18.04, 20.04, 22.04, 24.04 | At least 2034
17 | 2021 | 18.04, 20.04, 22.04, 24.04 | At least 2034
21 | 2023 | 20.04, 22.04, 24.04 | At least 2034

Facilitating innovation: timely access to new Java releases

With Ubuntu, long term support does not come at the expense of rapid releases. We aim to enable teams’ experimentation efforts by making new versions of OpenJDK available in Ubuntu right after their release. 

Starting with Ubuntu 24.04 LTS we are planning to align the OpenJDK and Ubuntu releases cadences as follows:

  • New OpenJDK LTS releases land in subsequent Ubuntu LTS releases, ensuring stability for your long-term projects.
  • Non-LTS OpenJDK releases become available in subsequent Ubuntu interim releases, perfect for testing new language features, APIs, and performance improvements as soon as they drop.

This dual approach gives you the flexibility to innovate rapidly while maintaining enterprise-grade stability for critical production deployments.
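In practice, installing an LTS build straight from the Ubuntu archive is a one-liner (a minimal sketch on Ubuntu 24.04 LTS):

sudo apt update && sudo apt install -y openjdk-21-jdk   # OpenJDK 21 LTS from the Ubuntu archive
java -version                                           # confirm the runtime in use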

In addition, we are using innovative technologies like CRaC (Coordinated Restore at Checkpoint) to reduce containerized and traditional Java application startup and warmup times, by allowing a running JVM and application state to be checkpointed and later restored.

Deployment optimization: secure, minimal Chiseled JRE containers

Bloated container images slow down CI/CD pipelines and increase security risks. Ubuntu Chiseled OpenJRE containers offer a radically smaller footprint for the Java runtimes, cutting away all unnecessary packages and slices of dependencies. The smaller size does not compromise throughput, which falls in line with images from alternative OpenJDK distributions.

You can download the images from the following public registries and get additional long term security and support through the Ubuntu Pro subscription:

Chiseled JRE container statistics:

Feature | Chiseled JRE 8 | Chiseled JRE 17 | Chiseled JRE 21
Compressed Size | 37/38 MB (AMD64/ARM64) | 44/42 MB (AMD64/ARM64) | 50/51 MB (AMD64/ARM64)
Size vs. Temurin | ~52% smaller | ~51% smaller | ~56% smaller
Performance Impact | ≈0% diff. throughput/startup | ≈0% diff. throughput/startup | ≈0% diff. throughput/startup
Security Maintenance | Up to 12 years support via Ubuntu Pro | Up to 12 years support via Ubuntu Pro | Up to 12 years support via Ubuntu Pro

Over the coming weeks we will publish a set of detailed benchmarks comparing Chiseled Ubuntu OpenJRE containers to similar products of other major OpenJDK distributions.

Verified correctness and simplified compliance

When building enterprise applications, the last thing you need is wasting time debugging unexpected runtime behaviour. Since joining the Eclipse Foundation’s Adoptium Working Group in 2023 we have worked hard to make sure all our OpenJDK builds of versions 17 and 21 are verified for correctness so that all Ubuntu users can build their latest Java applications on a foundation they can trust.

“Canonical is a strong example of how our members contribute to and benefit from their involvement in the Adoptium Working Group,” said Mike Milinkovich, executive director of the Eclipse Foundation. “They are helping to drive innovation across the open source Java ecosystem and are using the Eclipse AQAvit testing framework to efficiently test and certify their builds with the Java TCK. I’m excited about what we can accomplish together as our collaboration continues to evolve.”

Our OpenJDK builds are rigorously tested against the Technology Compatibility Kit (TCK) using the AQAvit testing framework. This applies to the build for all the following architectures (on Ubuntu 22.04 LTS and 24.04 LTS):

  • AMD64
  • ARM64
  • s390x
  • Ppc64el
  • RISC-V

The same approach applies to cryptographic compliance requirements. Ubuntu Pro provides access to openjdk-11-fips (with FIPS 140-2 certified BouncyCastle). We’re also actively pursuing FIPS 140-3 certification for a dedicated OpenSSL-FIPS Java provider, simplifying compliance for developers in regulated industries and government departments.

Improved cloud native workload performance with GraalVM and CRaC

Java applications have strong runtime performance but suffer from slow startup time due to Java Virtual Machine (JVM) initialization and Just-In-Time (JIT) compilation processes. Over the past few years GraalVM and CRaC  (Coordinated Restore at Checkpoint) emerged as two different solutions to help developers create efficient cloud native applications.

We recognise the importance of these projects for the future of the Java ecosystem and we have decided to make them even easier to adopt and maintain on Ubuntu by packaging all necessary components and making them available either as debs or snaps.

GraalVM is a high-performance polyglot virtual machine that offers significant advantages for Java developers. It enables ahead-of-time (AOT) compilation of Java bytecode into native executables. This AOT compilation eliminates the need for JIT compilation at runtime, leading to substantially faster startup times and reduced memory footprint. We have created a snap to make it easier for developers to access the latest GraalVM features and build smaller, faster applications on 24.04 and future releases.

CRaC, on the other hand, allows a running JVM to be frozen, its state saved to disk, and then restored rapidly at a later point. By pre-warming the application and taking a checkpoint, subsequent startups can be significantly accelerated, often taking just milliseconds. We have packaged and added to the archive both CRaC OpenJDK builds and CRIU (Checkpoint/Restore In Userspace), delivering an effortless developer setup and a minimum of 10 years of security maintenance with Ubuntu Pro (starting from Ubuntu 26.04 LTS).
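To illustrate the CRaC workflow, here is a hedged sketch using the upstream CRaC flags; app.jar is a placeholder, a CRaC-enabled JDK and CRIU must be installed, and exact package or binary names on Ubuntu may differ:

java -XX:CRaCCheckpointTo=/tmp/app-checkpoint -jar app.jar &   # start the application and let it warm up
jcmd app.jar JDK.checkpoint                                    # snapshot the running JVM to disk
java -XX:CRaCRestoreFrom=/tmp/app-checkpoint                   # later: restore the pre-warmed state in milliseconds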

Learn more about Canonical builds of OpenJDK

01 July, 2025 11:00AM

hackergotchi for Deepin

Deepin

deepin 25 Launches: Customize Your AI Companion’s Skills!

Tired of complex system operations breaking your workflow? Frustrated by switching between apps just for a clarification or translation? Deepin 25 introduces the solution: UOS AI Assistant—your full-scenario intelligent companion. More than a simple chatbot, it's a deeply integrated productivity engine within the OS, designed to be your ultimate work, study, and creation assistant. Discover how intuitively it understands you!   UOS AI FollowAlong: Instant Access, Productivity Unleashed Free from dialog boxes, UOS AI FollowAlong works like a true companion. Simply select text to activate AI functions: clarifying, searching, summarizing, translating—all instantly available. AI Clarification: Decode jargon and complex concepts ...Read more

01 July, 2025 08:53AM by xiaofei

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Update Livepatch Client for the newest kernel module signing certificate

The kernel engineering team at Canonical has generated a new module signing certificate on May 16, 2025, and it is embedded in all Ubuntu kernels published after that date. Livepatch Client version 10.11.2 published on June 13, 2025 includes this new certificate. Livepatch Client 10.11.2 or greater is required to successfully Livepatch all kernels published by Canonical after July 2026.

What is Canonical Livepatch?

Canonical Livepatch is a security patching automation tool which supports rebootless security updates for the Linux kernel. Livepatch remediates high and critical common vulnerabilities and exposures (CVEs) with in-memory patches, until the next package upgrade and reboot window. System administrators rely on Livepatch to secure mission-critical Ubuntu servers where security is of paramount importance. 

Livepatch Client is an auto-updating snap

When software is published as a snap, it has to be classified by channel and revision. Beta, stable, edge, or long term support are possible channel classifications, but every software publisher has the ability to create their own channel names. Revision numbers correspond to the software as it was published in a channel at a specific point in time, and is decoupled from the version number of the software. If a software publisher reverted their software to a previous version in a channel, the revision number would increment by one, and their version number would decrement accordingly. Snaps will auto-upgrade to the latest revision available in their channel, by default.

The software publisher is in control of the snap package’s upgrade timelines, and when they publish a new version in snapcraft.io, the update is visible to machines everywhere in the world. This is ideal for high-risk software where security updates need to be rolled out in a time-sensitive manner, such as in web browsers and email clients.

The Livepatch snap package provides a critical security patching service, and is published in stable, candidate, beta, and edge channels for a variety of platforms. Livepatch Client version 10.11.2 arrived to the latest/beta channel on June 10, 2025, was promoted to latest/candidate on June 13, 2025, and reached latest/stable on June 16, 2025. Ubuntu Pro users who have enabled the Livepatch entitlement using Pro Client, with the `pro enable livepatch` command, will all be on the latest/stable channel.

Where or when might Livepatch Client not auto-update?

Special consideration needs to be made for environments where Livepatch Client auto-updates have been disabled, or are not possible without additional intervention.

Landscape introduced snap management capabilities in 2024, giving system administrators granular control over auto-updating snap packages. Livepatch Client will need to be manually upgraded in environments where automatic upgrades have been disabled, or in airgapped environments where snap packages have to be manually updated.

How to make the snap available, and progressively update each environment

In airgapped environments, Canonical customers will have deployed a Snap Store internally, and either the Snap Store or Landscape can control the upgrades in those environments. It is necessary to update the Livepatch Client snap in that internal Snap Store so that it is available across the entire estate, and to ensure that Landscape’s snap management rules for each environment in the estate are not blocking the Livepatch Client from updating to the latest version published in the self-hosted Snap Store. It is expected that the airgapped production environment tracks behind the airgapped staging and testing environments.
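Where a snap has to be moved into an offline network by hand – for example, to seed an internal Snap Store or to update a single isolated host – the snap and its assertions can be fetched on a connected machine and installed offline. A minimal sketch with standard snap commands (file names will differ):

# On an internet-connected machine: download the snap and its assertion file
snap download canonical-livepatch --channel=latest/stable

# On the airgapped host: acknowledge the assertion, then install the snap
sudo snap ack canonical-livepatch_*.assert
sudo snap install canonical-livepatch_*.snap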

Conclusion

Canonical Livepatch prioritizes both protection and operational continuity through its self-updating and confined client. By leveraging the snap packaging system’s automatic update capabilities, the Livepatch client runs on the latest stable version, with the latest security, bugfix, and feature updates. In environments where Livepatch Client updates must be applied manually due to organizational policy or network connectivity constraints typically found in airgapped environments, Livepatch Client must be updated to version 10.11.2 or later for security patching continuity beyond July 2026.

Further Reading

Ready to security patch the Linux kernel without downtime?

Zero downtime patching is even better with zero surprises, chat with experts at Canonical to determine how Livepatch can improve your security posture.

Contact Us

01 July, 2025 04:48AM

The Fridge: Ubuntu 24.10 (Oracular Oriole) reaches End of Life on 10th July 2025

Ubuntu announced its 24.10 (Oracular Oriole) release almost 9 months ago, on 10th October 2024 and its support period is now nearing its end. Ubuntu 24.10 will reach end of life on 10th July 2025.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 24.10.

The supported upgrade path from Ubuntu 24.10 is to Ubuntu 25.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/PluckyUpgrades
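On desktop systems, Software Updater will offer the upgrade; on servers, the release upgrader can be run directly once Ubuntu 24.10 is fully up to date. A typical sequence with standard Ubuntu tooling looks like this:

# Apply all pending updates first
sudo apt update && sudo apt full-upgrade

# Start the upgrade to Ubuntu 25.04
sudo do-release-upgrade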

Ubuntu 25.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Mon Jun 30 11:57:50 UTC 2025 by Utkarsh Gupta on behalf of the Ubuntu Release Team

01 July, 2025 04:17AM

hackergotchi for Qubes

Qubes

XSAs released on 2025-07-01

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

01 July, 2025 12:00AM

June 30, 2025

hackergotchi for Ubuntu developers

Ubuntu developers

Colin Watson: Free software activity in June 2025

My Debian contributions this month were all sponsored by Freexian. This was a very light month; I did a few things that were easy or that seemed urgent for the upcoming trixie release, but otherwise most of my energy went into Debusine. I’ll be giving a talk about that at DebConf in a couple of weeks; this is the first DebConf I’ll have managed to make it to in over a decade, so I’m pretty excited.

You can also support my work directly via Liberapay or GitHub Sponsors.

PuTTY

After reading a bunch of recent discourse about X11 and Wayland, I decided to try switching my laptop (a Framework 13 AMD running Debian trixie with GNOME) over to Wayland. I don’t remember why it was running X; I think I must have either inherited some configuration from my previous laptop (in which case it could have been due to anything up to ten years ago or so), or else I had some initial problem while setting up my new laptop and failed to make a note of it. Anyway, the switch was hardly noticeable, which was great.

One problem I did notice is that my preferred terminal emulator, pterm, crashed after the upgrade. I run a slightly-modified version from git to make some small terminal emulation changes that I really must either get upstream or work out how to live without one of these days, so it took me a while to notice that it only crashed when running from the packaged version, because the crash was in code that only runs when pterm has a set-id bit. I reported this upstream, they quickly fixed it, and I backported it to the Debian package.

groff

Upstream bug #67169 reported URLs being dropped from PDF output in some cases. I investigated the history both upstream and in Debian, identified the correct upstream patch to backport, and uploaded a fix.

libfido2

I upgraded libfido2 to 1.16.0 in experimental.

Python team

I upgraded pydantic-extra-types to a new upstream version, and fixed some resulting fallout in pendulum.

I updated python-typing-extensions in bookworm-backports, to help fix python3-tango: python3-pytango from bookworm-backports does not work (10.0.2-1~bpo12+1).

I upgraded twisted to a new upstream version in experimental.

I fixed or helped to fix a few release-critical bugs:

30 June, 2025 11:30PM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 898

Welcome to the Ubuntu Weekly Newsletter, Issue 898 for the week of June 22 – 28, 2025. The full version of this issue is available here.

In this issue we cover:

  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • LXD: Weekly news #400
  • Other Meeting Reports
  • Upcoming Meetings and Events
  • Event report: UbuCon EU + OpenSouthCode 2025 (Málaga, Spain)
  • Un brindis por los 19 años de Ubuntu-VE
  • LoCo Events
  • Ubuntu Server Gazette – Issue 5 – Things to keep safe: Your circle of friends and your time!
  • Ubuntu Project docs update: making sense of the contributor story
  • Fwupdmgr offers KEK CA updates from 2011 to 2023
  • Open Source Summit from The Linux Foundation June 23-25 (Denver, Colorado USA)
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Featured Audio and Video
  • Updates and Security for Ubuntu 22.04, 24.04, 24.10, and 25.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • Din Mušić – LXD
  • Cristovao Cordeiro (cjdc) – Rocks
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


30 June, 2025 11:07PM by bashing-om

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Announcing Incus 6.14

The Incus team is pleased to announce the release of Incus 6.14!

This is a lighter release with quite a few welcome bugfixes and performance improvements, wrapping up some of the work with the University of Texas students and adding a few smaller features.

It also fixes a couple of security issues affecting those using network ACLs on bridge networks using nftables and network isolation.

The highlights for this release are:

  • S3 upload of instance and volume backups
  • Customizable expiry on snapshot creation
  • Alternative default expiry for manually created snapshots
  • Live migration tweaks and progress reporting
  • Reporting of CPU address sizes in the resources API
  • Database logic moved to our code generator

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated; you can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

30 June, 2025 08:15PM

hackergotchi for VyOS

VyOS

VyOS Project June 2025 Update

Hello, Community!

This month's update looks small but there are quite a few big things happening. Expect a few release posts in the coming weeks! But apart from that, there are big ongoing developments inside the rolling release. First, we are ironing out the remaining issues in the VPP-based accelerated dataplane and we welcome everyone to test them.

In other areas, we are making steady progress on replacing the old configuration backend. Currently the focus is on the commit algorithm, which will make commits much faster and enable long-awaited features such as commit dry run (T7427). The other big thing is the operational mode command system rework, which will allow us to reintroduce operator-level users and improve operational command documentation. Read on for details!

30 June, 2025 05:23PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How to get a job at Canonical

If you’re interested in applying for a role at Canonical, it can help to have some insider guidance.

A lot has been written on social media about Canonical and our hiring processes. Some of it is even true.

I share responsibility for what we do, because I am a hiring lead at Canonical.

Hiring leads are drawn from disciplines across the organisation, each with oversight of one or more hiring pipelines. I’m a Director of Engineering (in Engineering Excellence, where I lead Canonical’s documentation practice). I’m currently involved in the hiring process for various different roles: technical authors, and engineers in Community, Developer Relations and web platform engineering.

I’ve been a hiring lead for several years, and have shepherded dozens of new team members through our hiring process, from their initial application to their contract. Being a hiring lead is a reasonably substantial part of my work.

Some context

Canonical employs about 1300 people around the world. Between 2021 and 2025 we more or less doubled in size.

Each year, we hire between three and four hundred people, and we receive around one million job applications. You can do the maths yourself, but don’t jump to hasty conclusions. (It doesn’t mean that someone who applies has a one in 3000 chance of getting the job, for example). Clearly though, competition for jobs at Canonical is high. For some of our roles it’s exceptionally high. 

If you’re inclined to see hiring as a kind of contest between employer and candidate – parties on opposite sides of the table, with opposing interests – you might judge that more competition is excellent for us, but not so great for the candidates. I think it’s not nearly as simple as that. 

Amongst all these applications we see: candidates who apply for wholly unsuitable roles; candidates who don’t apply for the job that would suit them best; strong candidates who put in weak applications; candidates who unwittingly undermine their own applications. And we know there are also people, who would be excellent candidates, who decide not to apply at all.

When this happens, it certainly hurts the candidate, but it’s bad for us too. We’d like to change some of those patterns – hence this article, which points out how they occur. We want candidates to avoid some pitfalls and find better success.

Applying for the right job

It’s very hard to get any job if you’re not applying for the right ones. One thing that definitely does not work (not just at Canonical, but anywhere) is spraying applications at the employer.

“Let us do it for you!”

There are online services that make multiple automated applications to many different roles on your behalf. All you have to do is provide your details, and they will do the rest, even cleverly adapting your information to suit the roles they apply for. 

Sadly, the one secret that these companies don’t want you to know is that this is a first-class way to get yourself permanently blocked – employers really do not want to be spammed with job applications. The services make applications for unsuitable roles, and put in low-quality applications that would never be considered appropriate anyway.

Don’t use these services. They are not on your side and they do not help you.

What it looks like from our side

When a candidate makes multiple applications, it tells a story. 

The story can make good sense. If someone applies for both a Developer Relations and a Community position, the skills and qualities demanded in those roles have a lot of overlap. Similarly, a candidate might apply for a Senior Engineer and a Manager role related to a particular product: also perfectly understandable. Those applications show where the candidate’s interest lies, and there is no problem in putting in applications for more than one role in that way.

On the other hand, someone who applies for five or seven completely unrelated positions is not showing focus. Inevitably, each one of those applications is weak, but one glance at the list of applications as a whole also shows that the candidate doesn’t really understand what the jobs are about, and doesn’t have a strong idea about what they have to contribute. 

“I haven’t thought that deeply about what any of these positions entail – I’m just hoping for a job” is not a good signal to send an employer.

Do your research

Don’t start by making an application for the first role that seems remotely plausible. We have hundreds of open roles, so it’s likely that the first one you’ll find is not the best fit.

If you’re an experienced software professional, you can be expected to know how the industry works. You should be able to read job descriptions and understand how what they ask for matches what you can offer. You already have an advantage of understanding in that case: use it to set yourself one step ahead of the candidates who don’t.

Doing research doesn’t mean taking a cursory look at requirements and ticking them off when a word matches: Python: yes, Agile: yes, Kubernetes: yes. It means understanding what is actually needed, and articulating how you will be able to contribute – which is one of the things we look for. 

When looking at job descriptions, don’t ask yourself “Do I fit this?” as if you were an interchangeable part in a machine. Your question should be: “How could I make a real difference here?”

Opportunities for early-career candidates

The candidates with less industry experience are often the ones who succumb to the temptation of firing off multiple indiscriminate shots in the hope that one hits a target.

One thing you can do is to look specifically for graduate and associate roles, such as Graduate Software Engineer. Here, you don’t need to worry that you will be competing against candidates with many more years of experience than you have, and you can concentrate on what you do have to offer. 

We also have general roles that are open at all levels, as in our advertisement for Python Engineer. We’re not looking for a Python engineer – we’re looking for many Python engineers, for multiple positions, at all levels of seniority (which is exactly what it says in the advertisement). 

“All levels” genuinely means all the way from graduate-level to industry-leading experts. We aim to build rational, balanced software teams that give people opportunities to progress – so we need to hire engineers at all those levels.

What we consider outstanding in a junior engineer is not the same as outstanding for a senior. They’ll be assessed appropriately. A senior is expected to demonstrate substantial industry achievement. That can’t be expected of someone at the start of their career, so we’ll look for other demonstrations of ability, that make them stand out amongst their peers.

We’ll help too

One thing you should be aware of: hiring leads at Canonical talk to each other regularly, and are on the look-out for candidates who might be good for other roles. If an intriguing candidate comes into one of my hiring pipelines, but doesn’t seem quite right for the role they applied for, I will circulate the application to colleagues in charge of more suitable ones. 

Many of the candidates I have hired didn’t apply for the job they eventually got. Another hiring lead recognised their potential, and asked me to have a look at them for one of my roles. 

Needless to say, this happens only with candidates who have really put themselves into an application in the first place – not the ones who submitted multiple low-effort applications.

When we recognise that we have found someone who is going to be excellent, we can spend our energies looking for the role to suit them, rather than the other way round. (In really exceptional cases, we have even found a candidate and then created a role for them.)

Being outstanding

Usually with a little effort you can show whether you’re a plausible fit for a role, but it’s still only half of what you need to do. For each open position, we have many candidates who fit very well. They will all earn consideration. However, what we are looking for is the ones who really stand out. You need to demonstrate how you stand out.

It’s true that some things that make a candidate stand out are beyond your control – you can’t help it if another candidate is the leading industry contributor to the open source technology the job focuses on, and you are not. But you can take care of the others.

Think about values such as initiative, leadership, commitment, motivation, courage, ambition, technical excellence, persistence, community, and responsibility.

Which do you think are most important for the role you’re interested in? What are the job descriptions and interviewers talking about? Get into the habit of thinking about yourself through the lens of outstanding qualities, and ask yourself whether people see you that way, and why.

Many candidates could do a much better job of showing how they stand out. If you can find a way to do that, you have an immediate advantage.

Being specific

You can only stand out if you say specific, concrete things. 

If you read in the job description that you need to have familiarity with relational databases such as Postgres and MySQL, and you note in your application that you are “familiar with relational databases such as Postgres and MySQL”, that is fairly useless. It is completely generic. 

Tell us what you did with Postgres (or MySQL): “I decided to use PostGIS, and migrated the cluster from the cloud to our own infrastructure” shows us something meaningful, and interesting, about you and databases.

Our questions

The questions we ask in the hiring process are aimed both at establishing how well you fit, and how well you stand out. It’s relatively straightforward for us to determine how well a candidate fits, but what makes a candidate stand out can be harder to see – especially if the candidate themselves has effaced what they could have shown clearly. 

At every stage in our hiring process, we ask multiple questions to help you show how you excel in the ways we are looking for. Some candidates seem to regard them with suspicion: each one a trap being set, to catch them out. They answer them as though they were knocking a ball back across a net. Don’t do that. Those questions are there for you. They show you what we want. They are handholds, not traps. We ask many, so that if one of them is a miss for you, you still have other opportunities to demonstrate excellence.

Look for opportunities to use our questions as your springboard to show how you stand out.

Showing and not saying

The important thing in demonstrating how you stand out is exactly that: showing.

All too often candidates rely simply on saying (I am committed; I am passionate; I am expert; I am a fast learner; I take initiative; I can drive projects to their conclusion). That doesn’t help them, or us. Anyone can say those things (and they often do, whether they actually have those qualities or not).

What we want to know is how you can demonstrate the skills, values and qualities we’re looking for. 

What achievements, results, awards, or efforts can you point to that show them, in your career, studies or life? How can you tell your stories in a way that someone else understands them, and connects them to what they are looking for? What specific examples and instances tell your story for you? 

In an interview, would you know what to talk about if you were asked about them?

A good rule of thumb is that if what you want someone to know is a fact (having been a Python engineer for six years) then saying it is fine. But if it’s about a claim of value (being a Python web technology expert) you need to be able to point to something that actually demonstrates it.

Be prepared

Thinking about this when you’re faced with a question in an interview is much too late. Once again, don’t come passively to the application process, expecting interviewers to discover what they need from you. You have a part to play in it too, that starts long before you apply.

Think about what it is you want to show. Think forwards, to what interviewers will want to know and to how you can show it. 

What you are preparing for is your performance in your interview. That’s not performance as in a stage show that impresses an audience with flashy tricks. Nobody wants to hear slick, well-rehearsed speeches in an interview. But, athletes and musicians who understand what needs to be delivered and know they’ll get one chance to do it also rehearse. They think and work through the coming situation in advance, and that is what you must do too.

I’ve heard people advise “just be yourself in the interview”. That is atrocious advice. Don’t just be yourself. You are not a sample of cloth being submitted for inspection by an expert, you are a person taking part in determining how the future is going to be. Be the person you are going to be if you get the job. 

Preparedness is one of the qualities you need to be successful at work: demonstrate it in your interviews, by being prepared.

The written interview

We help you prepare for your interviews, through our written interview. (Ironically Canonical’s written interview is an aspect of the hiring process that attracts the most ire from some quarters.)

I have written more about the written interview previously, but in brief, our written interview helps candidates prepare for successive rounds by showing clearly what we care about, and how we hope a candidate will demonstrate excellence.

It is particularly valuable for less experienced candidates, but all candidates benefit from being prompted to think in ways that help them articulate the things that later interviewers will be looking for. An interview in written form gives candidates a chance to show how they stand out, with time to think about it, without the pressure of having to do it on the spot.

One of the effects of the written interview is that it promotes applicants from under-represented backgrounds. Often, someone’s excellence is not immediately visible, especially when that person comes from outside the industry. The written interview is a more open invitation to demonstrate qualities of excellence that might only be hinted at within the tight constraints of an application form or CV.

Success in the written interview

People are entitled to their opinions about our written interview. If someone chooses not to do it, for whatever reason, that is fair enough. 

What is harder to understand is when a candidate submits a written interview and clearly resents having done so. It’s very easy to pick this up from the tone and the way questions are answered; it never helps, and it would have saved a lot of everyone’s time not to do it at all. 

If you are going to complete a written interview, approach it constructively. That doesn’t mean being uncritical or passive. It’s not an information-gathering exercise, but a series of prompts for you to set out yourself and the qualities you can demonstrate.

Treat each question as a pointer to what we want, and each answer as an opportunity to show value you could deliver. You should ask yourself why certain questions are being asked: What does this question reveal about Canonical’s priorities? Why do they care about this? What is the context? 

Maybe you can see something a question should have asked but didn’t. Maybe you have something to say that isn’t precisely what’s being asked. Being a critical, active participant in the written interview means finding a way to connect those with what it’s asking. 

You might feel that a question actually gets something wrong. Some candidates are able to articulate their disagreement, in an effective and good-humoured way, while still being able to work with it and delivering what’s required.

If you are able to do that too you will have demonstrated – without even mentioning it – one of the most valuable skills a person can ever bring to work.

Academic excellence

Of all the questions asked in our hiring process, the ones that raise the most eyebrows are the ones about your education. “Why does Canonical care about my high-school results from 30 years ago? What possible relevance does that have to my skills now?”

We don’t hire skills. We hire people, with personal qualities. They are a whole package. We ask about academic attainment on the basis that it – all the way back to school days, even for a candidate later in life – is a strong indicator of many personal qualities and abilities, including performance at work.

The way we approach education allows us to be more flexible and nuanced about it. For example, we don’t insist on degrees in particular subjects. In fact, we don’t require a degree at all. We can look into more, and different, aspects of a candidate’s history. 

Your degree

Nearly everyone we hire does in fact have a degree (not just a degree, a good degree). It is rare for someone not to have one. In that case, they must still be able to show outstanding academic results from their high school studies as evidence of excellence, and we will look further to find a compelling story for the path they took. But, because we look at the whole picture, and because we get more information from candidates, we can and occasionally do hire someone without a degree.

How did you stand out at school?

We hire people from all over the world, from completely different backgrounds, who have negotiated utterly different education systems. 

It’s not fair to compare a candidate from a background of social and economic privilege, who graduated from a top technical university in a wealthy country, with a candidate from the other side of the world, who lacked those privileges and whose opportunities were limited by the circumstances of their birth and their society. 

But, if you believe that talent, excellence and determination to succeed can be found everywhere in the world and in people of every background, it is fair to ask: “Amongst your peers, at school amongst people from your background, who shared similar circumstances and opportunities – how did you stand out?”

Demonstrating excellence

We look for demonstration of academic excellence as part of our hiring process. It matters for senior roles as well as for entry-level positions, and for candidates of all ages.

If you are interested in working at Canonical you need to answer the questions we ask. If you’re going to be successful you need to answer them well: sincerely, strongly and clearly. 

Someone who can show outstanding results has an advantage in the hiring process. Someone whose academic story is one of weak ambition and poor achievement is not going to find a role at Canonical. 

Nothing is being hidden here. The questions – like them or not – transparently show what we seek.

As in all other aspects of your application, we are looking for excellence. You need to be able to demonstrate it. We ask further about it at different stages, and it quickly becomes apparent if someone’s claims don’t add up.

People are entitled to have opinions about this. It’s reasonable for them to ask questions about it. Many do, in their interviews, and we are happy to have a conversation about it. 

If someone decides they don’t want to discuss their education or answer our questions, for whatever reason, that is fair enough. There are plenty of other companies offering jobs, and we have plenty of other candidates. But, it would be a shame if an otherwise strong candidate withdrew an application on the basis of unfounded fears about the purpose of those questions.

AI and LLMs

We don’t employ LLMs at Canonical, we employ human beings. We don’t want LLMs to apply for jobs, and we ask candidates not to use them in their applications.

Some candidates do it anyway – and then it’s usually obvious when they do, because the patterns of AI are so recognisable. 

It’s good to quantify claims, but we’ve received CVs saying things like “Streamlined data retrieval and analysis across multiple platforms by 40%” or “Improved development process efficiency by 17%”. This is just gibberish with numbers in it; what it tells us is that the candidate has not even thought about what they expect other people to read. 

The written interview is a candidate’s opportunity to show who they are, but AI turns writing into a kind of characterless blended soup, with all the personality washed out of it. It makes for grim reading: generic claims without any ring of truth.

The only thing worse is when someone has asked an LLM to inject “personality” into what they write. It’s instantly obvious, and a miserable experience for the reviewer.

Don’t let AI erase your personality, or make you look false, or silly. Candidates turn to AI to help them, but the result is not what they hope. There is no way to stand out as excellent when you have relied on an LLM to present you in the application process.

We don’t use AI

Job applicants who fear that their application will be evaluated by AI sometimes resort to keyword-stuffing in the hope they will pass that hurdle and get to encounter a human. 

We don’t use AI. Hiring needs to be fair, and the risk is that LLMs could recapitulate and reinforce already-hidden biases in hiring. An LLM could never for example have the context or motivation of a human hiring lead, excited by a candidate from an unusual background, eager to see if a suitable role could be found for them.

Perhaps you have read the advice that you need to make sure your application and CV mention all the keywords in the job advertisement. Ignore that advice. Your application will not be read by a pattern-matching algorithm. It will go into the hands of an intelligent human being, who has domain knowledge, who cares, and makes human judgements.

Be a human

Apart from some very simple automation, you and your application are handled entirely by humans. Be a human. Write for humans. They will thank you in turn for being a human being. (They will also make human mistakes sometimes.)


I want to help you to help us to help you

I hope this article helps someone.

Just like my hiring lead colleagues, I know perfectly well that a job application doesn’t give us a wholly reliable picture of the candidate, and the candidate we encounter doesn’t always represent the person who actually joins us in the company. Both applying for jobs and hiring people for them are difficult sciences.

Still, one of the greatest frustrations of my role is a candidate whom I suspect, or even know, is much stronger than their performance in the hiring process suggests. 

In those cases I would love to probe further and discover whether I’m right, but I deal with my fair share of our one million applications each year, and it’s not possible for me to do that every single time. I have other candidates to take care of too, and the ones who are excellent and have shown it effectively in their applications will get my attention first.

I hope this article reaches the ones who are excellent, but need some help in understanding how to demonstrate it. If you believe you have excellence to show, and can see a role at Canonical in which you could be an outstanding contributor – apply for it.

My hiring lead colleagues and I need you to help us see how you stand out. Doing that will be your surest route to success in a job application at Canonical, and it’s how we are going to recruit the outstanding new colleagues we seek.

30 June, 2025 11:57AM

hackergotchi for Deepin

Deepin

deepin 25 DDE 7.0: Reimagined Interface, Unmatched Fluency

The deepin 25 DDE (deepin Desktop Environment) welcomes a comprehensive evolution. This upgrade doesn’t just bring visual and interactive personality; it’s dedicated to refining every detail, ensuring each operation becomes a delightful experience.

Aesthetic Overhaul: The Pursuit of Perfection in Details

The visual upgrade of the new DDE (deepin Desktop Environment) isn’t a superficial refresh. Starting from the user experience, it truly integrates aesthetics with efficiency. In its visual system, DDE 7.0 unifies the design language of icons and UI. It resolves the inconsistency of old icon styles and visual elements, establishing a highly cohesive design logic – from color tones and lines to ...Read more

30 June, 2025 08:13AM by sun, ruonan

To Every Member of the deepin Community: Growing Together, Grateful for Your Partnership

With the official release of deepin 25, we reflect on how every version upgrade stems from the collaboration of our global community. Whether through code contributions, translations, forum moderation, or software maintenance – your involvement drives deepin forward. We extend our deepest gratitude to all contributors and users!

To Our Code Contributors: The Innovators

Special thanks to these community members for their valuable code submissions to deepin and related projects: @lhdjply @dyang0116 @Cherrling @wojiaohanliyang @AaronDot @newearth-ss @ice909 @WangJia-UR @Woomeeme @hillwoodroc @amjac27 @kyrie-z @Amannix @applyforprof @insight-miss @alphagocc @Yurii-huang @leoliu-oc @xyr218 @Yingqiao-Kong @SiamSami @xiaolong1305 @lu-xianseng @bocchi810 @silver-leaf @zhousc11 @StrangeZuo @yyc12345 @huxd1532 @ticat123 @slark-yuxj ...Read more

30 June, 2025 03:50AM by sun, ruonan

June 27, 2025

hackergotchi for Univention Corporate Server

Univention Corporate Server

Easier integration of Nubus into existing environments: Ad Hoc Provisioning and Directory Importer now available

Connecting new software to existing environments or migrating to a new IAM is usually a longer-term project. Nubus now offers new, lightweight options for integration with existing identity providers and directory services.

The initial situation: Existing identities & new projects

Organizations rarely start IT projects from scratch (“greenfield”) but usually have existing systems for managing and storing identities and for authentication. When new software is introduced, it is connected to these systems to avoid having to manage user accounts in yet another place and to enable users to log in via Single Sign-on or their familiar credentials.
Univention’s IAM Nubus is typically introduced in these environments for one of two reasons:

  1. As an identity broker for the introduction of new, modular software solutions: Nubus standardizes how user data is stored and how users authenticate against the applications and technical modules of new software. This relieves the often less flexible, existing identity providers. More information on this scenario can be found in our IAM whitepaper.
  2. To establish a leading identity store as an alternative to a proprietary, closed legacy system – for example, as a strategic replacement for a directory service based on Microsoft Active Directory.

In both cases, Nubus is connected to the existing identity management systems and authentication mechanisms. Even if the goal is a complete replacement of the legacy system, the migration must be carried out step by step. Therefore, it is usually necessary to ensure the synchronization of identity information between the legacy system and Nubus over a longer period.

[Figure: Nubus IAM]

The technical basis: Single Sign-on and “Ad Hoc Provisioning”

For users to easily access Nubus itself and the applications connected to Nubus, Single Sign-on with the legacy system is ideal. At the same time, this is a simple configuration task, since only Nubus has to be configured as a so-called “service provider” with the legacy system. No special permissions are required in the legacy system – it is sufficient to configure Nubus and the identity provider of the legacy system (e.g. Active Directory Federation Services) based on the OpenID Connect or SAML protocols.

New in Nubus is the option for “Ad Hoc Provisioning”, where user accounts are created or updated “ad hoc” directly upon successful Single Sign-on in Nubus. The information provided by the legacy system is used to populate a complete user account in Nubus. This allows users to immediately work with all applications connected to Nubus. Manual creation by an administrator or setting up a connector between the application and the IAM is not necessary.

The user accounts created this way can be enriched with additional information within Nubus. One use case could be assigning email addresses to users, enabling them to use a mail stack connected to Nubus.

[Figure: Nubus IAM Ad Hoc Provisioning]

Single Sign-on with Ad Hoc Provisioning is ideal for fast integration in environments with small to medium user numbers or in pilot and preliminary projects. This approach stands out due to quick configuration and low requirements for the existing environment. However, this process also has clear limitations that need to be considered. The most important are:

  • Only users who have logged in via the configured Single Sign-on in the past are known to Nubus. This becomes problematic especially when collaborative applications are used via Nubus and team members are missing because they have never logged in.
  • Data transfer based on the Single Sign-on protocols is limited – both in terms of timing (only at login) and the amount of user information. Updates to user information in Nubus only occur upon user login, so user accounts in Nubus may be outdated and not automatically locked or deleted. Users deleted in the leading system no longer log in, meaning the “user deleted” event is not transmitted to Nubus.
  • Only information that can be efficiently retrieved during SSO is transferred. The most important attributes, such as user ID and display name, are always included, but additional data such as email aliases or account expiration dates is often unavailable or not standardized in the protocols. The current Ad Hoc Provisioning implementation also omits group membership information, because transferring it at this point would be inefficient.

A detailed description of how to set up Ad Hoc Provisioning can be found in the manual for the Keycloak App.

Full user lifecycle with the Nubus Directory Importer

In existing IT environments, directory services such as Active Directory are often used to store identities, which can be queried via LDAP. The “Nubus Directory Importer” can be used here to securely and efficiently transfer user and group information from the directory service to Nubus.

The Nubus Directory Importer communicates between the LDAP interface of the legacy system and the UDM REST API of Nubus. It transfers user and group information and automatically detects whether records need to be created, updated, or deleted in Nubus. This enables a complete user lifecycle.

[Figure: Nubus IAM Directory Importer]

The selection of information to be synchronized can be multi-level: The legacy system’s directory service can restrict which parts of the user and group data the Nubus Directory Importer is allowed to read via its service account. The “owner” of the data – the directory service – thus retains full control. The Nubus Directory Importer then determines, via a configurable mapping, which information is transferred to the UDM REST API of Nubus.
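As an illustration of that first level of control, the service account can be granted read access only to the containers and attributes that are meant to be synchronized. A scoped LDAP query against such a directory might look like this (host names, DNs, and attribute lists are examples, not the Importer's actual configuration):

# Read only staff user objects, and only the attributes needed for provisioning
ldapsearch -H ldaps://dc1.example.org \
    -D "CN=nubus-importer,OU=ServiceAccounts,DC=example,DC=org" -W \
    -b "OU=Staff,DC=example,DC=org" \
    "(objectClass=user)" sAMAccountName displayName mail memberOf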

Thanks to its deployment as a Docker container, the operator is free to choose where in the network structure the data is processed by the Nubus Directory Importer. This is particularly beneficial when Nubus runs in a different network segment or is only accessible via the internet: the Nubus Directory Importer is operated where sensitive data can be handled – i.e. “close” to the legacy system. The Nubus REST API is then securely addressed via HTTPS, regardless of where Nubus is running.

The combination of Single Sign-on and Nubus Directory Importer results in convenient and secure integration for both users and operators. Users have easy access to Nubus and the connected applications and find their familiar structures such as groups and other users. Operators can be confident that the information in Nubus is up to date and only relevant data is transferred.

[Figure: Nubus IAM user lifecycle]

The setup of the Nubus Directory Importer is described in the Nubus Operations Manual for Kubernetes. It can be used both on Kubernetes and on any Docker Compose-capable operating system and is compatible with Nubus in a UCS environment as well as with Nubus for Kubernetes.

Step-by-step deployment: start easily and keep all options open

The implementations of Ad Hoc Provisioning and the Nubus Directory Importer are coordinated and can build on each other. For a quick project start, it is advisable to begin with Single Sign-on and activated Ad Hoc Provisioning to quickly provide users access to the required IT services. This enables fast and user-friendly integration, even for preliminary projects and evaluation phases.

If simple user account synchronization is no longer sufficient, the Nubus Directory Importer can be activated in a later project phase, in addition to Single Sign-on. This ensures complete and continuous maintenance of user and group information. The user experience improves significantly. With full synchronization, Nubus’s directory service component also becomes a potential alternative to the legacy system.

And what about the Active Directory Connection?

In addition to the Nubus Directory Importer, the Active Directory Connection has long been available to our users. This can also be used to synchronize user accounts and groups between Microsoft Active Directory and Nubus. The Active Directory Connection offers significantly more functionality than the Directory Importer: it can also write information back to Active Directory and thus supports bidirectional synchronization, and it transfers password hashes between Active Directory and Nubus. However, the Active Directory Connection is currently only available for UCS. Its functionality is designed for scenarios where Nubus is the leading system, and it therefore requires significantly more access rights to Active Directory. In projects where Nubus is intended to read from an existing directory service, the Active Directory Connection is too invasive – and if the source system is not Active Directory, it cannot be used at all. In such cases, the Directory Importer plays to its strengths.

Conclusion: Identity integration with new flexibility

With Ad Hoc Provisioning and the Nubus Directory Importer, Nubus now offers two complementary implementations that significantly expand its integration and usage possibilities. Both are available for Nubus in UCS and Nubus for Kubernetes.

Whether for integrating modular software offerings like openDesk, using Nubus as part of a SaaS solution, or building alternatives to existing directory services – in all cases, these functions provide significant simplification for managing user identities and access rights across applications.

Cooperation with Zentrum für Digitale Souveränität

The implementations for Ad Hoc Provisioning and the Nubus Directory Importer were created in cooperation with and supported by the Zentrum für Digitale Souveränität (ZenDiS) for integration into openDesk. The options described here therefore also apply to the connection of openDesk to existing systems.

The post Easier integration of Nubus into existing environments: Ad Hoc Provisioning and Directory Importer now available appeared first on Univention.

27 June, 2025 11:27AM by Ingo Steuwer

hackergotchi for Purism PureOS

Purism PureOS

The 2025 Most Secure Phone in The World Reviews Are In: Efani, Analytics Insight, Navi, and Cashify

Purism’s Librem 5 and Liberty Phones Named the Most Secure Smartphones in the World

Top cybersecurity and tech publications agree: Purism leads the industry in mobile security for 2025.

The post The 2025 Most Secure Phone in The World Reviews Are In: Efani, Analytics Insight, Navi, and Cashify appeared first on Purism.

27 June, 2025 09:16AM by Purism

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How is Livepatch safeguarded against bad actors?

Canonical Livepatch is a security patching automation tool which supports reboot-less security updates for the Linux kernel, and has been architected to balance security with operational convenience. Livepatch remediates high and critical common vulnerabilities and exposures (CVEs) with in-memory patches, until the next package upgrade and reboot window. System administrators rely on Livepatch to secure mission-critical Ubuntu servers where security is of paramount importance.

Since the Linux kernel is an integral component of a running system, a fault would bring the entire machine to a halt. Two complementary security implementations safeguard against malicious code being inserted via Canonical’s live kernel patching functionality:

  1. Secure Boot ensures you’re running a trusted kernel
  2. Module signature verification ensures only trusted code is loaded into the kernel at runtime

Secure Boot ensures the trustworthiness of binaries by validating signatures: they must be signed by a trusted source. It protects the Ubuntu machine by preventing user-space programs from installing untrusted bootloaders and binaries. Secure Boot validation results in a hard requirement for module signature verification in order to insert code at runtime.
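Whether a given Ubuntu machine is enforcing this validation can be checked with standard tooling (output varies by firmware and platform):

# Reports "SecureBoot enabled" when the firmware validates boot binaries
mokutil --sb-state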

Livepatching the Linux kernel securely

There are multiple layers of protection ensuring Livepatch runs safely:

Firstly, the Livepatch Client is packaged and distributed as a self-updating snap application. Snap packages are tamper-proof, GPG-signed, compressed, and read-only filesystems. The self-updating functionality is clever enough to roll back to the previous version, if the upgrade fails. Snaps run in a sandboxed environment, and system access is denied by default. The Livepatch snap application is strictly confined, and has granular access only to the areas of the system that are essential for its function, through pre-defined snap interfaces. 
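The confinement of the client can be inspected on any machine running it, for example by listing which interfaces the snap is actually connected to (standard snapd tooling; the exact set of interfaces may differ between releases):

# List the interfaces the Livepatch snap is plugged into
snap connections canonical-livepatch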

Secondly, Canonical has implemented a certificate-based trust model to ensure Livepatch updates have been published by a trusted source, and not a third party with nefarious intent.

Certificate-based trust model for runtime code insertion

Livepatch implements a certificate-based trust chain wherein all patches must be cryptographically signed by Canonical. Certificates are embedded in all Linux kernels built by Canonical, and Livepatch updates are verified against these embedded certificates before being applied at runtime. Additionally, CA certificates are stored in bootloader packages to validate kernel signatures during the Secure Boot process, but this is a separate validation system from Livepatch module verification.

In order for this system to work over time, two certificates require periodic renewal. Client authentication certificates must be updated to successfully access content from Canonical’s servers, and the certificate in Livepatch Client must match the module signing certificates embedded in the kernels. Launchpad plays a crucial role in the development, packaging, and maintenance of Ubuntu. Launchpad’s build farm compiles source code into .deb packages, and hosts the CI/CD processes around maintaining a valid certificate for Livepatch.

The Livepatch engineering team and kernel engineering team collaborate with each other to ensure the kernels and Livepatch Client are using the appropriate certificate, and collaborate with the Launchpad team to ensure the builds have been signed appropriately. The Kernel Engineers at Canonical package the updates distributed by the Livepatch Client. The same machinery that is used for testing and validating the official kernel builds is repurposed for testing and validating every Livepatch update. Every Livepatch update is distributed as a signed kernel module, and the kernel validates module signatures against embedded certificates before applying the patch.

The public and private certificate pair must match to ensure the kernel can continue receiving Livepatch updates. Canonical signs every kernel with a private key, and the corresponding public certificate is embedded in the kernel at build time. All kernel modules, including the patches distributed by Livepatch, are signed with the appropriate private key. When Livepatch applies updates, both the Livepatch Client and the kernel validate signatures using the embedded public certificate. A mismatch between the signing key and the certificate embedded in the kernel for module signature validation will prevent Livepatch modules from being applied: invalid Livepatch updates are simply rejected by the kernel during runtime signature verification.
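The trust anchors involved can be examined on a running system, for example by listing the kernel’s built-in trusted keyring or checking the signer recorded in a module (illustrative commands; key names and the module chosen are arbitrary, and keyctl requires the keyutils package):

# List the certificates the kernel trusts for module signature verification
sudo keyctl list %:.builtin_trusted_keys

# Show the signer and signing key recorded in an installed kernel module
modinfo btrfs | grep -iE 'signer|sig_key'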

Conclusion

The chain of trust established through Secure Boot, which ultimately requires signed kernel modules, ensures bad actors cannot use Livepatch as a vector for attack. The certificate expiry and renewal cycle maintains the integrity of the trust chain and ensures continued authorization to receive patches. For critical and high kernel vulnerabilities, organizations of all sizes and personal users alike turn to Livepatch to shrink the exploit window of their Ubuntu instances after a vulnerability is reported.

Ready to security patch the Linux kernel without downtime?

Zero downtime patching is even better with zero surprises, chat with experts at Canonical to determine how Livepatch can improve your security posture.

Contact Us

27 June, 2025 12:01AM

June 26, 2025

Ubuntu Blog: Accelerating data science with Apache Spark and GPUs

Apache Spark is well known for distributing computation across multiple nodes by splitting data into partitions, with CPU cores traditionally carrying out the processing within each partition. 

What’s less widely known is that it is possible to accelerate Spark with GPUs. Harnessing this power in the right situation brings immense advantages: it reduces the cost of infrastructure and the number of servers needed, speeds up query completion times to deliver results up to 7 times faster than traditional CPU-based computing, and does it all in the background without having to alter any existing Spark application code. We’re excited to share that our team at Canonical has enabled GPU support for Spark jobs using the NVIDIA RAPIDS Accelerator – a feature we’ve been developing to address real performance bottlenecks in large-scale data processing.

This blog will explain what advantages Spark can deliver on GPUs, how it delivers them, when GPUs might not be the right option, and guide you through how to launch Spark jobs with GPUs.

Why data scientists should care about Spark and GPUs

Running Apache Spark on GPUs is a notable opportunity to accelerate big data analytics and processing workloads by taking advantage of the specific strengths of GPUs. 

Unlike traditional CPUs, which typically have a small number of cores designed for sequential processing, GPUs are made up of thousands of smaller, power-efficient cores designed to execute thousands of parallel threads concurrently. This architectural difference makes GPUs well suited to the highly parallel operations common in Spark workloads. By offloading such operations to GPUs, Spark can improve performance significantly, reducing query execution times substantially compared to CPU-only environments, typically accelerating data computing by 2x to 7x. This significantly reduces time to insight for organizations, making a noticeable difference.

In this regard, GPU acceleration in Apache Spark is a major advantage for data scientists as they transition from traditional analytics to AI applications. Standard Spark workloads are CPU-intensive; Spark’s distributed nature makes that computation extremely powerful, but it may still not be enough for AI-powered analytics workloads.

With GPUs, on the other hand, data scientists can work at higher speed – greater data scale, and improved efficiency. This means data scientists can iterate faster, explore data more interactively, and provide actionable insights in near real-time, which is critical in today’s fast-paced decision-making environments.

Alongside raw speed, GPU acceleration also simplifies the data science workflow by combining data engineering and machine learning workloads on a single platform. With GPU-accelerated Spark, users can efficiently perform data preparation, feature engineering, model training, and inference in one environment, without separate infrastructure or complicated data movement between systems. Consolidating workflows reduces operational complexity and speeds up end-to-end data science projects.

A third major advantage of using Spark on GPUs is that it reduces operational expenses. Given GPUs offer much greater throughput per machine, companies can achieve equal – or better – results with fewer servers. This keeps costs down, and reduces power consumption. This makes big-data analytics more affordable and sustainable – increasingly important areas for enterprises.

Finally, all of this is achievable without code rewriting or workflow modification, as technologies like NVIDIA RAPIDS smoothly integrate with Spark. Making adoption easier helps users to overcome a major barrier to unlocking the capabilities of GPUs, so they can prioritize rapid value delivery.

When should you rely on traditional CPUs?

It is important to note that not all workloads in Spark will benefit equally from GPU acceleration. 

Firstly, GPUs aren’t efficient for workloads on small data sets, since the overhead of transferring data between CPU and GPU memory can outweigh the benefit of GPU acceleration: small workloads simply don’t expose enough fine-grained parallelism to exploit the strengths of GPUs. Likewise, workloads that involve constant data shuffling within the cluster may not be well suited, because shuffling leads to costly data movement between CPU and GPU memory, effectively slowing down operations.

Another good reason to stick with CPUs is if your Spark jobs rely significantly on user-defined functions that are not supported or optimized for execution on GPUs.

Similarly, if your workloads entail operations that directly operate on Resilient Distributed Datasets (RDDs), GPUs might not be the best choice. This is because the RAPIDS Accelerator is currently not capable of handling these workloads and will run them on the CPU instead. Finally, you will also need to make sure that your environment meets the hardware and configuration requirements for GPU acceleration.

To find out whether GPU acceleration is useful in your chosen environment, it’s worth carefully profiling and benchmarking your workloads. 
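A quick way to do that, once the GPU-enabled setup from the next section is in place, is to run the same job twice and toggle the RAPIDS SQL acceleration with the spark.rapids.sql.enabled option, comparing wall-clock times. The sketch below uses a placeholder job file (my_job.py) and omits the other GPU submission options shown later for brevity:

# Baseline run: RAPIDS plugin loaded but SQL acceleration switched off (CPU execution)
time spark-client.spark-submit \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.rapids.sql.enabled=false \
    my_job.py

# GPU-accelerated run: same job with SQL acceleration left on (the default)
time spark-client.spark-submit \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.rapids.sql.enabled=true \
    my_job.py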

How to launch Spark jobs with GPUs

Our charm for Apache Spark works with Kubernetes as a cluster manager, so to enable GPUs on Apache Spark we will need to work with pods and containers.


First, you will need to deploy Charmed Apache Spark’s OCI image that supports the Apache Spark RAPIDS plugin. Read our guide to find out how.

Once you’ve completed the deployment and you’re ready to launch your first job, you’ll need to create a pod template that limits the number of GPUs per container. To do so, edit the pod manifest file (gpu_executor_template.yaml) by adding the following content:

apiVersion: v1
kind: Pod
spec:
  containers:
    - name: executor
      resources:
        limits:
          nvidia.com/gpu: 1

With the spark-client snap, we can submit the desired Spark job, adding some configuration options for enabling GPU acceleration:

spark-client.spark-submit \
    ... \ 
    --conf spark.executor.resource.gpu.amount=1 \
    --conf spark.task.resource.gpu.amount=1 \
    --conf spark.rapids.memory.pinnedPool.size=1G \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.executor.resource.gpu.discoveryScript=/opt/getGpusResources.sh \
    --conf spark.executor.resource.gpu.vendor=nvidia.com \
    --conf spark.kubernetes.container.image=ghcr.io/canonical/charmed-spark-gpu:3.4-22.04_edge \
    --conf spark.kubernetes.executor.podTemplateFile=gpu_executor_template.yaml \
    …

With the Spark Client snap, you can configure the Apache Spark settings at the service account level so they automatically apply to every job. Find out how to manage options at the service account level in our guide.
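As a rough sketch of what that can look like – the exact subcommand and flag names may vary between spark-client versions, so treat them as assumptions and follow the guide – the GPU options could be stored once against a dedicated service account and then inherited by every job submitted with it:

# Hypothetical sketch: persist the GPU options on a "spark-gpu" service account.
# Subcommand and flag names are assumptions – check the Charmed Spark documentation.
spark-client.service-account-registry create \
    --username spark-gpu \
    --namespace spark \
    --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    --conf spark.executor.resource.gpu.amount=1 \
    --conf spark.task.resource.gpu.amount=1 \
    --conf spark.kubernetes.executor.podTemplateFile=gpu_executor_template.yaml

# Jobs submitted with this account then pick up the settings automatically
spark-client.spark-submit --username spark-gpu --namespace spark my_job.py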

Spark with GPUs: the takeaway

In short, NVIDIA RAPIDS GPU acceleration gives Apache Spark an enormous performance boost, enabling faster data processing and cost savings without code changes. This means data scientists can process bigger data sets and heavier models more efficiently, generating insights faster than before. Not all workloads benefit equally, however: small data sets, excessive data shuffling, or unsupported functions can limit the advantages of GPUs. Careful profiling is needed to determine when GPUs are a cost-effective choice. Overall, Spark on GPUs offers a powerful way to accelerate data science and drive innovation.

26 June, 2025 02:55PM

Ubuntu Blog: Cut data center energy costs with bare metal automation

Data centers are popping up everywhere. With the rapid growth of AI, cloud services, streaming platforms, and connected devices, the demand for compute keeps growing – and data centers are in the middle of it all. But while they’re essential for the digital economy, their energy consumption poses a major challenge.

Unfortunately, a large part of data centers’ energy just isn’t used efficiently. A surprising amount goes into physical machines that are powered on but idle. And it’s not just wasteful – it’s also unnecessary, given that automation tools like Canonical MAAS (Metal as a Service) can make a real difference. In this blog, we’ll explore how to improve data center energy efficiency and reduce power waste through smart automation tools.

The energy challenge in modern data centers

It’s easy to see why data centers need so much power: servers, cooling systems, network infrastructure – it all adds up. And the more powerful the workloads are (think large-scale AI training or video processing), the more energy is needed to keep things running. In fact, some providers are going as far as building their own renewable energy plants just to keep up. 

But here’s the thing: not all of those servers need to be on 24/7.

In virtualized or containerized environments, orchestrators can handle scale automatically. Need more capacity? Spin up more VMs or containers. Traffic slows down? They scale back, saving resources. But physical machines don’t work that way – or at least, not out of the box. Most orchestration tools stop at the OS level and don’t control the power state of the underlying hardware.

So in many cases, physical servers stay powered on even when they’re not doing anything. Development machines run over weekends. Infrastructure for bursty workloads (like telco or streaming services) stays fully online, just in case demand spikes.

This kind of constant uptime drains your power budget and shortens your hardware’s lifespan. But with the right automation tools, there’s no reason bare metal can’t be just as dynamic and responsive as VMs.

When can machines be powered down?

Not every workload needs to run all the time. In fact, there are plenty of scenarios where it makes perfect sense to power down machines, if you have the tools to do it cleanly and bring them back up when needed.

Here are a few common examples:

Development and test environments

Many companies maintain dedicated physical environments for development, testing, or staging. But developers usually work weekdays, 9 to 5. That means those machines often sit idle overnight and throughout the weekend. With automation, these servers can be shut down during off-hours and brought back up when the team logs in Monday morning.

Bursty or on-demand services

Some workloads only spike at specific times. For instance, think of telecom platforms during major events or streaming services during prime time. During low-traffic periods, there’s no need to keep all infrastructure running at full capacity.

Scheduled jobs and batch processing

Some systems run on a schedule. Nightly data processing, weekly builds, or monthly reporting tasks are good examples. Instead of keeping the underlying hardware online 24/7, you can power it up just in time for the job, then shut it down afterward.

Labs and temporary environments

QA labs, customer demos, or sandbox environments often run temporarily but remain online out of habit or convenience. Automating their lifecycle ensures they’re only using resources when they’re actually needed.

These are just a few examples, but the core idea is simple: if a machine isn’t doing valuable work, it probably shouldn’t be on. That’s the kind of mindset that leads to real energy savings. And it’s exactly the kind of thing automation can handle, especially with a bare metal management tool.

The opportunity: smarter infrastructure management

So, if energy waste is such a common problem in data centers, why hasn’t it been fixed already?

A big part of the answer is tooling. Virtual machines and containers have powerful orchestration layers (such as Kubernetes, Juju, and so on) that can scale workloads automatically. But bare metal has traditionally lagged behind. Managing physical infrastructure has often meant static, manual provisioning, and once a server is on, it tends to stay on.

That’s a missed opportunity – and the best way to seize it is to use smart automation and management tools. Such tools typically allow you to:

  • Reclaim unused hardware quickly
  • Repurpose machines on demand
  • Provision and decommission systems based on real usage patterns

Bare metal automation opens the door to treating physical infrastructure more like cloud infrastructure: elastic, dynamic, and efficient. With the right tools in place, you can cut waste, reduce costs, and make better use of the hardware you already have.

That’s where Canonical MAAS comes into the picture, and how it fills the orchestration gap left by VM- and container-focused tools.

How bare metal automation solves the data center energy waste problem

Canonical MAAS brings cloud-like automation to bare metal. It provides full lifecycle machine management, from discovery and commissioning to OS deployment. Its API abstracts physical machines so that higher-level applications and orchestrators can treat them as if they were virtual machines.

And how can we use MAAS to solve the energy waste problem?

On-demand provisioning and deprovisioning

MAAS lets you treat physical servers like elastic resources. You can deploy a machine when it’s needed and release it when it’s not. Whether it’s powering down unused dev machines or spinning up nodes for a high-traffic window, MAAS makes it easy to manage physical infrastructure dynamically.
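As a minimal sketch with the MAAS CLI (the "admin" profile and the system ID are placeholders for your own environment), that lifecycle looks like this:

# Allocate a free machine and deploy an OS on it when capacity is needed
maas admin machines allocate
maas admin machine deploy <system_id>

# When the workload is done, release the machine back to the pool (which powers it down)
maas admin machine release <system_id>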

Power management through integrated APIs

MAAS supports a wide range of power management interfaces (IPMI, Redfish, and others), so it can turn machines on or off remotely. That means you can build automation around actual usage, powering down idle servers during off-hours and waking them up when needed.
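For example (a sketch – the profile name and machine ID are placeholders), the same power operations are exposed through the MAAS CLI, so a scheduled job can switch servers off in the evening and back on in the morning:

# Power a machine off at the end of the day...
maas admin machine power-off <system_id>

# ...and back on before the team starts work
maas admin machine power-on <system_id>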

Integration with orchestration tools

While Kubernetes handles your container workloads and Juju orchestrates applications across clouds, MAAS takes care of the underlying metal. This creates a complete stack where every layer can scale intelligently, from bare metal to application. With the right orchestration configuration, machines can be powered off when unused and brought back to life when their resources are needed again.

Scriptable and customizable

Do you want to build your own automation on top of MAAS? No problem. MAAS offers a REST API and CLI tools so you can script exactly how your infrastructure behaves. Want to schedule weekend shutdowns for certain machine pools? Or automatically decommission hardware after a job finishes? MAAS makes it possible.
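As one hedged example (the "dev" resource pool, the "admin" profile, and the jq-based filtering are assumptions about your environment, and the MAAS CLI must already be logged in), a pair of cron entries could power a whole pool down on Friday evening and back up on Monday morning:

# Friday 20:00 – power off every machine in the "dev" resource pool
0 20 * * 5  maas admin machines read | jq -r '.[] | select(.pool.name=="dev") | .system_id' | xargs -r -n1 maas admin machine power-off

# Monday 07:00 – power the same machines back on
0 7 * * 1   maas admin machines read | jq -r '.[] | select(.pool.name=="dev") | .system_id' | xargs -r -n1 maas admin machine power-on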

Example: Juju + MAAS in Action

Juju is widely used to deploy and manage applications across hybrid infrastructure. By integrating Juju with MAAS, you can extend this automation to physical servers’ power states.

How it works:

  1. Deploy your MAAS and Juju controllers.
  2. Add MAAS to Juju as a cloud provider.
  3. Now, Juju can deploy applications to the MAAS-managed bare metal servers.
  4. Create a centralized system script (see the sketch after this list) that:
    • Monitors machine states by polling Juju’s status or querying machines directly (e.g., via SSH or agent checks).
    • Powers off idle machines via the MAAS API when they meet criteria (e.g., no Juju workloads for >1 hour).
  5. This solution relies on MAAS + Juju auto-scaling to power machines back on when resources are needed (e.g., during juju deploy or add-unit).
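A minimal sketch of such a script is shown below. It assumes the juju and maas CLIs are already logged in, that jq is installed, that Juju machine hostnames match the MAAS hostnames, and that "no units assigned" is a good enough idleness criterion – adapt the field names, checks, and thresholds to your own environment:

#!/bin/bash
# Sketch: power off MAAS-managed machines that currently host no Juju units.
set -euo pipefail

STATUS=$(juju status --format=json)

# Machine IDs that currently host at least one unit
BUSY=$(echo "$STATUS" | jq -r '[.applications[]?.units[]?.machine] | unique | .[]')

# Walk over all Juju machines and power off the idle ones via MAAS
for id in $(echo "$STATUS" | jq -r '.machines | keys[]'); do
  if ! echo "$BUSY" | grep -qx "$id"; then
    host=$(echo "$STATUS" | jq -r --arg id "$id" '.machines[$id].hostname')
    system_id=$(maas admin machines read hostname="$host" | jq -r '.[0].system_id')
    if [ -n "$system_id" ] && [ "$system_id" != "null" ]; then
      echo "Powering off idle machine $host ($system_id)"
      maas admin machine power-off "$system_id"
    fi
  fi
done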

Result:

  • Servers only consume energy when actively used.
  • Teams retain on-demand access without manual intervention.
  • Energy costs drop significantly – especially for environments with predictable downtime (e.g., nights/weekends).

This solution mirrors the elasticity of the cloud but applies it to bare metal, closing the efficiency gap.

Use MAAS to cut your data center power bill today

Data centers don’t have to be power-hungry monsters. With smart automation, you can reduce energy waste and operational costs, and make your infrastructure greener, without sacrificing performance or flexibility.


The future of data centers is efficient, dynamic, and sustainable. With tools like MAAS, that future is within reach.

Ready to cut energy costs? Learn more about MAAS or contact our team for a custom solution.


26 June, 2025 02:38PM