September 07, 2024


Ubuntu developers

Faizul "Piju" 9M2PJU: Setting Up Your Own Mail Server with Mailcow and Docker: A Step-by-Step Guide

Managing your own email server can be a game-changer, especially if you are concerned about privacy, control, and customization. Enter Mailcow, a versatile open-source suite for managing your email that integrates beautifully with Docker. In this post, we’ll explore what Mailcow is, why you should consider using it, and a step-by-step guide to installing it using Docker.

What is Mailcow?

Mailcow is a modern, self-hosted mail server suite based on Docker. It bundles several popular open-source software components to provide a complete email solution. Mailcow is designed to be easy to deploy and maintain, allowing users to create a secure and feature-rich email environment with a clean and intuitive web interface.

Key Features of Mailcow:

  • Web-Based Management Interface: A user-friendly interface to manage users, domains, and settings.
  • Anti-Spam and Anti-Virus: Integrated tools like SpamAssassin and ClamAV for security.
  • DKIM, DMARC, and SPF Support: Helps in securing your email traffic and reducing spam.
  • Calendar and Contact Synchronization: Includes SOGo for calendaring, contact, and mail client support.
  • Integrated Backup: Automated and easy-to-manage backups.
  • Multiple Language Support: Supports a wide range of languages, making it accessible globally.

Why Use Mailcow?

There are several reasons to choose Mailcow as your email server:

  1. Control and Privacy: Keep your data private by hosting your own email service.
  2. Cost Efficiency: Save on the costs associated with paid email services.
  3. Feature-Rich: Offers all the features a modern email system should have.
  4. Community Support: As an open-source project, Mailcow has an active community providing support and updates.

How to Install Mailcow using Docker

Installing Mailcow using Docker simplifies the deployment process by isolating each component in its own container. Here’s a step-by-step guide to setting up Mailcow on a fresh Ubuntu or Debian server.

Step 1: System Requirements and Preparation

Before installing Mailcow, ensure you have the following:

  • A fresh server running Ubuntu 20.04 LTS or Debian 10/11.
  • At least 2 GB of RAM (recommended 4 GB or more).
  • 10 GB of disk space, although more is better for storing emails.
  • A fully qualified domain name (FQDN) for your server, e.g., mail.yourdomain.com.
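As a quick pre-flight check, you can confirm the host meets the RAM and disk requirements above before proceeding. This is a minimal sketch for a Linux host; the thresholds simply mirror the list above:

```shell
# Pre-flight check (sketch): compare available RAM and disk space
# against Mailcow's minimums listed above.
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "RAM: ${mem_mb} MB, free disk: ${disk_gb} GB"
[ "$mem_mb" -ge 2048 ] || echo "WARNING: less than 2 GB of RAM"
[ "$disk_gb" -ge 10 ] || echo "WARNING: less than 10 GB of free disk space"
```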

Ensure your system is up-to-date:

sudo apt update && sudo apt upgrade -y

Step 2: Install Docker and Docker Compose

Mailcow runs on Docker, so the first step is to install Docker and Docker Compose.

  1. Install Docker:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
  2. Install Docker Compose:
sudo apt install docker-compose -y
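Before moving on, it's worth confirming both tools actually landed on your PATH. A small sketch:

```shell
# Sanity check (sketch): confirm docker and docker-compose are installed.
for tool in docker docker-compose; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($("$tool" --version 2>/dev/null | head -n 1))"
  else
    echo "$tool: NOT found -- rerun the install step above"
  fi
done
```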

Step 3: Clone the Mailcow Repository

Now, download Mailcow from its GitHub repository to get the latest version:

git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized

Step 4: Configure Mailcow

  1. Generate Configuration Files:

Run the script to generate the necessary configuration files:

./generate_config.sh

You will be prompted to enter your server’s FQDN (e.g., mail.yourdomain.com).

  2. Adjust Configuration Files:

You can customize your configuration by editing the mailcow.conf file generated by the script.
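For illustration, here are a few of the options you are most likely to touch in mailcow.conf. This is a sketch with example values; consult the file generated on your own server for the authoritative list of keys:

```
# Illustrative mailcow.conf excerpt -- example values only
MAILCOW_HOSTNAME=mail.yourdomain.com   # the FQDN you entered during generation
TZ=Europe/Berlin                       # your server's timezone
HTTP_PORT=80                           # change if 80/443 are already in use
HTTPS_PORT=443
SKIP_LETS_ENCRYPT=n                    # set to y to disable automatic TLS certificates
```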

Step 5: Set Up DNS Records

To make your Mailcow server fully operational, set up the following DNS records for your domain:

  • A Record: Points your domain (e.g., mail.yourdomain.com) to your server’s IP address.
  • MX Record: Routes email for your domain to your mail server. Set it to mail.yourdomain.com.
  • SPF, DKIM, and DMARC Records: These records help in securing and authenticating your domain. Mailcow can help you generate these records.
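As a sketch, BIND-style zone entries for the records above might look like the following. The IP address and policy values are purely illustrative, and the DKIM TXT record value is generated for you inside the Mailcow admin interface:

```
; Illustrative zone-file entries for yourdomain.com (example values only)
mail.yourdomain.com.     IN  A    203.0.113.10
yourdomain.com.          IN  MX   10 mail.yourdomain.com.
yourdomain.com.          IN  TXT  "v=spf1 mx ~all"
_dmarc.yourdomain.com.   IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@yourdomain.com"
```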

Step 6: Start Mailcow

Now that everything is set up, it’s time to start Mailcow:

docker-compose pull
docker-compose up -d

These commands will pull the necessary Docker images and start the Mailcow services in detached mode.

Step 7: Access the Mailcow Web Interface

Once Mailcow is up and running, open your browser and navigate to https://mail.yourdomain.com. You should see the Mailcow login screen. The default credentials are:

  • Username: admin
  • Password: moohoo

For security reasons, change these credentials immediately after the first login.

Step 8: Configure Mailcow

From the Mailcow admin interface, you can add domains, create email accounts, set up DKIM signing, configure spam filtering, and much more. The web interface is intuitive and provides a straightforward way to manage your mail server.

Step 9: Set Up Backups and Maintenance

Mailcow provides built-in tools for backup and maintenance. You can configure automatic backups and manage them through the web interface to ensure your emails are safe and your server runs smoothly.
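For example, the mailcow-dockerized repository ships a helper script, helper-scripts/backup_and_restore.sh, that can be driven from cron. The schedule and paths below are illustrative only; check the script's options against your Mailcow version:

```
# Illustrative crontab entry (sketch): nightly Mailcow backup at 02:00
0 2 * * * cd /opt/mailcow-dockerized && MAILCOW_BACKUP_LOCATION=/var/backups/mailcow ./helper-scripts/backup_and_restore.sh backup all
```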

Conclusion

Mailcow offers an excellent solution for anyone looking to host their own email server. With Docker’s simplicity and Mailcow’s feature-rich environment, setting up and maintaining a self-hosted mail server has never been easier. Whether you are a business looking to control your communication or a tech enthusiast who loves having everything in-house, Mailcow is a fantastic option.

By following this guide, you’ll have your Mailcow server up and running in no time! Enjoy full control of your emails and the security that comes with it.

The post Setting Up Your Own Mail Server with Mailcow and Docker: A Step-by-Step Guide appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

07 September, 2024 08:29AM


Qubes

Qubes Canary 040

We have published Qubes Canary 040. The text of this canary and its accompanying cryptographic signatures are reproduced below. For an explanation of this announcement and instructions for authenticating this canary, please see the end of this announcement.

Qubes Canary 040


                    ---===[ Qubes Canary 040 ]===---


Statements
-----------

The Qubes security team members who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is September 06, 2024.

2. There have been 104 Qubes security bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

       427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
   Project (e.g. to hand out the private signing keys or to introduce
   backdoors).

5. We plan to publish the next of these canary statements in the first
   fourteen days of December 2024. Special note should be taken if no new
   canary is published by that time or if the list of statements changes
   without plausible explanation.


Special announcements
----------------------

None.


Disclaimers and notes
----------------------

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.

The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.


Proof of freshness
-------------------

Fri, 06 Sep 2024 03:19:20 +0000

Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Rmaych: A Christian Town Trapped between Hezbollah and Israel
DER SPIEGEL's Coverage of Donald Trump: We Have Failed to Tame the Media Monster
Interview with German Chancellor Olaf Scholz: "Pithy Sayings Are Not Part of My Approach to Politics"
War in Sudan: Soup Kitchens Fight against Looming Famine
Warsaw's Palace of Culture: From a Symbol of Oppression to a Symbol of Subversion

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Anti-Polio Campaign in Gaza Enters New Phase, Hours After Deadly Strike
Woman in France Testifies Against Husband Accused of Bringing Men to Rape Her
Boko Haram Kills at Least 170 Villagers in Nigeria Attack
German Police Shoot Gunman Dead Near Israeli Consulate in Munich
Pope Finds Fervent Fans Among Indonesia’s Transgender Community

Source: BBC News (https://feeds.bbci.co.uk/news/world/rss.xml)
‘Our future is over’: Forced to flee by a year of war
Father of suspect in Georgia school shooting arrested
Hunter Biden makes last-minute guilty plea in tax case
Telegram CEO Durov says his arrest 'misguided'
'Running for her family' - Olympian mourned after vicious attack

Source: Blockchain.info
000000000000000000016d3095d652dbcfd3f4323c3472470b2e0d6f0866774b


Footnotes
----------

[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: canary-040-2024.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmbbcV8ACgkQ1lWk8hgw
4Grqjw/+MOTIwNnZkO1oIsx6Qhq9ZeaCzG60q+k9sVy30gRIbAJtVY+rXlKdQiz8
cLq571iZk0SVkm7lH5aQpLKDu1gUNRbQyC+7xzdkvWpqGAVKbdhB2nUCv4yb5yyi
Wgf7zge3yqe2qVCkEowSVptWTKVepGKeQwH01LujmerpgaDv+RhNSEcA0PUVtsNG
vvviiXwBqUkrCFe2M5vsTVOTUd1T2vQscIg/sYDFVY1fv4MQy0KzTMcqYNWDiw1H
Ke+AJksfUccL1FBXFYtxhBrS4fPqGyb41lXF+ue8k8ixKc0dhGdLtAfOhYZF9RNu
7/OZtMlUKrSasrzaBOQRUQ08gG3Gcav4OD0GqAmvJMGdHRj3SuSBAhAuizXt3vkm
Vd2QKtUZ30jJ2OXDhwqm1x6PLXZHBymEvlaRhaIGOeHFKi3kX6izT0uB6uirOWnM
lBXtBnWMQSKbhD3o1EfL5LRolBEjhmyuT9HDruSIecc4vy6dH5tSdj2vptbk8M3Q
b8vO1pX63A7vyEgsaaELSH+SOznFxSxkNW0NiuIeWAS3zs1j4Cd8dY91LFGvUfSb
LMjD6pI/vkvBxfRq2dt2HIAGKz9P3BohVPY4wprRyukxw37W7r2MF08qenA0G3YP
bfF9ocjAe3sqzw1zm79doMGII5U7G15AQgeowXuVqd3B7yn1pHY=
=PSOA
-----END PGP SIGNATURE-----

Source: canary-040-2024.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmbcJoEACgkQSsGN4REu
FJA/PQ//acXenduLRFrCy43B7o9ThtPDuO++fJUCE8bbiamY5pbkyTIFnADkVago
3ToeWVhNvoQYtx0YmGCj4dHfUGIWgOP2CDD6QqSTBMzFxc3EBlw+icHnSf0TXqQV
IhTZ9VGAWnqmiE3VVnBMWOF+W+Lojq6UIzcR6OsKPR6PpqPfEXZxROSunuua1VCx
Shu7/bWESQjjwUtwFMb8FNuSczIJVU7Nn7E24t55yCWWJGFIPsyPkfircmez00kx
gmy7273Gu6OrknPqLSGQ0dewG8qnAeLBvr7kyz/BDzTOyp9Gpmw+cPhwv/FtYcHo
jnvSFD+Tyog+CgR7+u8cYyntfwcidnmfl2FoR9WQiSnzirXbwQXsInJJKm8TCgm9
84CsVfKQujT4W26H71GIxfpAS9+fplM9dD4soBJGsjlYrpRzuQz9569LyAz52qDE
Yzq519pd0LiSnIIY2tgNR6jZw8q0Ud+ORdxUQ2m7I/JH/VMuoszv9VGUUCPOj8Ib
WqfNxQrBwg/wFjfepw/KnbTnXB1NF64tiAYV73sddSE1jGPKTYhrApeako5UDZjv
YPVIZ4wF552wAQbDODQeu3xeRnXen1A+cnHetzNTULyLrzwDQ7O14v+wX2jvPXAH
9pDaUJVLM3OQiFr3Yq8gQLhSJyRSTu1LzE3SenZqTceuvQTmyqA=
=nnrv
-----END PGP SIGNATURE-----

Source: canary-040-2024.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes canary has been published.

What is a Qubes canary?

A Qubes canary is a security announcement periodically issued by the Qubes security team consisting of several statements to the effect that the signers of the canary have not been compromised. The idea is that, as long as signed canaries including such statements continue to be published, all is well. However, if the canaries should suddenly cease, if one or more signers begin declining to sign them, or if the included statements change significantly without plausible explanation, then this may indicate that something has gone wrong. A list of all canaries is available here.

The name originates from the practice in which miners would bring caged canaries into coal mines. If the level of methane gas in the mine reached a dangerous level, the canary would die, indicating to miners that they should evacuate. (See the Wikipedia article on warrant canaries for more information, but bear in mind that Qubes Canaries are not strictly limited to legal warrants.)

Why should I care about canaries?

Canaries provide an important indication about the security status of the project. If the canary is healthy, it’s a strong sign that things are running normally. However, if the canary is unhealthy, it could mean that the project or its members are being coerced in some way.

What are some signs of an unhealthy canary?

Here is a non-exhaustive list of examples:

  • Dead canary. In each canary, we state a window of time during which you should expect the next canary to be published. If no canary is published within that window of time and no good explanation is provided for missing the deadline, then the canary has died.
  • Missing statement(s). Every canary contains the same set of statements (sometimes along with special announcements, which are not the same in every canary). If an important statement was present in older canaries but suddenly goes missing from new canaries with no correction or explanation, then this may be an indication that the signers can no longer truthfully make that statement.
  • Missing signature(s). Qubes canaries are signed by the members of the Qubes security team (see below). If one of them has been signing all canaries but suddenly and permanently stops signing new canaries without any explanation, then this may indicate that this person is under duress or can no longer truthfully sign the statements contained in the canary.

Do I need to worry about every unusual canary-related event?

No, there are many canary-related possibilities that should not worry you. Here is a non-exhaustive list of examples:

  • Unusual reposts. The only canaries that matter are the ones that are validly signed in the Qubes security pack (qubes-secpack). Reposts of canaries (like the one in this announcement) do not have any authority (except insofar as they reproduce validly-signed text from the qubes-secpack). If the actual canary in the qubes-secpack is healthy, but reposts are late, absent, or modified on the website, mailing lists, forum, or social media platforms, you should not be concerned about the canary.
  • Last-minute signature(s). If the canary is signed at the last minute but before the deadline, that’s okay. (People get busy and procrastinate sometimes.)
  • Signatures at different times. If one signature is earlier or later than the other, but both are present within a reasonable period of time, that’s okay. (For example, sometimes one signer is out of town, but we try to plan the deadlines around this.)
  • Permitted changes. If something about a canary changes without violating any of the statements in prior canaries, that’s okay. (For example, canaries are usually scheduled for the first fourteen days of a given month, but there’s no rule that says they have to be.)
  • Unusual but planned changes. If something unusual happens, but it was announced in advance, and the appropriate statements are signed, that’s okay (e.g., when Joanna left the security team and Simon joined it).

In general, it would not be realistic for an organization to exist that never changed, had zero turnover, and never made mistakes. Therefore, it would be reasonable to expect such events to occur periodically, and it would be unreasonable to regard every unusual or unexpected canary-related event as a sign of compromise. For example, if something unusual happens with a canary, and we say it was a mistake and correct it, you will have to decide for yourself whether it’s more likely that it really was just a mistake or that something is wrong and that this is how we chose to send you a subtle signal about it. This will require you to think carefully about which among many possible scenarios is most likely given the evidence available to you. Since this is fundamentally a matter of judgment, canaries are ultimately a social scheme, not a technical one.

What are the PGP signatures that accompany canaries?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all canaries so that Qubes users have a reliable way to check whether canaries are genuine. The only way to be certain that a canary is authentic is by verifying its PGP signatures.

Why should I care whether a canary is authentic?

If you fail to notice that a canary is unhealthy or has died, you may continue to trust the Qubes security team even after they have signaled via the canary (or lack thereof) that they have been compromised or coerced. Falsified canaries could include manipulated text designed to sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a canary?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

    The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (Qubes Canary 040), the commands are:

$ gpg --verify canary-040-2024.txt.sig.marmarek canary-040-2024.txt
$ gpg --verify canary-040-2024.txt.sig.simon canary-040-2024.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the Qubes Canary 040 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

07 September, 2024 12:00AM

September 06, 2024


Ubuntu developers

Ubuntu Blog: A desktop touched by Midas: Oracular Oriole


In the poem “To an Oriole” [1], novelist and poet Edgar Fawcett inquires about the origin and nature of the oriole. He likens the northern songbird to a “scrap of sunset with a voice” and an orange tulip in a forgotten garden that was magically transformed. This type of poetic and mystical imagery around orioles can be found throughout classical literature and mythology. In many cultures, their presence was considered a divine and prophetic symbol of creativity, prosperity and community. These three virtues are rather synonymous with the spirit of Ubuntu and therefore we wanted to ensure our muse was given an artistic interpretation befitting its oracular aura.

Ubuntu 24.10 aims to capture the bird “that Midas touched” [2] in its official wallpaper:

[Official Oracular Oriole wallpaper in its colour, dark, dimmed, and light variants]

You can download the official Oracular Oriole wallpapers in a variety of colors, shapes, and sizes here.

As above, so below

When tasked with depicting our aureate mascot, the Design team at Canonical found inspiration from astrological iconography. Marcus Haslam, the Magician, explains:

“Incorporating the adjective (Oracular) with the Oriole was indeed a challenge.  I wanted this to be in keeping with the brand elements we have in our design system. The use of the circle, line illustrations and color palette fit in well and a good starting point.  My first task was building the Oriole bird out of concentric circles, this would form the centerpiece. As it progressed I added other elements to the design to bring in “Oracular.” The (all seeing) eye with the circle of friends inside was a gift, and the crystal ball added a playful aspect to the mascot.  I added planets around the outer edge to complete the Ubuntu world. When animated these move into position with a nod to the circle of friends logo.”

“Designing the mascot I always had the animation in the back of my mind and how it may work. The concentric circles  proved a very useful asset, it created a nice rhythm and movement for this Oracular mythical world.”

Vicennium visions

As is tradition, our official wallpaper is accompanied by artwork submitted and voted upon by the community. The categories for this release are: Mascot, Digital Art, Photography, and 20th Anniversary. Our many artists, photographers, and pixel wizards provided dozens of beautiful submissions, with two chosen from each category for inclusion. It’s our great honor to present the winners of the Oracular Oriole wallpaper competition:

Mascot Category


Oriole Mascot (Light/Dark) – @moskalenko-v


The Oracular Oriole – @meetdilip

Digital Art Category

<noscript> <img alt="" height="340" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_606,h_340/https://lh7-rt.googleusercontent.com/docsz/AD_4nXdUYfWOSE6cWaJoTr6eBoiurPtTGMwqPPIfa4Hfv4MHjTjGd9kYjtOQL0jAC6F6btMoTD-4dYxr20F5DiOEqWvmFlPaCbbVJToZhZPVPdnpDo55_Z66ZXdh0XJG4vG51vno22Ml7h6jVcJ2TBGkS10AeT7J?key=8j47dC1CKK-Vw9W2IUOTYA" width="606" /> </noscript>

Einsamer Raum – @orbitelambda

<noscript> <img alt="" height="339" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_604,h_339/https://lh7-rt.googleusercontent.com/docsz/AD_4nXcAy705D5J3jI5_0GFDBTwqgprrqtiIpQYE3Z40F7ugfjZR5TbxVtJ7pdYOF0nXIv_h5JrsshG_8m3bdqM4_9eYkDR7W2L6G9vxPhabgOXNGq7ghGjqUf3iyXs828LeG6kTyb3wzZ7Tp9t8cA39vSkOKvQ?key=8j47dC1CKK-Vw9W2IUOTYA" width="604" /> </noscript>

Arizona Nacht – @orbitelambda

Photography Category

<noscript> <img alt="" height="455" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_607,h_455/https://lh7-rt.googleusercontent.com/docsz/AD_4nXfmZ1hwaP3T0fg3WFuhLGPBYnX9IrTtAxz2_pNhNw34Xe8iMu6xrw6Q90L_6xRdm-3HxjLD7JI4rxvPRDTs-ZhLXayxRinJ3Y6aVlTqRkvVzH7pT138GoX_EKUdu6nIod-eNJSSR0dqikzqNigf5R4XQjII?key=8j47dC1CKK-Vw9W2IUOTYA" width="607" /> </noscript>

schabing 2 – @schabing

<noscript> <img alt="" height="456" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_607,h_456/https://lh7-rt.googleusercontent.com/docsz/AD_4nXdKX-BuYM3bYo2yvKatWI2erdCCqIqCExdLE90mjsB3DnrkjdvW27trotd2yW1yaNV6EdJhj2cXAl5lWJqos2GXngUuOAr6_JMUH5GcIDM6YH0pOgfiTWaNdRzGp50RhVxESI44G-XkNfffVQIN-eC-LnHK?key=8j47dC1CKK-Vw9W2IUOTYA" width="607" /> </noscript>

Sunset – @gafreax

20th Anniversary Category

<noscript> <img alt="" height="342" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_609,h_342/https://lh7-rt.googleusercontent.com/docsz/AD_4nXfY1XhjQDAx-W8Io74et4Y0_BHzE4AU9PY0fUSa_Ztf6Y9-JtwSRgWtXuSUsyGoFuyQxNuAtjshRzAb0GNYZQZnjlCbdR-WYxHQ0Zw_-AnFzxnFTA40tLJXiCJkcgdYOO9yohVhVc870CdfTq36ENKMXzt9?key=8j47dC1CKK-Vw9W2IUOTYA" width="609" /> </noscript>

Legacy (Light/Dark) – @aaronprisk

<noscript> <img alt="" height="342" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_608,h_342/https://lh7-rt.googleusercontent.com/docsz/AD_4nXdloAMrl0_Wa7eEJWoOGYxojzZMdlW6c-nOYdfMddLrB-JGSh8w6LYafiENmWARBmoPpV9AKBzixtS0aKm_vqEpxCchau_rPOUw1DG2gn85Tapp_SBR0CV99JhH3OaE0LzE8pfWduR610uM3ovuRIPc5vuf?key=8j47dC1CKK-Vw9W2IUOTYA" width="608" /> </noscript>

Warty Remastered – @romactu1

Getting Involved

The Ubuntu Wallpaper Competition is just one of many ways you can contribute to the Ubuntu community. Discover how you can get involved at: https://ubuntu.com/community/contribute

References

[1] https://allpoetry.com/To-An-Oriole

[2] https://en.wikisource.org/wiki/Poems:_Second_Series_(Dickinson)/The_Oriole

06 September, 2024 01:41PM

Alan Pope: Windows 3.11 on QEMU 5.2.0

This is mostly an informational PSA for anyone struggling to get Windows 3.11 working in modern versions of QEMU. Yeah, I know, not exactly a massively viral target audience.

Anyway, short answer, use QEMU 5.2.0 from December 2020 to run Windows 3.11 from November 1993.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

An innocent beginning

I made a harmless jokey reply to a toot from Thom at OSNews, lamenting the lack of native Mastodon client for Windows 3.11.

When I saw Thom’s toot, I couldn’t resist, and booted a Windows 3.11 VM that I’d installed six weeks ago, manually from floppy disk images of MSDOS and Windows.

I already had Lotus Organiser installed to post a little bit of nostalgia-farming on threads - it’s what they do over there.

Post by @popey
View on Threads

I thought it might be fun to post a jokey diary entry. I hurriedly made my silly post five minutes after Thom’s toot, expecting not to think about this again.

Incorrect, brain

I shut the VM down, then went to get coffee, chuckling to my smart, smug self about my successful nerdy rapid-response. While the kettle boiled, I started pondering: “Wait, if I really did want to make a Mastodon client for Windows 3.11, how would I do it?”

I pondered and dismissed numerous shortcuts, including, but not limited to:

  • Fake it with screenshots doctored in MS Paint
  • Run an existing DOS Mastodon Client in a Window
  • Use the Windows Telnet client to connect insecurely to my laptop running the Linux command-line Mastodon client, Toot
  • Set up a proxy through which I could get to a Mastodon web page

I pondered a different way, in which I’d build a very simple proof of concept native Windows client, and leverage the Mastodon API. I’m not proficient in (m)any programming languages, but felt something like Turbo Pascal was time-appropriate and roughly within my capabilities.
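Whatever the language, the Mastodon side of such a client boils down to a handful of authenticated HTTP calls. As a reference point, here is a small sketch of the documented “post a status” endpoint (`POST /api/v1/statuses`); the instance URL and token are placeholders, and the helper just prints the request a real client would send rather than sending it:

```shell
# Sketch of the one API call a minimal posting client needs: Mastodon's
# POST /api/v1/statuses endpoint with a bearer token. INSTANCE and TOKEN
# are placeholders; this prints the equivalent curl invocation.
mastodon_post_cmd() {
  local instance="$1" token="$2" text="$3"
  printf 'curl -H "Authorization: Bearer %s" -d "status=%s" %s/api/v1/statuses\n' \
    "$token" "$text" "$instance"
}

mastodon_post_cmd "https://example.social" "TOKEN" "Greetings from Windows 3.11"
```

A real Windows 3.11 client would of course have to make that HTTPS call somehow, which is exactly where the fun (and the pain) would be.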

Diversion

My mind settled on Borland Delphi, which I’d never used, but looked similar enough for a silly project to Borland Turbo Pascal 7.0 for DOS, which I had. So I set about installing Borland Delphi 1.0 from fifteen (virtual) floppy disks, onto my Windows 3.11 “Workstation” VM.

Windows 3.11, with a Borland Delphi window open

Thank you, whoever added the change floppy0 option to the QEMU Monitor. That saved a lot of time, reducing the process to repeating this loop fourteen times:

"Please insert disk 2"
CTRL+ALT+2
(qemu) change floppy0 Disk02.img
CTRL+ALT+1
[ENTER]
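The fourteen monitor commands are predictable enough to generate rather than type. A small sketch (assuming the images are named Disk02.img through Disk15.img as in the install guide); the output could be piped into the QEMU monitor, for example via `-monitor unix:/tmp/mon.sock,server,nowait` and socat, pausing between disks as the installer prompts:

```shell
# Generate the "change floppy0" monitor commands for disks 2..N,
# matching image names Disk02.img, Disk03.img, and so on.
floppy_commands() {
  local n
  for n in $(seq 2 "$1"); do
    printf 'change floppy0 Disk%02d.img\n' "$n"
  done
}

floppy_commands 15
```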

During my research for this blog, I found a delightful, nearly decade-old video of David Intersimone (“David I”) running Borland Delphi 1 on Windows 3.11. David makes it all look so easy. Watch this to get a moving-pictures-with-sound idea of what I was looking at in my VM.

Once Delphi was installed, I started pondering the network design. But that thought wasn’t resident in my head for long, because it was immediately replaced with the reason why I didn’t use that Windows 3.11 VM much beyond the original base install.

The networking stack doesn’t work. Or at least, it didn’t.

That could be a problem.

Retro spelunking

I originally installed the VM by following this guide, which is notable for having additional flourishes like mouse, sound, and SVGA support, as well as TCP/IP networking. Unfortunately, I couldn’t initially get the network stack working, as Windows 3.11 would hang on a black screen after the familiar OS splash image.

Looking back to my silly joke, those 16-bit Windows-based Mastodon dreams quickly turned to dust when I realised I wouldn’t get far without an IP address in the VM.

Hopes raised

After some digging in the depths of retro forums, I stumbled on a four-year-old repo maintained by Jaap Joris Vens.

Here’s a fully configured Windows 3.11 machine with a working internet connection and a load of software, games, and of course Microsoft BOB 🤓

Jaap Joris published this ready-to-go Windows 3.11 hard disk image for QEMU, chock full of games, utilities, and drivers. I thought that perhaps their image was configured differently, and thus worked.

However, after downloading it, I got the same “black screen after splash” as with my image. Other retro enthusiasts had the same issue, and reported the details on this issue, about a year ago.

does not work, black screen.

It works for me and many others. Have you followed the instructions? At which point do you see the black screen?

The key to finding the solution was a comment from Jaap Joris pointing out that the disk image “hasn’t changed since it was first committed 3 years ago”, implying it must have worked back then, but doesn’t now.

Joy of Open Source

I figured that if the original uploader had at least some success when the image was created and uploaded, it is indeed likely QEMU or some other component it uses may have (been) broken in the meantime.

So I went rummaging in the source archives, looking for the most recent release of QEMU, immediately prior to the upload. QEMU 5.2.0 looked like a good candidate, dated 8th December 2020, a solid month before 18th January 2021 when the hda.img file was uploaded.

If you build it, they will run

It didn’t take long to compile QEMU 5.2.0 on my ThinkPad Z13 running Ubuntu 24.04.1. It went something like this: I presumed that getting the build dependencies for whatever is the current QEMU version in the Ubuntu repo today would get me most of the requirements.

$ sudo apt-get build-dep qemu
$ mkdir qemu
$ cd qemu
$ wget https://download.qemu.org/qemu-5.2.0.tar.xz
$ tar xvf qemu-5.2.0.tar.xz
$ cd qemu-5.2.0
$ ./configure
$ make -j$(nproc)

That was pretty much it. The build ran for a while, and out popped binaries and the other stuff you need to emulate an old OS. I copied the bits required directly to where I already had put Jaap Joris’ hda.img and start script.

$ cd build
$ cp qemu-system-i386 efi-rtl8139.rom efi-e1000.rom efi-ne2k_pci.rom kvmvapic.bin vgabios-cirrus.bin vgabios-stdvga.bin vgabios-vmware.bin bios-256k.bin ~/VMs/windows-3.1/

I then tweaked the start script to launch the local home-compiled qemu-system-i386 binary, rather than the one in the path, supplied by the distro:

$ cat start
#!/bin/bash
./qemu-system-i386 -nic user,ipv6=off,model=ne2k_pci -drive format=raw,file=hda.img -vga cirrus -device sb16 -display gtk,zoom-to-fit=on

This worked a treat. You can probably make out in the screenshot below, that I’m using Internet Explorer 5 to visit the GitHub issue which kinda renders when proxied via FrogFind by Action Retro.

Windows 3.11, at 1280x1024, running Internet Explorer 5, looking at a GitHub issue

Share…

I briefly toyed with the idea of building a deb of this version of QEMU for a few modern Ubuntu releases and throwing that in a Launchpad PPA, then realised I’d need to make sure the name didn’t collide with the packaged QEMU in Ubuntu.

I honestly couldn’t be bothered to go through the pain of effectively renaming (forking) QEMU to something like OLDQEMU so as not to damage existing installs. I’m sure someone could do it if they tried, but I suspect it would be quite a search-and-replace job, or a case of moving the binaries somewhere under /opt. Too much effort for my brain.

I then started building a snap of qemu as oldqemu - which wouldn’t require any “real” forking or renaming. The snap could be called oldqemu but still contain qemu-system-i386 which wouldn’t clash with any existing binaries of the same name as they’d be self-contained inside the compressed snap, and would be launched as oldqemu.qemu-system-i386.

That would make for one package to maintain rather than one per release of Ubuntu. (Which is, as I am sure everyone is aware, one of the primary advantages of making snaps instead of debs in the first place.)

Anyway, I got stuck with another technical challenge in the time I allowed myself to make the oldqemu snap. I might re-visit it, especially as I could leverage the Launchpad Build farm to make multiple architecture builds for me to share.

…or not

In the meantime, the instructions are above, and also (roughly) in the comment I left on the issue, which has kindly been re-opened.

Now, about that Windows 3.11 Mastodon client…

06 September, 2024 01:40PM

hackergotchi for Grml

Grml

Migrated Git and Wiki services

For the last 16 years (since 2008!) we hosted our own Git infrastructure and had a mirror of our Git repositories at GitHub.com.

While running your own infrastructure clearly has its benefits, it also requires maintenance effort, for which we no longer really have the workforce nor the enjoyment we used to have. We also appreciate the social effects you get from platforms like GitHub. We therefore decided to switch to GitHub as our primary Git hosting place. Over the last days, we migrated from git.grml.org to github.com by putting appropriate URL rewrites into place.
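Such rewrites typically look something like the following. This is a hypothetical nginx sketch, not the actual Grml configuration (which isn't shown here), and it assumes old repository paths map 1:1 onto github.com/grml:

```nginx
# Hypothetical: permanently redirect every old git.grml.org URL to GitHub,
# assuming repository paths map directly onto github.com/grml.
server {
    server_name git.grml.org;
    return 301 https://github.com/grml$request_uri;
}
```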

We also used to host our own DokuWiki, for more than 19(!) years. In the last few years we didn't have any actual wiki changes, so we decided to migrate this over to GitHub as well, again putting appropriate URL rewrites into place.

Now all our Grml Git repositories can be found at github.com/grml, and our Wiki is available at github.com/grml/grml/wiki/.

If you should notice any problems with any of our services, please reach out.

06 September, 2024 01:37PM by Michael Prokop (nospam@example.com)

hackergotchi for Deepin

Deepin

(Chinese) How to run deepin 23 on the Xiaomi Pad 5?

Sorry, this entry is only available in Chinese.

06 September, 2024 02:35AM by aida

From AMD64 to RISC-V, LoongArch, and ARM64: The Multi-Architecture Adaptation Journey of deepin

Author: longlong This article is a full transcription of longlong’s speech at WHLUG, so there are some informal expressions. The content only represents personal views and positions. As the successor to deepin 20, one of the biggest changes in deepin 23 is the addition of multi-architecture support: from originally only supporting the AMD64 architecture to now supporting multiple CPU architecture platforms including AMD64, RISC-V, LoongArch (New World), and ARM64. Currently, the stable image for AMD64 architecture of deepin 23 has been released, while the images for other CPU architectures are still in the preview version stage as the ecosystem is ...Read more

06 September, 2024 02:15AM by aida

September 05, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Salih Emin: uCareSystem 24.09.05: Olympic level removal of garbage configs

uCareSystem has long had the ability to detect packages that were uninstalled and then remove their leftover config files. It now uses a better method that detects more of them. This release also brings fixes and enhancements that make it even more useful. First of all, it’s the Olympics… you saw the app icon that was changed […]
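On Debian-based systems, the usual mechanism behind "remove config files of uninstalled packages" is dpkg's "rc" state, which marks packages that were removed but not purged. This is an assumption about uCareSystem's approach rather than its actual code, but a minimal version of the idea looks like this:

```shell
# Packages that were removed but whose config files remain are listed by
# dpkg with state "rc". This helper extracts their names from "dpkg -l"
# output; they could then be purged, e.g.:
#   dpkg -l | list_rc_packages | xargs -r sudo apt-get -y purge
list_rc_packages() {
  awk '/^rc/ {print $2}'
}
```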

05 September, 2024 09:09PM

hackergotchi for Elive

Elive

Elive 3.8.44 released

The Elive Team is pleased to announce the release of 3.8.44. This new version includes:
  • Login Manager: now features a blurred version of your current wallpaper.
  • Firmwares: updated from the testing branch of Debian.
  • Audio card detection: much improved.
  • Flatpaks & Snaps: directly supported if selected during installation.
  • PKGBUILD: package support included.
  • AppImages: improved support and integration.
  • Proofreading: massively improved across the entire system.
  • Conky: configuration improvements for the header.
Version details:
  • Kernels: 6.10.6, 6.1.0
  • Debian base: Bookworm + backports
  • Sizes: 3.6 GB, 2.9 GB
  • Pre-cached (offline) packages: Yes

Check more in the Elive Linux website.

05 September, 2024 08:07PM by Thanatermesis

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E314 Rute Correia II

The conversation with Rute Correia, unconditional Pusheen fan, continues. She tells us that consoles have become outrageously expensive; how the Steam Deck is a cuddly beast of a machine, thanks to the customization possibilities of its software and open interface - or even with knitted cases that make everything extra fancy. We talked about the XBOX, the Vita, how the SEGA Dreamcast was ahead of its time, and we even called on manufacturers to bring back the colorful, translucent look of the 90s instead of making everything black.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

05 September, 2024 12:00AM

September 04, 2024

hackergotchi for Proxmox VE

Proxmox VE

Proxmox at Open Source Summit 2024 in Vienna

Meet Proxmox at the Open Source Summit in Vienna from 16-18 September 2024 at our booth G/S10.

Open Source Summit is the premier event for open source developers, technologists, and community leaders to collaborate, share information, solve problems and gain knowledge, furthering open source innovation and ensuring a sustainable open source ecosystem. It is the gathering place for open source code and community contributors.

Proxmox Booth:
  • WHEN: September 16-18, 2024...

Read more

04 September, 2024 01:01PM by t.lamprecht (invalid@example.com)

hackergotchi for Pardus

Pardus

New Updates Released for Pardus 21 and 23

Keeping your Pardus 21 or 23 system up to date is all you need to do to receive these changes.

Updates will arrive on your existing Pardus system as notifications. You can update your system using the Pardus Updater application.

Main Changes

Pardus 21:
  • Package updates with the necessary patches and maintenance were included, alongside the Debian 11.11 updates.
  • Updates to Pardus applications and third-party applications were included.
  • Security updates were released.
  • The kernel was upgraded to version 5.10.0-32.
  • Updates comprising more than 50 packages and patches were delivered to installed systems.
  • More than 500 packages were updated in the repository.
  • The current ISO was added to the “Weekly Releases” page.
Pardus 23:
  • Package updates with the necessary patches and maintenance were included, alongside the Debian 12.7 updates.
  • The default web browser, Firefox, was upgraded to version 115.15.
  • The kernel was upgraded to version 6.1.0-25.
  • Improvements were made to the Pardus Updater and Pardus Software Center applications.
  • Third-party applications were updated.
  • Security updates were released.
  • Updates comprising more than 100 packages and patches were delivered to installed systems.
  • More than 1000 packages were updated in the repository.
  • The current ISO was added to the “Weekly Releases” page.
Pardus 21 updated packages:
Package Name New Version Old Version
amd64-microcode 3.20240820.1~deb11u1 3.20230808.1.1~deb11u1
apache2-bin 2.4.62-1~deb11u1 2.4.61-1~deb11u1
base-files 12pardus21.5.4 12pardus21.5.3
bind9-dnsutils 1:9.16.50-1~deb11u2 1:9.16.50-1~deb11u1
bind9-host 1:9.16.50-1~deb11u2 1:9.16.50-1~deb11u1
bind9-libs 1:9.16.50-1~deb11u2 1:9.16.50-1~deb11u1
cups-client 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-common 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-core-drivers 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-daemon 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-ipp-utils 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-ppdc 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups-server-common 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
cups 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
exfatprogs 1.1.0-1+deb11u1 1.1.0-1
gir1.2-gtk-3.0 3.24.24-4+deb11u4 3.24.24-4+deb11u3
gir1.2-javascriptcoregtk-4.0 2.44.3-1~deb11u1 2.44.2-1~deb11u1
gir1.2-webkit2-4.0 2.44.3-1~deb11u1 2.44.2-1~deb11u1
graphviz 2.42.2-5+deb11u1 2.42.2-5
gtk2-engines-pixbuf 2.24.33-2+deb11u1 2.24.33-2
gtk-update-icon-cache 3.24.24-4+deb11u4 3.24.24-4+deb11u3
intel-microcode 3.20240813.1~deb11u1 3.20240514.1~deb11u1
libc6-dev 2.31-13+deb11u11 2.31-13+deb11u10
libc6 2.31-13+deb11u11 2.31-13+deb11u10
libc-bin 2.31-13+deb11u11 2.31-13+deb11u10
libc-dev-bin 2.31-13+deb11u11 2.31-13+deb11u10
libc-devtools 2.31-13+deb11u11 2.31-13+deb11u10
libcdt5 2.42.2-5+deb11u1 2.42.2-5
libcgraph6 2.42.2-5+deb11u1 2.42.2-5
libc-l10n 2.31-13+deb11u11 2.31-13+deb11u10
libcups2 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
libcupsimage2 2.3.3op2-3+deb11u8 2.3.3op2-3+deb11u6
libcurl3-gnutls 7.74.0-1.3+deb11u13 7.74.0-1.3+deb11u12
libcurl4 7.74.0-1.3+deb11u13 7.74.0-1.3+deb11u12
libgail18 2.24.33-2+deb11u1 2.24.33-2
libgail-3-0 3.24.24-4+deb11u4 3.24.24-4+deb11u3
libgail-common 2.24.33-2+deb11u1 2.24.33-2
libgtk2.0-0 2.24.33-2+deb11u1 2.24.33-2
libgtk2.0-bin 2.24.33-2+deb11u1 2.24.33-2
libgtk2.0-common 2.24.33-2+deb11u1 2.24.33-2
libgtk-3-0 3.24.24-4+deb11u4 3.24.24-4+deb11u3
libgtk-3-bin 3.24.24-4+deb11u4 3.24.24-4+deb11u3
libgtk-3-common 3.24.24-4+deb11u4 3.24.24-4+deb11u3
libgvc6 2.42.2-5+deb11u1 2.42.2-5
libgvpr2 2.42.2-5+deb11u1 2.42.2-5
libjavascriptcoregtk-4.0-18 2.44.3-1~deb11u1 2.44.2-1~deb11u1
liblab-gamut1 2.42.2-5+deb11u1 2.42.2-5
libnss-myhostname 247.3-7+deb11u6 247.3-7+deb11u5
libnss-systemd 247.3-7+deb11u6 247.3-7+deb11u5
libntfs-3g883 1:2017.3.23AR.3-4+deb11u4 1:2017.3.23AR.3-4+deb11u3
libpam-systemd 247.3-7+deb11u6 247.3-7+deb11u5
libpathplan4 2.42.2-5+deb11u1 2.42.2-5
libruby2.7 2.7.4-1+deb11u2 2.7.4-1+deb11u1
libsystemd0 247.3-7+deb11u6 247.3-7+deb11u5
libtommath1 1.2.0-6+deb11u1 1.2.0-6
libudev1 247.3-7+deb11u6 247.3-7+deb11u5
libwebkit2gtk-4.0-37 2.44.3-1~deb11u1 2.44.2-1~deb11u1
libxml2 2.9.10+dfsg-6.7+deb11u5 2.9.10+dfsg-6.7+deb11u4
locales 2.31-13+deb11u11 2.31-13+deb11u10
net-tools 1.60+git20181103.0eebece-1+deb11u1 1.60+git20181103.0eebece-1
ntfs-3g 1:2017.3.23AR.3-4+deb11u4 1:2017.3.23AR.3-4+deb11u3
pardus-software 0.8.1 0.8.0
pardus-update 0.5.0 0.3.0
ruby2.7 2.7.4-1+deb11u2 2.7.4-1+deb11u1
systemd-sysv 247.3-7+deb11u6 247.3-7+deb11u5
systemd-timesyncd 247.3-7+deb11u6 247.3-7+deb11u5
systemd 247.3-7+deb11u6 247.3-7+deb11u5
udev 247.3-7+deb11u6 247.3-7+deb11u5
usb.ids 2024.07.04-0+deb11u1 2024.01.20-0+deb11u1
webkit2gtk-driver 2.44.3-1~deb11u1 2.44.2-1~deb11u1
Pardus 23 updated packages:
Package Name New Version Old Version
amd64-microcode 3.20240820.1~deb12u1 3.20230808.1.1~deb12u1
apache2-bin 2.4.62-1~deb12u1 2.4.61-1~deb12u1
base-files 12.4+pardus23.2.1 12.4+pardus23.2
bind9-dnsutils 1:9.18.28-1~deb12u2 1:9.18.24-1
bind9-host 1:9.18.28-1~deb12u2 1:9.18.24-1
bind9-libs 1:9.18.28-1~deb12u2 1:9.18.24-1
bubblewrap 0.8.0-2+deb12u1 0.8.0-2
cups-client 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-common 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-core-drivers 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-daemon 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-ipp-utils 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-ppdc 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups-server-common 2.4.2-3+deb12u7 2.4.2-3+deb12u5
cups 2.4.2-3+deb12u7 2.4.2-3+deb12u5
curl 7.88.1-10+deb12u7 7.88.1-10+deb12u6
firefox-esr-l10n-tr 115.15.0esr-1~deb12u1 115.13.0esr-1~deb12u1
firefox-esr 115.15.0esr-1~deb12u1 115.13.0esr-1~deb12u1
fonts-opensymbol 4:102.12+LibO7.4.7-1+deb12u4 4:102.12+LibO7.4.7-1+deb12u3
ghostscript 10.0.0~dfsg-11+deb12u5 10.0.0~dfsg-11+deb12u4
gir1.2-gtk-3.0 3.24.38-2~deb12u2 3.24.38-2~deb12u1
gir1.2-javascriptcoregtk-4.0 2.44.3-1~deb12u1 2.44.2-1~deb12u1
gir1.2-javascriptcoregtk-4.1 2.44.3-1~deb12u1 2.44.2-1~deb12u1
gir1.2-webkit2-4.0 2.44.3-1~deb12u1 2.44.2-1~deb12u1
gir1.2-webkit2-4.1 2.44.3-1~deb12u1 2.44.2-1~deb12u1
graphviz 2.42.2-7+deb12u1 2.42.2-7+b3
gtk2-engines-pixbuf 2.24.33-2+deb12u1 2.24.33-2
gtk-update-icon-cache 3.24.38-2~deb12u2 3.24.38-2~deb12u1
imagemagick-6-common 8:6.9.11.60+dfsg-1.6+deb12u2 8:6.9.11.60+dfsg-1.6+deb12u1
initramfs-tools-core 0.142+deb12u1 0.142
initramfs-tools 0.142+deb12u1 0.142
intel-microcode 3.20240813.1~deb12u1 3.20240514.1~deb12u1
libaom3 3.6.0-1+deb12u1 3.6.0-1
libavcodec59 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libavfilter8 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libavformat59 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libavutil57 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libc6-dev 2.36-9+deb12u8 2.36-9+deb12u7
libc6 2.36-9+deb12u8 2.36-9+deb12u7
libc-bin 2.36-9+deb12u8 2.36-9+deb12u7
libc-dev-bin 2.36-9+deb12u8 2.36-9+deb12u7
libc-devtools 2.36-9+deb12u8 2.36-9+deb12u7
libcdt5 2.42.2-7+deb12u1 2.42.2-7+b3
libcgraph6 2.42.2-7+deb12u1 2.42.2-7+b3
libc-l10n 2.36-9+deb12u8 2.36-9+deb12u7
libcups2 2.4.2-3+deb12u7 2.4.2-3+deb12u5
libcupsimage2 2.4.2-3+deb12u7 2.4.2-3+deb12u5
libcurl3-gnutls 7.88.1-10+deb12u7 7.88.1-10+deb12u6
libcurl4 7.88.1-10+deb12u7 7.88.1-10+deb12u6
libgail18 2.24.33-2+deb12u1 2.24.33-2
libgail-3-0 3.24.38-2~deb12u2 3.24.38-2~deb12u1
libgail-common 2.24.33-2+deb12u1 2.24.33-2
libgs10-common 10.0.0~dfsg-11+deb12u5 10.0.0~dfsg-11+deb12u4
libgs10 10.0.0~dfsg-11+deb12u5 10.0.0~dfsg-11+deb12u4
libgs-common 10.0.0~dfsg-11+deb12u5 10.0.0~dfsg-11+deb12u4
libgtk2.0-0 2.24.33-2+deb12u1 2.24.33-2
libgtk2.0-bin 2.24.33-2+deb12u1 2.24.33-2
libgtk2.0-common 2.24.33-2+deb12u1 2.24.33-2
libgtk-3-0 3.24.38-2~deb12u2 3.24.38-2~deb12u1
libgtk-3-bin 3.24.38-2~deb12u2 3.24.38-2~deb12u1
libgtk-3-common 3.24.38-2~deb12u2 3.24.38-2~deb12u1
libgvc6 2.42.2-7+deb12u1 2.42.2-7+b3
libgvpr2 2.42.2-7+deb12u1 2.42.2-7+b3
libjavascriptcoregtk-4.0-18 2.44.3-1~deb12u1 2.44.2-1~deb12u1
libjavascriptcoregtk-4.1-0 2.44.3-1~deb12u1 2.44.2-1~deb12u1
liblab-gamut1 2.42.2-7+deb12u1 2.42.2-7+b3
liblibreoffice-java 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libmagickcore-6.q16-6-extra 8:6.9.11.60+dfsg-1.6+deb12u2 8:6.9.11.60+dfsg-1.6+deb12u1
libmagickcore-6.q16-6 8:6.9.11.60+dfsg-1.6+deb12u2 8:6.9.11.60+dfsg-1.6+deb12u1
libmagickwand-6.q16-6 8:6.9.11.60+dfsg-1.6+deb12u2 8:6.9.11.60+dfsg-1.6+deb12u1
libnss-myhostname 252.30-1~deb12u2 252.26-1~deb12u2
libnss-systemd 252.30-1~deb12u2 252.26-1~deb12u2
libpam-systemd 252.30-1~deb12u2 252.26-1~deb12u2
libpathplan4 2.42.2-7+deb12u1 2.42.2-7+b3
libpostproc56 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libpq5 15.8-0+deb12u1 15.7-0+deb12u1
libpython3.11-dev 3.11.2-6+deb12u3 3.11.2-6+deb12u2
libpython3.11-minimal 3.11.2-6+deb12u3 3.11.2-6+deb12u2
libpython3.11-stdlib 3.11.2-6+deb12u3 3.11.2-6+deb12u2
libpython3.11 3.11.2-6+deb12u3 3.11.2-6+deb12u2
libreoffice-base-core 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-base-drivers 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-base 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-calc 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-common 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-core 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-draw 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-gnome 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-gtk3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-impress 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-java-common 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-l10n-tr 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-math 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-nlpsolver 4:0.9+LibO7.4.7-1+deb12u4 4:0.9+LibO7.4.7-1+deb12u3
libreoffice-report-builder-bin 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-report-builder 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-script-provider-bsh 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-script-provider-js 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-script-provider-python 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-sdbc-firebird 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-sdbc-hsqldb 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-sdbc-mysql 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-sdbc-postgresql 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-style-colibre 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-style-elementary 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice-wiki-publisher 4:1.2.0+LibO7.4.7-1+deb12u4 4:1.2.0+LibO7.4.7-1+deb12u3
libreoffice-writer 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libreoffice 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libssl3 3.0.14-1~deb12u2 3.0.13-1~deb12u1
libswresample4 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libswscale6 7:5.1.6-0+deb12u1 7:5.1.5-0+deb12u1
libsystemd0 252.30-1~deb12u2 252.26-1~deb12u2
libsystemd-shared 252.30-1~deb12u2 252.26-1~deb12u2
libudev1 252.30-1~deb12u2 252.26-1~deb12u2
libuno-cppu3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libuno-cppuhelpergcc3-3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libunoloader-java 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libuno-purpenvhelpergcc3-3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libuno-sal3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libuno-salhelpergcc3-3 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
libwebkit2gtk-4.0-37 2.44.3-1~deb12u1 2.44.2-1~deb12u1
libwebkit2gtk-4.1-0 2.44.3-1~deb12u1 2.44.2-1~deb12u1
linux-image-amd64 6.1.106-3 6.1.94-1
linux-libc-dev 6.1.106-3 6.1.94-1
locales 2.36-9+deb12u8 2.36-9+deb12u7
openjdk-17-jre-headless 17.0.12+7-2~deb12u1 17.0.11+9-1~deb12u1
openjdk-17-jre 17.0.12+7-2~deb12u1 17.0.11+9-1~deb12u1
openssl 3.0.14-1~deb12u2 3.0.13-1~deb12u1
pardus-software 0.8.1 0.8.0
pardus-update 0.5.0 0.3.0
python3.11-dev 3.11.2-6+deb12u3 3.11.2-6+deb12u2
python3.11-minimal 3.11.2-6+deb12u3 3.11.2-6+deb12u2
python3.11 3.11.2-6+deb12u3 3.11.2-6+deb12u2
python3-numpy 1:1.24.2-1+deb12u1 1:1.24.2-1
python3-uno 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
systemd-sysv 252.30-1~deb12u2 252.26-1~deb12u2
systemd-timesyncd 252.30-1~deb12u2 252.26-1~deb12u2
systemd 252.30-1~deb12u2 252.26-1~deb12u2
udev 252.30-1~deb12u2 252.26-1~deb12u2
uno-libs-private 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
ure-java 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
ure 4:7.4.7-1+deb12u4 4:7.4.7-1+deb12u3
usb.ids 2024.07.04-0+deb12u1 2024.01.20-0+deb12u1
webkit2gtk-driver 2.44.3-1~deb12u1 2.44.2-1~deb12u1
wpasupplicant 2:2.10-12+deb12u2 2:2.10-12+deb12u1

04 September, 2024 11:54AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Meet Canonical at OpenSearchCon 2024 in San Francisco

<noscript> <img alt="" height="720" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_1280,h_720/https://ubuntu.com/wp-content/uploads/7210/OpenSearch-Blog1.png" width="1280" /> </noscript>

OpenSearchCon, the annual conference that brings the OpenSearch community together to learn, connect, and collaborate, is happening in San Francisco on 24-26 September. The three-day event will give users, developers, and technologists across the OpenSearch Project a chance to explore real-world successes and dive deeper into search, analytics, security, and observability – the primary use cases for OpenSearch.

Canonical is a proud member of the OpenSearch community, and our team is delighted to join the conference for the third time in a row. This year, we’ve prepared insightful keynotes and a special announcement to share at the conference.

Don’t miss our talks

Gustavo Sanchez, Field AI Engineer at Canonical, will present “Future-proof AI applications with OpenSearch as vector database”. Vector databases are gaining immense popularity in the LLM space: they efficiently store and query vector data for the large datasets used in machine learning. Join the talk to explore how OpenSearch, as a vector database, can be a strong foundation for GenAI applications. Learn more about the talk.

<noscript> <img alt="" height="720" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_1280,h_720/https://ubuntu.com/wp-content/uploads/9363/Event-activities-Carousel-Slide-2-1.png" width="1280" /> </noscript>

Our second talk, delivered by Mehdi Bendriss and Michelle Tabirao, is all about simplifying OpenSearch operations in hybrid clouds –“Deploy, Manage, Observe: An OpenSearch Operator for easier hybrid cloud operations, at scale”.

Managing OpenSearch at scale across hybrid cloud environments poses significant challenges: it requires manual effort, advanced administrative knowledge, and familiarity with various cloud platforms. In this session, we will introduce an open-source IaaS operator designed to simplify the deployment and management of OpenSearch for high availability, scalability, and observability. Don’t miss the live demo illustrating the operator’s capabilities with a mixed deployment fleet. Learn more about the talk.

<noscript> <img alt="" height="720" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_1280,h_720/https://ubuntu.com/wp-content/uploads/284c/Event-activities-Carousel-Slide-2-2.png" width="1280" /> </noscript>

Canonical’s Charmed OpenSearch


For OpenSearchCon, we have prepared a special announcement about our Charmed OpenSearch solution and how it can secure and automate the deployment and management of your search and analytics suite across private and public clouds. Join us at the event to learn more and stay tuned for updates. 

Book a meeting with our team.

If you don’t want to wait till the event, feel free to contact our team now to discuss your OpenSearch journey. We are just one email away.  

See you soon in San Francisco! 

Further reading

  • Vector databases for Generative AI applications: this webinar covers various concepts, such as generative AI, retrieval augmented generation (RAG), the importance of search engines, and efficient open source tooling that enables developers and enthusiasts to build their generative AI applications.
  • Learn more about Charmed OpenSearch

04 September, 2024 07:48AM

Ubuntu Blog: Meet Canonical at Open Source Summit Europe 2024


Join Canonical, the publisher of Ubuntu, as we attend the upcoming Open Source Summit Europe 2024 in Austria.

Hosted by the Linux Foundation, this summit is the premier event for developers, technologists, and community leaders with a keen interest in the innovation that open source enables. Mark your calendars for September 16-18, 2024, as we gather in Vienna for this exciting event.

Canonical provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI.

Since we started working on Ubuntu, 20 years ago, engaging with open source communities has been a cornerstone of our mission. We’re excited to connect with attendees at Open Source Summit Europe to share insights, foster collaboration and contribute to this vibrant ecosystem.

Visit the Canonical booth at OSS EU 2024

This year, Canonical will be at booth G/S13 in Hall E of the Austria Center Vienna. We’re excited to talk about Ubuntu as the favorite platform for individuals and enterprises.

Swing by our booth to chat about our support for the latest stable software, Ubuntu’s enterprise-grade reliability and performance that shines in production, and our best-in-class commercial support of up to 12 years — up and down the stack.

While Ubuntu is known as the target platform for open source software vendors and community projects, it’s also the starting point for much of our work in the infrastructure layer, from the bare metal up to virtualization.

To learn more, we’ll have Cristóvão Cordeiro at our booth with demos of Chiselled Ubuntu containers.

Canonical booth at Open Source Summit EU 2024

  • Date: 16 – 18 September 2024
  • Venue: Austria Center Vienna, Hall E (Level 0)
  • Location: Booth G/S13

We’ll demonstrate how anyone can easily build a distroless-like container, from the bottom-up, while retaining the usual and familiar experience of a typical package manager.

The result is an Ubuntu-based Docker image with the same functionality as its bloated counterparts, but with a smaller attack surface (and off-the-shelf compliance with existing security standards).

We’ll show you how we can build declarative recipes for slicing Ubuntu packages and building container images, and we’ll use popular runtimes and software applications (like Python and Valkey) as a reference.


Want a dedicated session with our team?

In fields such as data science and machine learning, Ubuntu is the operating system of choice for many of the most popular frameworks.

This includes OpenCV, TensorFlow, Keras, PyTorch and Kubeflow, as well as our recently announced Canonical Data Science Stack, which allows you to set up ML environments right out of the box and is optimized from the OS right down to the hardware layer.

See us at the OpenSearch and Valkey community booths


We’re excited to also join the OpenSearch and Valkey communities at their booths during Open Source Summit EU this year.

OpenSearch simplifies ingesting, searching, visualizing, and analyzing data for use cases such as application search, log analytics, data observability, and more.

Valkey is an open source, high-performance key/value datastore that supports a variety of workloads, such as caching and message queues, and can act as a primary database.

You’ll also find us at the Valkey Developer Day, co-located with OSS EU, where Cristóvão Cordeiro will have a session titled “Ubuntu-based Valkey for Linux and Kubernetes environments”.

We’re thrilled to promote open collaboration and innovation with OpenSearch and Valkey.

Attend our talks on a variety of open source topics

MicroCeph: Simplifying Storage from Laptop to Data Center

Peter Sabaini, Software Engineer

  • Date: Tuesday, September 17, 2024 | 16:55 – 17:35 CEST
  • Location: Room 0.11-0.12 (Level 0)

OpenPrinting – We Make Printing Just Work!

Till Kamppeter, OpenPrinting Project Lead

  • Date: Wednesday, September 18, 2024 | 15:10 – 15:50 CEST
  • Location: Hall B (Level 2)

Ubuntu-based Valkey for Linux and Kubernetes Environments

Cristóvão Cordeiro, Engineering Manager

  • Date: Thursday, September 19, 2024 | 09:00 – 17:00 CEST
  • Location: Hall M (Level 1)

04 September, 2024 06:56AM

hackergotchi for Deepin

Deepin

Two Essential Tools Network Engineers Must Master When Using deepin: Minicom and Cutecom

Minicom (Command-line Tool) Minicom is a command-line tool with no graphical interface. It has a small installation package, consumes minimal system resources, and can be used directly from deepin's Super Terminal window. Minicom is the recommended tool for use. Minicom is mainly used for serial communication. Below is a guide to installing and using Minicom. Installing Minicom The method to install Minicom depends on the Linux distribution you are using. For deepin, you can follow these steps to install Minicom: 1、Open the terminal: First, open the deepin Super Terminal. 3、Get root privileges: To install software, root privileges are usually required. You ...Read more

04 September, 2024 02:56AM by aida

September 03, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu Summit 2024: A logo takes flight

One of the first things we think about when we start planning each Ubuntu Summit is the logo. This might seem like a small thing, but it’s important. We want our logo to reflect the summit’s location, and to provide a sense of its cultural identity in an inclusive and welcoming way.

We enjoy design challenges like these, and the places they take us. But they’re difficult, and take time. How do you represent a location and a culture, with perhaps hundreds of years of history, in three or four colours at a size that will print on a t-shirt or beanie?

Design by geography

For our two previous summits, we used well known monuments from the cities hosting the events; Charles Bridge stretching over the Vltava river in Prague for 2022, and the Freedom Monument in Riga last year. We felt these worked well because they were easily identifiable, looked good at a small scale and represented community building, inclusivity and support.


Logo a go go

This year, however, proved a little more challenging. The Summit is being held October 25-27th in The Hague, a wonderful city on the edge of the North Sea in the Netherlands, not too far from Amsterdam.

The Hague has some magnificent buildings of its own, and we used this as inspiration for our first approach.


Our first design was of the tower at the Peace Palace, which is part of the International Court of Justice. This looked quite good, but was slightly too narrow, and at this scale, we didn’t think the tower was distinctive enough.

We then tried using the Ridderzaal (Hall of Knights), which is part of the Binnenhof, and one of the oldest parliament buildings in the world. This was a closer fit to our previous designs, but there was also a lot of detail to try and simplify while still being distinctive enough.

Finally, we thought we’d try our most obvious idea: a depiction of one of the many windmills you find in The Netherlands. We liked this a lot, and this design was close to being chosen, but ultimately, we thought it was a little predictable and pedestrian. A little too safe. Instead, we wanted to try something bolder, and in all our research about The Hague, we kept seeing images of storks…

Soaring to new heights

Thanks to the surrounding wetland, storks are interwoven with the history and culture of The Hague.

Images of storks eating an eel have even become a symbol of luck and prosperity for the city. There’s one in the city’s coat of arms, for example, and this gave us a unique opportunity to create a logo in a way we’ve not tried before, based on local wildlife.


As soon as we saw this sketch, we all agreed it was both a refreshing change, and something very Ubuntu-like. It represented both the culture of the city and the culture of our community. 

This was the idea we wanted to take further, and it’s the idea we’ve ultimately been working on ever since. And now it’s finished.

After 7 different revisions, and some important tweaks to the eel (ask us for the bloopers, privately), we present our Ubuntu Summit logo design, 2024, for The Hague…


Join us at Ubuntu Summit 2024

To learn more about what makes the Ubuntu Summit so special and how to register to attend, please visit summit.ubuntu.com.

03 September, 2024 11:34PM

hackergotchi for Deepin

Deepin

deepin Community Monthly Report for August 2024

Community Data Overview   Community Product 1、deepin 23 Official Release On August 15, 2024, deepin 23 was officially released! deepin 23 features Linux 6.6 LTS and 6.9 mainline dual kernels, a new DDE, deep integration of AI capabilities, the launch of UOS AI Assistant and other AI applications, and over 200 product optimizations and new features including the “Linyaps” standalone toolkit, self-developed IDE, and atomic updates. After 9 iterations and 51 internal tests, and the development of 8 self-developed development tools, this release introduces numerous innovative features and meets user needs, providing an unprecedented personalized and intelligent operating experience. >>> ...Read more

03 September, 2024 03:03AM by aida

hackergotchi for ARMBIAN

ARMBIAN

Armbian with Preinstalled Home Assistant

Expanding Your Smart Home Horizons

In the ever-evolving landscape of smart home technology, Home Assistant (HA) has emerged as a powerful open-source platform, enabling users to seamlessly connect and automate their smart home devices—from TVs and fans to cameras, thermostats, lights, and sensors. Home Assistant’s unified web-based user interface offers a user-friendly experience, allowing both beginners and tech-savvy users to build intricate automations that bring their smart homes to life.

Traditionally, Home Assistant’s Operating System (HAOS) is optimized for popular mainstream hardware such as the Raspberry Pi and x86 platforms. However, one of the key limitations of HAOS is its restricted environment. As an embedded Linux system, HAOS is designed to run Home Assistant and little else, making it difficult to install additional applications alongside it. While this ensures a streamlined experience for Home Assistant, it also limits the flexibility and functionality that power users might desire.

Home Assistant Armbian

This is where Armbian steps in, breaking down the barriers imposed by the official HAOS. Armbian offers a unique advantage by providing Home Assistant on top of a full-fledged operating system—Armbian Minimal. This not only allows Home Assistant to run on a vast selection of ARM-based devices supported by Armbian but also opens the door to a more versatile and expandable smart home setup.

With Armbian, you can enjoy the best of both worlds: the power and simplicity of Home Assistant combined with the flexibility of a complete operating system. Whether you’re using a Raspberry Pi, a Rockchip, an Allwinner, or any other ARM-based device, Armbian with preinstalled Home Assistant provides a robust solution for your smart home needs—without the downsides of a restricted embedded environment.

For more information and to get started, check out Armbian download pages for your device.

The post Armbian with Preinstalled Home Assistant first appeared on Armbian.

03 September, 2024 12:27AM by Didier Joomun

September 02, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 855

Welcome to the Ubuntu Weekly Newsletter, Issue 855 for the week of August 25 – 31, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 22.04.5 final point-release delayed until September 12
  • Ubuntu 24.04.1 LTS released
  • Ubuntu Stats
  • Hot in Support
  • Ubuntu Meeting Activity Reports
  • Rocks Public Journal; 2024-08-27
  • Convocatória para apresentação de propostas (Call for proposals)
  • UbuCon Asia 2025 – Call for Bids!
  • LoCo Council approved and formalized LoCo Handover process
  • LoCo Events
  • Introducing Kernel 6.11 for the 24.10 Oracular Oriole Release
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


02 September, 2024 11:06PM

Ubuntu Blog: Japanese device manufacturer I-O DATA DEVICE’s business expansion with Ubuntu Pro for Devices

I-O DATA has entered into a partnership with Canonical, aimed at bringing the benefits of open source and Ubuntu to thousands of devices across Japan. I-O DATA will now ship its hardware with Ubuntu pre-installed, and is now an official reseller of Ubuntu Pro for Devices, a subscription that provides 10 years of security maintenance for the OS and applications, as well as device management capabilities.

Open source is where innovation happens

Open source is very much the standard for modern software development, but new regulations like the Cyber Resilience Act are making stringent security practices paramount. With businesses making more use of AI, which requires both high computational power and commitment to regulatory demands, it’s becoming increasingly unsustainable to rely on unsupported open source.

Ubuntu’s widespread community adoption and solid enterprise use cases make it an ideal common denominator system for organisations looking to unify and streamline their operations. Ubuntu Pro for Devices takes this even further, by offering subscription-based support for up to 10 years, for over 30,000 open source packages – including common frameworks like ROS and developer toolchains like Go and Python. This bold step by I-O DATA is a clear sign of commitment to offering their customers the tools they need to thrive in a competitive digital environment, by making Ubuntu devices more widely available.

“By using Ubuntu, IT professionals in Japanese organisations can enhance their foundational skills. This will eventually lead to the development of advanced OS, cloud, security and other essential IT technologies and services that will bolster Japan’s industries across various sectors and restore its global competitiveness. I believe this will herald a new golden age for Japan.” – Daiyuu Nobori (Researcher at Industrial Cyber Security Center, Information-technology Promotion Agency from 2017, Special Employee of NTT-East from 2020)

Widening horizons with a robust OS

By making use of Ubuntu Pro for Devices, I-O DATA is offering its customers a robust OS with a track record of high-performance in novel situations. Ubuntu is already used across industries as diverse as automotive, education, space engineering and medical. I-O DATA intends to take Ubuntu to companies across these spaces, with an initial target of 800 million yen for the first year.

“We expect to be able to provide effective AI image analysis services in a more stable environment through the use of Ubuntu appliances as network edge devices in addition to the network cameras we currently use.” – Masaya Kawase, Director, Customer Contact Point Digitalization Support Division, Business Management Division, OPTiM Corp.

An initiative welcomed by I-O Data’s partners

“We have grown to serve over 40,000 units in the domestic digital signage market. We owe this success to our long-standing cooperative relationship with I-O DATA Device. We are currently developing an innovative set-top box for digital signage that is powered by Ubuntu. We hope that this commitment will contribute to the further growth of your business.” – Yasuo Fukunaga, President and Representative Director, Cyberstation, Inc.

I-O DATA intends to engage multiple partners in order to increase the number of hardware options available to them, such as NPU installed models that run AI on the network edge. In addition to the partners already mentioned, I-O DATA has engaged ORCA Management Organization (responsible for all patient data in Japan) and Software Research Associates (SRA), a technology consultancy organisation that has been heavily involved in the provision of Japanese language support services for Ubuntu.

I-O DATA hopes to deepen their partnership with Canonical as this initiative takes off, as they appeal to organisations that are already using Ubuntu, as well as new prospects who are looking to move away from the difficulties of closed source code.

About I-O DATA

I-O DATA DEVICE, INC., is a top-tier manufacturer and provider of high-quality computer peripherals and interface products to the global consumer and OEM markets. Founded in 1976 by Mr. Akio Hosono, I-O DATA has received recognition as Japan’s undisputed market leader within the PC peripheral industry. I-O DATA is not only a manufacturer of such devices, but also handles the design, development and production of the products.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

Learn more at https://canonical.com/

02 September, 2024 02:49PM

Ubuntu Blog: Canonical at IAA Transportation 2024

Showcasing automotive software innovation in hall 12, booth E36

Book a demo with our team

As the automotive industry continues to accelerate towards a more connected, autonomous, and electric future, Canonical is thrilled to participate in the IAA Transportation 2024 in Hanover. This important event is a great opportunity for us to demonstrate our latest achievements in automotive software, focusing on an open source EV charging station demo, and a cutting-edge Over-the-Air (OTA) solution powered by the Snap Store.

Powering the future with Open Source EV charging

As the global automotive market transitions to electric vehicles (EVs), the demand for efficient and scalable EV charging infrastructure has never been greater. Canonical is proud to present our collaboration with DFI and PIONIX to build an open source charging station software stack, which will be featured at IAA 2024. This demonstration will showcase EVerest, an open source software framework that enables seamless communication between EV chargers, vehicles, and the grid, ensuring optimal performance and compliance with international standards like ISO 15118 and OCPP.

By packaging EVerest as a snap, Canonical ensures that this solution can be quickly and securely deployed at scale. This approach not only simplifies the deployment process but also guarantees that the charging stations are always up-to-date with the latest security patches and features, providing a robust and future-proof solution for EV charging stations.

Extending Over-the-Air updates

Canonical’s commitment to contributing to the automotive industry’s software shift extends to vehicle maintenance and software management through our OTA update solution powered by the Snap Store. At IAA 2024, we will demonstrate how a dedicated Snap Store enables seamless updates of a vehicle’s electronic control units (ECUs), ensuring that all software components remain secure and up to date. This approach not only enhances vehicle performance but also reduces the need for physical maintenance, offering a more efficient and cost-effective solution for both automotive manufacturers and end-users.

Join Us at IAA Transportation 2024

We invite you to visit our booth E36 in Hall 12 at IAA Transportation 2024 to explore all of our latest innovations in automotive software. Whether you’re interested in the future of automotive software development, the latest in EV charging technology, or cutting-edge OTA solutions, our team of experts will be there to provide insights, demonstrations, and discuss current trends in automotive software.

Book a demo with our team.

Don’t miss this opportunity to see how Canonical’s open source solutions are shaping the future of mobility. We look forward to meeting you in Hanover and sharing our vision of a connected, sustainable, and software-first automotive future.

To learn more about Canonical and our engagement in automotive: 

Contact Us

Check out our webpage

02 September, 2024 02:20PM

Jonathan Carter: Debian Day South Africa 2024

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, and that there would be plenty of time for everything including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV – nice for presentations, demos, etc. But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially as a band started practicing and doing sound tests for their performance for that evening) and it turned out our space was also double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them, so we had to keep it short. Next time we’ll probably have some party line open that any tester can also join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year, I guess like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time, and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There’s still a few minor annoyances, but over all, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces those sooner rather than later. KDE Plasma 6 also hasn’t yet made its way into unstable. I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submissions for trixie ends in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us at -localgroups on the OFTC IRC network, also plumbed on Matrix at -localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

02 September, 2024 01:01PM

hackergotchi for Deepin

Deepin

deepin Community Attends DebConf24, Showcasing China's Open Source Strength

The 2024 Debian Conference (DebConf24) successfully concluded at the Daeyeon Campus of Pukyong National University in early August, attracting over 339 attendees from 48 countries and regions worldwide. During the conference, developers from around the globe participated in 108 events, including over 50 lectures and discussions, 37 Birds of a Feather (BoF) sessions, 12 workshops, and a day trip. The topics covered a wide range of areas, including free software and Debian introductions, package policies, system administration, automation, cloud and containers, system security, community diversity, internationalization, localization, embedded systems, and the kernel. The deepin Community has been attending this grand ...Read more

02 September, 2024 08:41AM by aida

(中文) How to Run Most Windows Games on deepin 23?

Sorry, this entry is only available in Chinese.

02 September, 2024 08:05AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Join Canonical in Vienna for Valkey Developer Day 2024


Valkey Developer Day is coming to Vienna on September 19 for the first time. This event offers a unique opportunity to network with fellow developers and learn about Valkey’s current innovations and future direction. Canonical is committed to promoting and advancing the Valkey project, and our team is excited to join the event.

What is Valkey?


Valkey is an open source (BSD-licensed), high-performance key/value datastore that supports a variety of workloads, such as caching and message queues, and can act as a primary database. Valkey can run as either a standalone daemon or in a cluster, with options for replication and high availability. Valkey is the open source alternative to the Redis in-memory, NoSQL data store.
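Because Valkey keeps wire compatibility with the Redis serialization protocol (RESP), existing Redis client libraries can talk to it unchanged. As a minimal illustration of what such a client puts on the wire, here is a sketch of command encoding (the helper name is ours, not part of any client library):

```python
def encode_resp_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings, the framing
    that Valkey- and Redis-compatible clients send over TCP."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        parts.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(parts)

# A SET command becomes an array of three bulk strings:
# *3, then $3 SET, $5 mykey, $5 hello (each element CRLF-terminated)
payload = encode_resp_command("SET", "mykey", "hello")
```

Sending `payload` over a TCP connection to the server's port (6379 by default, inherited from Redis) is essentially all a basic client does.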

Canonical’s Talk at Valkey Developer Day

Don’t miss our session, “Ubuntu-based Valkey for Linux and Kubernetes environments” by Cristóvão Cordeiro, Engineering Manager at Canonical. The presentation will spotlight several methods for using Valkey, such as the deb, snap, and Docker image packages. Cristóvão will break down the relative strengths of the different Valkey packaging formats and their respective use cases. The talk will also outline the Ubuntu ecosystem’s future plans to further support Valkey’s releases and upcoming developments.


Valkey on Ubuntu

Canonical is an enthusiastic contributor to the development of Valkey. Following our mission to bring free software to the widest audience and enable a wide diversity of open source communities to collaborate under the Ubuntu umbrella, Valkey will be released in the upcoming Ubuntu 24.10 (Oracular Oriole).

If you’re hungry to learn more about Valkey, join us on October 1st for Canonical’s Data and AI Masters event. Canonical and the Valkey Community will be hosting a session on “Powering Your Modern Web Applications with Open Source Valkey.” Data and AI Masters is an exclusive online event that provides an in-depth look at the latest innovations in data and AI. You can register for the event here.

Feel free to contact our team now to discuss your Valkey journey. We are just one email away.  

See you soon in Vienna! 

02 September, 2024 07:44AM

September 01, 2024

Colin Watson: Free software activity in August 2024

All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)

You can also support my work directly via Liberapay.

man-db and friends

I released libpipeline 1.5.8 and man-db 2.13.0.

Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.
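The mechanism generalises beyond man-db. The general shape of such a declaration (a sketch of autopkgtest's debian/tests/control format, not man-db's actual file) looks like this:

```
# debian/tests/control: run the upstream test suite against the
# installed binary packages rather than the build tree
Test-Command: make check
Depends: @, build-essential
Restrictions: allow-stderr
```

Here `@` expands to the binary packages built from the source package, so the upstream suite exercises exactly what users install.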

OpenSSH

I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.

Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.

Multiple identical results from getaddrinfo

I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:

$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1       localhost
127.0.1.1       [...]
127.0.0.1       localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
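The failure mode is easy to reproduce in a few lines of Python. This sketch (the helper names are mine, not from the affected test suites) shows the bind-per-result pattern that breaks when the resolver returns duplicates:

```python
import socket

def bind_listeners(host="localhost", port=0):
    """Bind one listener per getaddrinfo() result, as some test suites do.

    With a duplicated localhost line in /etc/hosts, getaddrinfo() can return
    the same (family, address) twice; with a fixed port, the second bind()
    then fails with EADDRINUSE.
    """
    sockets = []
    for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        sock = socket.socket(family, socktype, proto)
        sock.bind(sockaddr)
        sockets.append(sock)
    return sockets

def has_duplicates(results):
    """True if an address list (e.g. sockaddrs from getaddrinfo) repeats."""
    return len(results) != len(set(results))
```

With the broken /etc/hosts above, the IPv4 loopback appears twice in the results, so the duplicate check fires where a correct file would yield one entry per address family.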

I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).

OpenSSL vs. cryptography

I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.

Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.

After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.

GCC 14 regressions

Continuing from last month, I fixed build failures in pccts (NMU) and trn4.

Python team

I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, zope.testrunner. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.

A few of these required their own bits of yak-shaving:

I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).

I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.

Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.

I made python-nacl build reproducibly (upstream PR).

I moved aliased files from / to /usr in timekpr-next (#1073722).

Installer team

I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).

01 September, 2024 01:29PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2024/08

The 8th monthly Sparky project and donation report of 2024: – Linux kernel updated up to 6.10.7, 6.6.48-LTS, 6.1.107-LTS & 5.15.165-LTS – Sparky semi-rolling 2024.08 & 2024.08 Special Editions released – added to repos: PeaZip. Many thanks to all of you for supporting our open-source projects. Your donations help keep them and us alive. Don’t forget to send a small tip in September too…

Source

01 September, 2024 12:06PM by pavroo

hackergotchi for ARMBIAN

ARMBIAN

Armbian 24.8 Yelt

As we continue to evolve, Armbian is proud to introduce our latest release, packed with enhancements, new hardware support, and important upgrades that will further solidify the stability and performance of your systems.

Key Highlights

  • RK3588 Boot Loader Upgrades: Enhanced stability for RK3588 hardware with the latest bootloader upgrades. This ensures a more reliable experience across supported devices.
  • 4K60p Video Acceleration: Experience smoother visuals with 4K60p video acceleration, now available on Gnome and KDE desktop builds.
  • Kernel Bump to 6.10.y: All kernels have been updated to 6.10.y, bringing improved performance, security patches, and broader hardware support.
  • BigTreeTech CB1 Platinum Support: Armbian now fully supports BigTreeTech CB1, offering a robust platform for your 3D printing projects.
  • Expanded Desktop Options: We’re thrilled to bring you Gnome, XFCE, Cinnamon, and KDE Neon desktop environments. Choose the desktop that best suits your needs.
  • ZFS 2.2.5: The latest ZFS version (2.2.5) is now supported, optimized for kernel 6.10.
  • Long-Term Support (LTS): We’re committed to keeping older devices like the Odroid C1, NanoPi NEO, BPi M1, ClearFog, Helios64 and TinkerBoard in great shape with ongoing updates and support.
  • ThinkPad X13s Enhancements: Several upgrades have been rolled out for the ThinkPad X13s, enhancing its compatibility and performance with Armbian.
  • 3D Support on Debian-Based Systems: 3D acceleration is now supported on Debian-based Armbian builds, improving the overall user experience.
  • New Board Support: We’ve expanded our hardware support with new boards, including Libre Alta and Solitude, Radxa E25, Rock 5C, RISCV64 BananaPi F3, and more.
  • Deprecation and Cleanup: Significant code cleanup and the demotion of deprecated support, ensuring a leaner and more efficient codebase. We are moving towards mainline-only support for many devices.
  • Ubuntu Noble: Ubuntu Noble is entering its final testing phase as a supported build-host target, bringing us closer to a full release.

Detailed change logs

Platinum Support and Community Contributions

Our focus remains on boards with platinum support, where vendors assist us in mitigating costs, ensuring top-tier support and contributing to open-source efforts. If you’re looking for the best-supported boards, we highly recommend selecting from this category.

Armbian remains a community-driven project. We cannot maintain this large and complex ecosystem without your support. Whether it’s rewriting manuals, BASH scripting, or reviewing contributions, there’s a place for everyone. Notably, your valuable contributions could even earn you a chance to win a powerful Intel-based mini PC from Khadas.

Production Use Recommendations

For production environments, we recommend:

  • Opting for hardware labeled with platinum or standard support.
  • Utilizing stabilized point releases around Armbian Linux 6.10.y.
  • Becoming an Armbian support partner to gain access to professional services.

Recognizing Our Contributors

We extend our deepest gratitude to the remarkable contributors who have played a pivotal role in this release. Special thanks to: @ColorfulRhino, @igorpecovnik, @rpardini, @alexl83, @amazingfate, @The-going, @efectn, @adeepn, @paolosabatino, @SteeManMI, @JohnTheCoolingFan, @EvilOlaf, @chainsx, @viraniac, @monkaBlyat, @alex3d, @belegdol, @kernelzru, @tq-schmiedel, @ginkage, @Tonymac32, @schwar3kat, @pyavitz, @Kreyren, @hqnicolas, @prahal, @h-s-c, @RadxaYuntian, and many others.

Our dedicated support staff: Igor, Didier, Lanefu, Adam, Werner, Metka, Aaron, and more, deserve special recognition for their continuous efforts and support.

Join the Armbian Community

Armbian thrives on community involvement. Your contributions are crucial to sustaining this vibrant ecosystem. Whether you’re an experienced developer or just getting started, there’s always a way to contribute.

Thank you for your continued support.

The Armbian Team

The post Armbian 24.8 Yelt first appeared on Armbian.

01 September, 2024 09:23AM by Didier Joomun

hackergotchi for Ubuntu developers

Ubuntu developers

Dougie Richardson: Plesk high swap usage

I’ve seen warnings about high swap consumption in Plesk on Ubuntu 20.04.6 LTS:

Had a look in top and noticed clamavd using 1.0G of swap. After a little digging around, it might be related to a change in ClamAV 0.103.0 where non-blocking signature database reloads were introduced.

Major changes

  • clamd can now reload the signature database without blocking scanning. This multi-threaded database reload improvement was made possible thanks to a community effort.
    • Non-blocking database reloads are now the default behavior. Some systems that are more constrained on RAM may need to disable non-blocking reloads, as it will temporarily consume double the amount of memory. We added a new clamd config option ConcurrentDatabaseReload, which may be set to no.

I disabled the option and the difference is dramatic:

I’ll keep an eye on it I guess.

01 September, 2024 09:11AM

August 31, 2024

Lubuntu Blog: Lubuntu 24.04.1 LTS is Released!

Thanks to all the hard work from our contributors, Lubuntu 24.04.1 LTS has been released. With the codename Noble Numbat, Lubuntu 24.04 is the 26th release of Lubuntu, the 12th release of Lubuntu with LXQt as the default desktop environment. Support lifespan: Lubuntu 24.04 LTS will be supported for 3 years until April 2027. Our […]

31 August, 2024 12:17PM

August 30, 2024

Ubuntu Blog: Integrating the Ubuntu Snapshot Service into systems management and update tools

In an earlier blog we announced our new snapshot service and how Microsoft Azure was using this for updates. We recently published information to help people use the service beyond Microsoft Azure.

Today I would like to show a simplified example of how to integrate the snapshot service into systems management tooling. This can let users control the rollout of updates across an Ubuntu fleet and improve the update experience for users. The aim is to inspire those building systems management tools, particularly multi-OS tools, to investigate the Ubuntu snapshot service.

If you are an end user or enterprise wanting to use snapshots, please take a look at Landscape. This is Canonical’s recommended systems management tool and it has recently added support for the snapshot service.

A video version of this blog post is available here:

Video overview of integrating the Ubuntu snapshot service into systems management tools

Usage of the snapshot service

As a simple example, on an out-of-the-box Ubuntu 24.04 (Noble) system, you can pass a snapshot argument to different apt commands and apt will act as if you had run those commands at that date and time (any time after 1 March 2023):

apt update --snapshot 20240423T230000Z
apt policy hello -S 20240423T230000Z
apt install hello --snapshot 20240423T230000Z

These commands should also work for Ubuntu on our public cloud partners (AWS, Azure, GCP, Oracle or IBM Cloud).

You do need to ensure that the index is up to date before running those other apt commands (note the above apt update with the --snapshot argument before the other commands). There is a new apt command that makes this a little easier, letting you update your indexes and then install in one command:

apt remove hello
apt install hello --update --snapshot 20240423T230000Z

For more detailed guidance of how to use the service, please see the documentation. Snapshots also work on earlier Ubuntu releases with additional configuration. There are a number of useful applications of this, including reproducibility, debugging customer issues and similar.

Integrating the snapshot service into update management

We wrote an earlier blog series on how to balance security and stability in Ubuntu updates. In Part 3 of that series, we talked about different ways to create a point-in-time “snapshot” of updates. This can be a useful technique when updating multiple instances. Ubuntu takes a number of steps to reduce the risk of regressions in security updates. There is always a chance, however, that an update will have a negative consequence in your specific environment.

Using snapshots allows a consistent set of updates that you can test and move progressively through your production environments. This could limit the “blast radius” of any negative effects of an update, perhaps to one instance in your highly-available set instead of all three. Any progressive rollout of security updates does mean that machines at the end of the rollout are unpatched until it has completed (as covered further below).

Demonstrating a snapshot rollout with example instances

We can demonstrate how a systems management or update tool could integrate the snapshot service by manually running commands. We can do this across four instances to simulate different risk levels, availability or update domains, regions or similar. Each of these demo instances represents an arbitrarily large set of instances in that update domain or risk level.

I will use freshly-launched 24.04 instances. Snapshots work on earlier Ubuntu releases, but require additional steps (detailed in the documentation). We regularly publish new versions of our images on the public clouds that incorporate security fixes. That means that if I used an up-to-date image it would not have many security updates. We will therefore specify an old build of the image, so that we have more updates available.

We can see when the image was created in the build.info:

$ cat /etc/cloud/build.info
build_name: server
serial: 20240423

The “serial” shows us that this image was created on 23 April 2024.

Preventing uncontrolled updates

By default, we set Ubuntu instances to update themselves once a day with security updates using unattended-upgrades. We can prevent these systems automatically updating themselves by setting a snapshot that matches the image build date. In this case, we can set all of the systems to use the snapshot of the archive as it was on 23 April 2024 at 23:00 UTC. This means none of the systems should see any updates released after that date and time.

echo 'APT::Snapshot "20240423T230000Z";' | sudo tee /etc/apt/apt.conf.d/50snapshot

Now, any normal apt commands should not install anything new, because the instances see the archive as it was on 23 April 2024.

$ sudo apt update
[...]
All packages are up to date.
$ sudo apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

unattended-upgrades also will not install anything because, again, it sees the archive as it was on 23 April:

$ sudo unattended-upgrade -d
[...]
pkgs that look like they should be upgraded:
Fetched 0 B in 0s (0 B/s)
                          	 
fetch.run() result: 0
Packages blacklist due to conffile prompts: []
No packages found that can be upgraded unattended and no pending auto-removals

Creating an update set

Now we want to start rolling updates out across the estate. We start by choosing a snapshot ID, which will likely be the date and time that we start the process. We will use 20240617T120000Z, or 12:00 UTC on 17 June 2024. Our first “update set” is then the updates between the image build date and this 17 June 2024 snapshot.
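A management tool automating this could derive the snapshot ID from the moment the rollout starts. A minimal Python sketch (the helper name is mine, not part of Landscape or apt):

```python
from datetime import datetime, timezone

def snapshot_id(when=None):
    """Format a UTC timestamp as an Ubuntu snapshot ID, e.g. 20240617T120000Z."""
    when = when or datetime.now(timezone.utc)
    return when.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
```

The same ID string is then written into /etc/apt/apt.conf.d/50snapshot on every instance in the current tier, exactly as the tee commands in this post do by hand.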

We are going to start with this first instance, which could be our development or staging environment. First we run the command that we ran before, but with the 17 June snapshot ID:

echo 'APT::Snapshot "20240617T120000Z";' | sudo tee /etc/apt/apt.conf.d/50snapshot

Now we can update our package indexes and the instance does see available updates:

$ sudo apt update
[...]
80 packages can be upgraded. Run 'apt list --upgradable' to see them.

After setting the snapshot, we can use all the normal apt commands to upgrade our packages to that snapshot, or let unattended-upgrades run and it would upgrade just the security updates (or whatever else we have set up for unattended-upgrades to do).

The other instances will not install any of these updates, because we set them to use the old snapshot. That means that we can test the upgraded instance(s) and check that everything seems to be working correctly. If there are issues, we can pause the rollout and address them before rolling the update out more widely.

Rolling out our update set

Assuming that everything appears fine in our dev/test environment, we can roll the updates to the next tier. We use the process above to set the next group to the same snapshot that we just validated. Perhaps this is internal servers or just instances in one availability zone of highly-available configurations.

Maybe we then let the updates run with production workloads for a day. Then we can move to the next group and repeat the process, again setting the snapshot to the same ID. And so on, through each of the rings or risk levels in our deployment plan.

By this point of the rollout it may be a week later, but we will still use the snapshot from midday 17 June. This is because we validated that update set in testing and in the earlier deployment rings. If we find any issues when rolling out the updates, we can pause the rollout. This gives us time to address the issues (always bearing in mind the security implications of delaying updates).

These rollouts could overlap, too. If the Ubuntu security team releases a new update, we can kick off a new update. We can create a new update set based on a snapshot ID that includes that update. Then we can validate that in testing and early tiers while the previous update set rolls out.

Balancing security and stability

As we mentioned in some of our previous content, you should keep the update rollout as short as possible. Your estate will have unprotected instances for however long you are rolling out security updates. By default, we set Ubuntu server instances to install security updates each day. You need to balance any extension of this against the increased security risk. One approach is to check the severity of any unpatched vulnerabilities, for example in our security feeds. Then users could accelerate rollouts to address high-severity vulnerabilities. If you are creating an interface for end users, exposing this information lets them make more informed choices.

Different organisations need to balance stability and security in different ways. Snapshots are another way that we give Ubuntu users more options in how they strike that balance. By deploying update sets progressively across your risk levels, you can balance security against any potential disruption.

Talk to us!

Please let us know how you are using or integrating the snapshot service in our Public Cloud discourse. We would love to hear from you!

30 August, 2024 12:06PM

Alan Pope: Virtual Zane Lowe for Spotify

tl;dr

I bodged together a Python script using Spotipy (not a typo) to feed me #NewMusicDaily in a Spotify playlist.

No AI/ML, all automated, “fresh” tunes every day. Tunes that I enjoy get preserved in a Keepers playlist; those I don’t like to get relegated to the Sleepers playlist.

Any tracks older than eleven days are deleted from the main playlist, so I automatically get a constant flow of new stuff.

My personal Zane Lowe in a box

Nutshell

  1. The script automatically populates this Virtual Zane Lowe playlist with semi-randomly selected songs that were released within the last week or so, no older (or newer).
  2. I listen (exclusively?) to that list for a month, signaling songs I like by hitting a button on Spotify.
  3. Every day, the script checks for ‘expired’ songs whose release date has passed by more than 11 days.
  4. The script moves songs I don’t like to the Sleepers playlist for archival (and later analysis), and to stop me hearing them.
  5. It moves songs I do like to the Keepers playlist, so I don’t lose them (and later analysis).
  6. Goto 1.

I can run the script at any time to “top up” the playlist or just let it run regularly to drip-feed me new music, a few tracks at a time.

Clearly, once I have stashed some favourites away in the Keepers pile, I can further investigate those artists, listen to their other tracks, and potentially discover more new music.

Below I explain at some length how and why.

NoCastAuGast

I spent an entire month without listening to a single podcast episode in August. I even unsubscribed from everything and deleted all the cached episodes.

Aside: Fun fact: The Apple Podcasts app really doesn’t like being empty and just keeps offering podcasts it knows I once listened to despite unsubscribing. Maybe I’ll get back into listening to these shows again, but music is on my mind for now.

While this is far from a staggering feat of human endeavour in the face of adversity, it was a challenge for me, given that I listened to podcasts all the time. This has been detailed in various issues of my personal email newsletter, which goes out on Fridays and is archived to read online or via RSS.

In August, instead, I re-listened to some audio books I previously enjoyed and re-listened to a lot of music already present on my existing Spotify playlists. This became a problem because I got bored with the playlists. Spotify has an algorithm that can feed me their idea of what I might want, but I decided to eschew their bot and make my own.

Note: I pay for Spotify Premium, then leveraged their API and built my “application” against that platform. I appreciate some people have Strong Opinions™️ about Spotify. I have no plans to stop using Spotify anytime soon. Feel free to use whatever music service you prefer, or self-host your 64-bit, 192 kHz Hi-Res Audio from HDTracks through an Elipson P1 Pre-Amp & DAC and Cary Audio Valve MonoBlok Power Amp in your listening room. I don’t care.

I’ll be here, listening on my Apple AirPods, or blowing the cones out of my car stereo. Anyway…

I spent the month listening to great (IMHO) music, predominantly released in the (distant) past on playlists I chronically mis-manage. On the other hand, my son is an expert playlist curator, a skill he didn’t inherit from me. I suspect he “gets the aux” while driving with friends, partly due to his Spotify playlist mastery.

As I’m not a playlist charmer, I inevitably got bored of the same old music during August, so I decided it was time for a change. During the month of September, my goal is to listen to as much new (to me) music as I can and eschew the crusty playlists of 1990s Brit-pop and late-70s disco.

How does one discover new music though?

Novel solutions

I wrote a Python script.

Hear me out. Back in the day, there was an excellent desktop music player for Linux called Banshee. One of the great features Banshee users loved was “Smart Playlists.” This gave users a lot of control over how a playlist was generated. There was no AI, no cloud, just simple signals from the way you listen to music that could feed into the playlist.

Watch a youthful Jorge Castro from 13 years ago do a quick demo.

Jorge Demonstrating the awesome power of Smart Playlists in Banshee (RIP in Peace)

Aside: Banshee was great, as were many other Mono applications like Tomboy and F-Spot. It’s a shame a bunch of blinkered, paranoid, noisy, and wrong Linux weirdos chased the developers away, effectively killing off those excellent applications. Good job, Linux community.

Hey ho. Moving on. Where was I…

Spotify clearly has some built-in, cloud-based “smarts” to create playlists, recommendations, and queues of songs that its engineers and algorithm think I might like. There’s a fly in the ointment, though, and her name is Alexa.

No, Alexa, NO!

We have a “Smart” speaker in the kitchen, and the primary music consumers there are not me. So “my” listening history is now somewhat tainted by all the Chase Atlantic & Central Cee my son listens to, and the Michael (fucking) Bublé my wife enjoys. She enjoys it so much that Bublé has featured on my end-of-year “Spotify Unwrapped” multiple times.

I’m sure he’s a delightful chap, but his stuff differs from my taste.

I had some ideas to work around all this nonsense. My goals here are two-fold.

  1. I want to find and enjoy some new music in my life, untainted by other house members.
  2. Feed the Spotify algorithm with new (to me) artists, genres and songs, so it can learn what else I may enjoy listening to.

Obviously, I also need to do something to muzzle the Amazon glossy screen of shopping recommendations and stupid questions.

The bonus side-quest is learning a bit more Python, which I completed. I spent a few hours one evening on this project. It was a fun and educational bit of hacking during time I might otherwise use for podcast listening. The result is four hundred or so lines of Python, including comments. My code, like my blog, tends to be a little verbose because I’m not an expert Python developer.

I’m pretty positive primarily professional programmers potentially produce petite Python.

Not me!

Noodling

My script uses the Spotify API via Spotipy to manage an initially empty, new, “dynamic” playlist. In a nutshell, here’s what the python script does with the empty playlist over time:

  • Use the Spotify search API to find tracks and albums released within the last eleven days to add to the playlist. I also imposed some simple criteria and filters.
    • Tracks must be accessible to me on a paid Spotify account in Great Britain.
    • The maximum number of tracks on the playlist is currently ninety-four, so there’s some variety, but not so much as to be unwieldy. Enough for me to skip some tracks I don’t like, but still have new things to listen to.
    • The maximum tracks per artist or album permitted on the playlist is three, again, for variety. Initially this was one, but I felt it’s hard to fully judge the appeal of an artist or album based off one song (not you: Black Lace), but I don’t want entire albums on the list. Three is a good middle-ground.
    • The maximum number of tracks to add per run is configurable and was initially set at twenty, but I’ll likely reduce that and run the script more frequently for drip-fed freshness.
  • If I use the “favourite” or “like” button on any track in the list before it gets reaped by the script after eleven days, the song gets added to a more permanent keepers playlist. This is so I can quickly build a collection of newer (to me) songs discovered via my script and curated by me with a single button-press.
  • Delete all tracks released more than eleven days ago if I haven’t favourited them. I chose eleven days to keep it modern (in theory) and fresh (foreshadowing). Technically, the script does this step first to make room for additional new songs.
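The pruning rules above are plain list-filtering. Here is a minimal sketch of the per-artist cap (the track dicts mirror the shape of Spotify track objects; the real script’s Spotipy calls are stubbed out as a comment):

```python
from collections import Counter
# import spotipy  # the real script talks to the API via spotipy.Spotify(...)

MAX_PER_ARTIST = 3      # three tracks per artist/album, per the rules above
MAX_PLAYLIST_SIZE = 94  # current playlist ceiling

def cap_per_artist(tracks, limit=MAX_PER_ARTIST):
    """Keep at most `limit` tracks per lead artist, preserving order.

    Each track is a dict shaped like a Spotify track object:
    {"id": "...", "artists": [{"name": "..."}]}.
    """
    seen = Counter()
    kept = []
    for track in tracks:
        artist = track["artists"][0]["name"]
        if seen[artist] < limit and len(kept) < MAX_PLAYLIST_SIZE:
            seen[artist] += 1
            kept.append(track)
    return kept
```

The real script would run this over the search results before calling the playlist-add endpoint, so a single prolific artist can never flood the list.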

None of this is set in stone, but it is configurable with variables at the start of the script. I’ll likely be fiddling with these through September until I get it “right,” whatever that means for me. Here’s a handy cut-out-and-keep block diagram in case that helps, but I suspect it won’t.

 +-------------------------------------+
 |            Spotify (Cloud)          |
 |                                     |
 |         +-----------------+         |
 |         |  Main Playlist  |         |
 |         +-----------------+         |
 |        Like |         | Dislike     |
 |             v         v             |
 |  +-----------------+  +----------+  |
 |  | Keeper Playlist |  | Sleeper  |  |
 |  +-----------------+  | Playlist |  |
 |                       +----------+  |
 +------------------+------------------+
                    ^
                    |
                    v
        +---------------------------+
        |       Python Script       |
        |  +---------------------+  |
        |  |  Calls Spotify API  |  |
        |  |  and manages songs  |  |
        |  +---------------------+  |
        +---------------------------+

Next track

The expectation is to run this script automatically every day, multiple times a day, or as often as I like, and end up with a frequently changing list of songs to listen to in one handy playlist. If I don’t like a song, I’ll skip it, and when I do like a song, I’ll likely play it more than once, and maybe click the “Like” icon.

My theory is that the list becomes a mix of between thirty and ninety artists who have released albums over the previous rolling week. After the first test search on Tuesday, the playlist contained 22 tracks, which isn’t enough. I scaled the maximum up over the next few days. It’s now at ninety-four. If I exhaust all the music and get bored of repeats, I can always up the limit to get a few new songs.

In fact, on the very first run of the script, the test playlist completely filled with songs from one artist who had just released a new album. That triggered the implementation of the three-songs-per-artist/album rule to reduce the chance of that happening again.

I appreciate that listening to tracks out of sequence, rather than as a full album, differs from what the artist intended. But thankfully, I don’t listen to a lot of Adele, and the script no longer adds whole albums full of songs to the list. So, no longer a “me” problem.

No AI

I said at the top I’m not using any “AI/ML” in my script, and while that’s true, I don’t control what goes on inside the Spotify datacentre. The script is entirely subject to the whims of the Spotify API as to which tracks get returned to my requests. There are some constraints to the search API query complexity, and limits on what the API returns.

The Spotify API documentation has been excellent so far, as has the Spotipy docs.

Popular songs and artists often organically feature prominently in the API responses. Plus (I presume) artists and labels have financial incentives or an active marketing campaign with Spotify, further skewing search results. Amusingly, the API has an optional “hipster” tag to show the bottom 10% of results (ranked by popularity). I did that once, didn’t much like it, and won’t do it again.

It’s also subject to the music industry publishing music regularly, and licensing it to be streamed via Spotify where I live.

Not quite

With the script as-is, initially, I did not get fresh new tunes every single day as expected, so I had a further fettle to increase my exposure to new songs beyond what’s popular, trending, or tagged “new”. I changed the script to scan the last year of my listening habits to find genres of music I (and the rest of the family) have listened to a lot.

I trimmed this list down (to remove the genre taint) and then fed these genres to the script. It then randomly picks a selection of those genres and queries the API for new releases in those categories.
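The genre step boils down to sampling from a cleaned-up list. A sketch (the genre names and taint list are made up for illustration, not from the actual script):

```python
import random

# Hypothetical genres to exclude: listening-history noise from other
# household members, not my own taste.
TAINTED = {"easy listening", "uk hip hop"}

def pick_genres(history_genres, count=3, rng=random):
    """Randomly choose up to `count` genres from listening history,
    skipping the tainted ones, for the next new-release query."""
    candidates = [g for g in history_genres if g not in TAINTED]
    return rng.sample(candidates, min(count, len(candidates)))
```

Each chosen genre then becomes one search query against the API, and the combined results feed the playlist-filling rules above.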

With these tweaks, I certainly think this script and the resulting playlist are worth listening to. It’s fresher and more dynamic than the 14-year-old playlist I currently listen to. Overall, the script works so that I now see songs and artists I’ve not listened to—or even heard of—before. Mission (somewhat) accomplished.

Indeed, with the genres feature enabled, I could add a considerable amount of new music to the list, but I am trying to keep it a manageable size, under a hundred tracks. Thankfully, I don’t need to worry about the script pulling “Death Metal,” “Rainy Day,” and “Disney” categories out of thin air because I can control which ones get chosen. Thus, I can coerce the selection while allowing plenty of randomness and newness.

I have limited the number of genre-specific songs so I don’t get overloaded with one music category over others.

Not new

There are a couple of wrinkles. One song that popped into the playlist this week is “Never Going Back Again” by Fleetwood Mac, recorded live at The Forum, Inglewood, in 1982. That’s older than the majority of what I listened to in all of August! It looks like Warner Records Inc. released that live album on 21st August 2024, well within my eleven-day boundary, so it’s technically within “The Rules” while also not being fresh, new music.

There’s also the compilation complication. Unfresh songs from the past re-released on “TOP HITS 2024” or “DANCE 2024 100 Hot Tracks” also appeared in my search criteria. For example, “Talk Talk” by Charli XCX, from her “Brat” album, released in June, is on the “DANCE 2024 100 Hot Tracks” compilation, released on 23rd August 2024, again, well within my eleven-day boundary.

I’m in two minds about these time-travelling playlist interlopers. I have never knowingly listened to Charli XCX’s “Brat” album by choice, nor have I heard live versions of Fleetwood Mac’s music. I enjoy their work, but it goes against the “new music” goal. But it is new to me, which is the whole point of this exercise.

The further problem with compilations is that they contain music by a variety of artists, so they don’t hit the “max-per-artist” limit but will hit the “max-per-album” rule. However, if the script finds multiple newly released compilations in one run, I might end up with a clutch of random songs spread over numerous “Various Artists” albums, maxing out the playlist with literal “filler.”
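A sketch of how such caps might be enforced. The limit values and the track-dict shape are assumptions for illustration; the real script's rules may differ:

```python
from collections import Counter

def apply_caps(tracks, max_per_artist=2, max_per_album=2):
    """Walk the candidate tracks in order, skipping any that would push an
    artist or an album over its limit. Limits here are illustrative; each
    track is a dict with 'artist' and 'album' keys for this sketch."""
    by_artist, by_album = Counter(), Counter()
    kept = []
    for track in tracks:
        if by_artist[track["artist"]] >= max_per_artist:
            continue
        if by_album[track["album"]] >= max_per_album:
            continue
        by_artist[track["artist"]] += 1
        by_album[track["album"]] += 1
        kept.append(track)
    return kept
```

A compilation's tracks share one album but many artists, so they sail past the per-artist cap and only pile up against the per-album cap, which is exactly the loophole described above.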

I initially allowed compilations, but I’m irrationally bothered that one day, the script will add “The Birdie Song” by Black Lace as part of “DEUTSCHE TOP DISCO 3000 POP GEBURTSTAG PARTY TANZ SONGS ZWANZIG VIERUNDZWANZIG”.

Nein.

I added a filter to omit any “album type: compilation,” which knocks that bopping-bird-based botherer squarely on the bonce.
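That filter is straightforward because Spotify album objects carry an album_type field of "album", "single", or "compilation". A minimal sketch, with hypothetical album dicts in the shape the API returns:

```python
def not_compilation(album):
    """Spotify album objects have an album_type of 'album', 'single', or
    'compilation'; keep everything except the last kind."""
    return album.get("album_type") != "compilation"

# Hypothetical album dicts for illustration:
candidates = [
    {"name": "Brat", "album_type": "album"},
    {"name": "DANCE 2024 100 Hot Tracks", "album_type": "compilation"},
]
keepers = [a for a in candidates if not_compilation(a)]
```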

No more retro Europop compilation complications in my playlist. Alles klar.

Not yet

Something else I had yet to consider is that some albums have release dates in the future. Like a fresh-faced newborn baby with an IDE and API documentation, I assumed that albums published would generally have release dates of today or older. There may be a typo in the release_date field, or maybe stuff gets uploaded and made public ahead of time in preparation for a big marketing push on release_date.
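Handling both quirks together, the eleven-day freshness boundary and the future-dated releases, might be sketched like this. The helper names are mine, but the date formats are the API's: release_date can be "YYYY", "YYYY-MM", or "YYYY-MM-DD" depending on release_date_precision.

```python
from datetime import date

def parse_release_date(text):
    """release_date can be 'YYYY', 'YYYY-MM', or 'YYYY-MM-DD' depending on
    release_date_precision; pad the coarse forms to a full date."""
    parts = [int(p) for p in text.split("-")]
    while len(parts) < 3:
        parts.append(1)
    return date(*parts)

def is_fresh(release_date, today=None, window_days=11):
    """True only for releases inside the window; future-dated albums are
    rejected rather than treated as brand new."""
    today = today or date.today()
    released = parse_release_date(release_date)
    if released > today:
        return False
    return (today - released).days <= window_days
```

Padding a bare "2024" to 1st January means year-precision releases almost always fall outside the window, which errs on the side of excluding stale reissues.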

I clearly do not understand the music industry or publishing process, which is fine.

Nuke it from orbit

I’ve been testing the script this week while prototyping it, leading up to the “Grand Launch” in September 2024 (next month/week). At the end of August I will wipe the slate (playlist) clean, and start again on 1st September with whatever rules and optimisations I’ve concocted this week. It will almost certainly re-add some of the same tracks after the 31st August “Grand Purge”, but that’s as expected, working as designed. The rest will be pseudo-random genre-specific tracks.

I hope.

Newsletter

I will let this thing go mad each day with the playlist and regroup at the end of September to evaluate how this scheme is going. Expect a follow-up blog post detailing whether this was a fun and interesting excursion or pure folly. Along the way, I did learn a bit more Python, the Spotify API, and some other interesting stuff about music databases and JSON.

So it’s all good stuff, whether I enjoy the music or not.

You can get further, more timely updates in my weekly email newsletter, or view it in the newsletter archive, and via RSS, a little later.

Ken said he got “joy out of reading your newsletter”. YMMV. E&OE. HTH. HAND.

Nomenclature

Every good project needs a name. I initially called it my “Personal Dynamic Playlist of Sixty tracks over Eleven days,” or PDP-11/60 for short, because I’m a colossal nerd. Since bumping the max-tracks limit for the playlist, it could be re-branded PDP-11/94. However, this is a relatively niche and restrictive playlist naming system, so I sought other ideas.

My good friend Martin coined the term “Virtual Zane Lowe” (Zane is a DJ from New Zealand who is apparently renowned for sharing new music). That’s good enough for me. Below are links to all three playlists if you’d like to listen, laugh, live, love, or just look at them.

The “Keepers” and “Sleepers” lists will likely be relatively empty for a few days until the script migrates my preferred and disliked tracks over for safe-keeping & archival, respectively.

November approaches

Come back at the end of the month to see if my script still works, if the selections are good, if I’m still listening to this playlist, and, most importantly, whether I enjoy doing so!

If it works, I’ll probably continue using it through October and into November as I commute to and from the office. If that happens, I’ll need to update the playlist artwork. Thankfully, there’s an API for that, too!
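For the artwork, spotipy exposes playlist_upload_cover_image, which takes base64-encoded JPEG data (Spotify caps the payload at 256 KB). A sketch, with the upload itself left uncalled since it needs credentials, the ugc-image-upload scope, and network access:

```python
import base64

def encode_cover(path):
    """Spotify wants playlist cover art as a base64-encoded JPEG string
    (the API caps the payload at 256 KB)."""
    with open(path, "rb") as handle:
        return base64.b64encode(handle.read()).decode("ascii")

def set_artwork(sp, playlist_id, path):
    """Upload via spotipy's playlist_upload_cover_image; requires the
    ugc-image-upload scope. Not called here (credentials and network)."""
    sp.playlist_upload_cover_image(playlist_id, encode_cover(path))
```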

I may consider tidying up the script and sharing it online somewhere. It feels a bit niche and requires a paid Spotify account to even function, so I’m not sure what value others would get from it other than a hearty chuckle at my terribad Python “skills.”

One potentially interesting option would be to map the songs in Spotify to another, such as Apple Music or even videos on YouTube. The YouTube API should enable me to manage video playlists that mirror the ones I manage directly on Spotify. That could be a fun further extension to this project.

Another option I considered was converting it to a web app, a service I (and other select individuals) can configure and manage in a browser. I’ll look into that at the end of the month. If the current iteration of the script turns out to be a complete bust, then this idea likely won’t go far, either.

Thanks for reading. AirPods in. Click “Shuffle”.

30 August, 2024 12:00PM

The Fridge: Ubuntu 24.04.1 LTS released

The Ubuntu team is pleased to announce the release of Ubuntu 24.04.1 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-severity bugs, with a focus on maintaining stability and compatibility with Ubuntu 24.04 LTS.

Kubuntu 24.04.1 LTS, Ubuntu Budgie 24.04.1 LTS, Ubuntu MATE 24.04.1 LTS, Lubuntu 24.04.1 LTS, Ubuntu Kylin 24.04.1 LTS, Ubuntu Studio 24.04.1 LTS, Xubuntu 24.04.1 LTS, Edubuntu 24.04.1 LTS, Ubuntu Cinnamon 24.04.1 LTS and Ubuntu Unity 24.04.1 LTS are also now available. More details can be found in their individual release notes (see ‘Official flavours’):

https://discourse.ubuntu.com/t/ubuntu-24-04-lts-noble-numbat-release-notes/39890

Maintenance updates will be provided for 5 years from the initial 24.04 LTS release for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Expanded Security Maintenance).

To get Ubuntu 24.04.1 LTS

In order to download Ubuntu 24.04.1 LTS, visit:

https://ubuntu.com/download

Users of Ubuntu 22.04 LTS will be offered an automatic upgrade to 24.04.1 LTS via Update Manager.

We recommend that all users read the 24.04.1 LTS release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://discourse.ubuntu.com/t/ubuntu-24-04-lts-noble-numbat-release-notes/39890

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

https://discourse.ubuntu.com/contribute

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

https://ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

https://ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Aug 29 19:19:17 UTC 2024 by Łukasz ‘sil2100’ Zemczak on behalf of the Ubuntu Release Team

30 August, 2024 02:04AM

August 29, 2024

Ubuntu Studio: Ubuntu Studio 24.04.1 LTS Released

Upgrades from 22.04 LTS also enabled!

The Ubuntu Studio team is pleased to announce the first service release of Ubuntu Studio 24.04 LTS, 24.04.1. This also marks the opening of upgrades from Ubuntu Studio 22.04 LTS to 24.04 LTS.

If you are running Ubuntu Studio 22.04, you should be receiving an upgrade notification in a matter of days upon login.

Notable Bugs Fixed Specific to Ubuntu Studio:

  • Fixed an issue where PipeWire could not send long SysEx messages when bridging to some MIDI controllers.
  • DisplayCal would not launch because it required a Python version older than the Python 3.12 shipped with Ubuntu 24.04. This has been fixed.
  • The new installer doesn’t configure users to be part of the audio group by default. However, upon first login, the user that just logged in is automatically configured, but this requires a complete system restart to take effect. A fix to make this seamless is in progress.

Other bugfixes are in progress and/or fixed and can be found in the Ubuntu release notes or the Kubuntu release notes for the desktop environment.

How to get Ubuntu Studio 24.04.1 LTS

Ubuntu Studio 24.04.1 LTS can be found on our download page.

Upgrading to Ubuntu Studio 24.04.1 LTS

If you are running Ubuntu Studio 24.04 LTS, you already have it.

If you are running Ubuntu Studio 22.04 LTS, wait for a notification in your system tray. Otherwise, see the instructions in the release notes.

Contributing and Donating

Right now we mostly need financial contributions and donations. As stated before, our project leader’s family is in a time of need with his wife losing her job unexpectedly. We would like to keep the project running and be able to give above and beyond to help them.

Therefore, if you find Ubuntu Studio useful and can find it in your heart to give what you think it’s worth and then some, please do give.

Ways to donate can be found in the sidebar as well as at ubuntustudio.org/contribute.

29 August, 2024 09:17PM

Ubuntu Blog: Upgrade your desktop: Ubuntu 24.04.1 LTS is now available

Whether you’re a first time Linux user, experienced developer, academic researcher or enterprise administrator, Ubuntu 24.04 LTS Noble Numbat is the best way to benefit from the latest advancements in the Linux ecosystem — just in time for Ubuntu’s 20-year mark.

The release of Ubuntu 24.04.1 LTS represents the consolidation of fixes and improvements identified during the initial launch of Ubuntu 24.04 LTS. From today, Ubuntu 24.04.1 LTS is available to download and install from our download page.

Users of Ubuntu 22.04 LTS will shortly be prompted to upgrade to 24.04 LTS directly from their desktop, either automatically or as part of a scheduled update. This is a great time to start exploring Ubuntu 24.04 LTS. If you missed our press release, don’t worry. We’re summing up the most exciting developments in this post to get you ready to upgrade.

User experience and performance enhancements


GNOME 46 brings a host of performance and usability improvements including file manager search and performance, expandable notifications and consolidated settings options for easier access.

We’ve also bolstered our provisioning options through our installer with ZFS encryption and by integrating autoinstall support. That, along with our new App Center and a dedicated app for firmware updates, brings you a richer experience — and with better performance to boot.

Read more about the new features in Ubuntu Desktop 24.04 LTS in our deep dive.

Your development and data science tools, right at home on Ubuntu


As the target platform for open source software vendors and community projects, Ubuntu 24.04 LTS ships with the latest toolchains for Python, Rust, Ruby, Go, PHP and Perl, and users get first access to the latest updates for key libraries and packages.

In fields such as data science and machine learning, Ubuntu is the operating system (OS) of choice for many of the most popular frameworks. This includes OpenCV, TensorFlow, Keras, PyTorch and Kubeflow, as well as our Canonical Data Science Stack, which allows you to set up ML environments with ease right out of the box.

Everything you would need in your organization


Ubuntu 24.04 LTS has a bevy of features that make it feel right at home in your enterprise. 

Netplan is included as the default tool to configure networking on desktop (as it has been the default on server), allowing administrators to configure their Ubuntu estate regardless of platform. The recent release of Netplan 1.0 brings with it additional features around wireless compatibility and usability improvements such as netplan status --diff, making Netplan a great complement to Network Manager through its bidirectional integration.

Ubuntu’s prevalence in engineering and data science teams in enterprise, academic and federal institutions often means IT administrators rely on Canonical’s Landscape for monitoring, managing and reporting on the compliance of machines across desktop, server, and cloud. 

With Windows remaining the corporate OS preferred by other departments, integrating with Active Directory simplifies the work of administrators by allowing them to manage Ubuntu instances using the Active Directory knowledge they’ve grown accustomed to.

How to integrate Ubuntu Desktop with Active Directory

The additional support for Group Policy Objects extended these capabilities, along with privilege management, remote script execution, certificate autoenrollment, network shares, network proxies, AppArmor profiles and a host of other requested functionality.


Underneath the hood, Ubuntu 24.04 LTS also includes additional security improvements for those developing and distributing software within the Ubuntu ecosystem.

From a more robust distribution of Personal Package Archives (PPAs), to enhanced restrictions of unprivileged user namespaces in AppArmor, and an improved proposed pocket for granular package installation, Ubuntu 24.04 LTS gives you more options to secure your applications and your data.

More value for enterprises with Pro

Running the latest OS offers new features and enhanced performance, which is a good choice for new deployments. However, organizations also prioritize stability, security, and support, whether it’s for a fleet of workstations or established production systems.

Canonical’s Ubuntu Pro subscription caters to these needs. As with other long term supported releases, 24.04 LTS not only includes five years of standard security maintenance on the main Ubuntu repository, but that coverage is extended to 10 years and over 34,000 packages with Ubuntu Pro. An additional 2 years of support can be purchased through our Legacy Support add-on, giving you a 12 year commitment on this release and going back to 14.04 LTS.

This .1 update to Ubuntu 24.04 LTS Noble Numbat brings additional stability to our release, making it the ideal choice for your enterprise.

Get started today

29 August, 2024 05:31PM

hackergotchi for Tails

Tails

Tails report for July 2024

Highlights

  • On our first month back from vacation, we continued making it easier to recover from common failure modes without requiring technical expertise:

    • we drafted an implementation of our design to detect, report, and repair corruption of the Persistent Storage

    • we finished implementing our plans to improve the detection of and recovery from low-memory situations. Going from prototype to implementation, this work was a great example of the 90-90 rule in action: the first 90% of the work consumed the first 90% of our time, and the remaining 10% accounted for the other 90% of our time

  • Over the past year, we have been close downstream of the Tor Project's design and implementation of Arti. This month, we reached a significant milestone in our collaboration: we prepared a prototype of Tails in which multiple applications use Arti.

  • freiheitsfoo, one of our longest supporters, renewed their sponsorship of Tails! Welcome aboard for another year of resisting censorship and surveillance online!

Releases

📢We released Tails 6.5!

In Tails 6.5, we brought:

  • an updated Tor Browser with cool letterboxing improvements, and the latest Debian (12.6)

  • repairs to first-boot partitioning that many users were facing issues with after Tails 6.4

  • fixes to connecting via mobile broadband, LTE, and PPPoE DS. This has been a persistent issue in the Tails 6 series so far.

To know more, check out the Tails 6.5 release notes and the changelog.

Metrics

Tails was started more than 779,262 times this month. That's a daily average of over 25,946 boots.

29 August, 2024 03:38PM

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Carter: Orphaning bcachefs-tools in Debian

Around a decade ago, I was happy to learn about bcache – a Linux block cache system that implements tiered storage (like a pool of hard disks with SSDs for cache) on Linux. At that stage, ZFS on Linux was nowhere close to where it is today, so any progress on gaining more ZFS features in general Linux systems was very welcome. These days we care a bit less about tiered storage, since any cost benefit in using anything other than NVMe tends to quickly evaporate compared to the time you eventually lose on it.

In 2015, it was announced that bcache would grow into its own filesystem. This was particularly exciting and it caused quite a buzz in the Linux community, because it brought along with it more features that compare with ZFS (and also btrfs), including built-in compression, built-in encryption, check-summing and RAID implementations.

Unlike ZFS, it didn’t have a dkms module, so if you wanted to test bcachefs back then, you’d have to pull the entire upstream bcachefs kernel source tree and compile it. Not ideal, but for a promise of a new, shiny, full-featured filesystem, it was worth it.

In 2019, it seemed that the time had come for bcachefs to be merged into Linux, so I thought it was about time we had the userspace tools (bcachefs-tools) packaged in Debian. Even if the Debian kernel wouldn’t have it yet by the time the bullseye (Debian 11) release happened, it might still have been useful for a future backported kernel or users who roll their own.

By total coincidence, the first git snapshot that I got into Debian (version 0.1+git20190829.aa2a42b) was committed exactly 5 years ago today.

It was quite easy to package it, since it was written in C and shipped with a makefile that just worked, and it made it past NEW into unstable on 19 January 2020, just as I was about to head off to FOSDEM as the pandemic started, but that’s of course a whole other story.

Fast-forwarding towards the end of 2023, version 1.2 shipped with some utilities written in Rust. This caused a little delay, since I wasn’t at all familiar with Rust packaging yet, so I shipped an update that didn’t yet include those utilities, and saw this as an opportunity to learn more about how the Rust eco-system worked and Rust in Debian.

So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team, who said that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.0.8 and bindgen from 0.69.9 to 0.66.

I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).

With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream.

As it stands now, bcachefs-tools is impossible to maintain in Debian stable. While my primary concerns when packaging are for Debian unstable and the next stable release, I also keep in mind people who have to support these packages long after I stopped caring about them (like Freexian who does LTS support for Debian or Canonical who has long-term Ubuntu support, and probably other organisations that I’ve never even heard of yet). And of course, if bcachefs-tools doesn’t have any usable stable releases, it doesn’t have any LTS releases either, so anyone who needs to support bcachefs-tools long-term has to carry the support burden on their own, and if they bundle its dependencies, then those as well.

I’ll admit that I don’t have any solution for fixing this. I suppose if I were upstream I might look into the possibility of at least supporting a larger range of recent dependencies (usually easy enough if you don’t hop onto the newest features right away) so that distributions with stable releases only need to concern themselves with providing some minimum recent versions, but even if that could work, the upstream author is 100% against any solution other than vendoring all its dependencies with the utility and insisting that it must only be built using these bundled dependencies. I’ve made 6 uploads for this package so far this year, but still I constantly get complaints that it’s out of date and that it’s ancient. If a piece of software is considered so old that it’s useless by the time it’s been published for two or three months, then there’s no way it can survive even a usual stable release cycle, nevermind any kind of long-term support.

With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done. I made an upload to experimental so that it’s still available if someone wants to work on it (without having to go through NEW again), it’s been removed from unstable so that it doesn’t migrate to testing, and the ancient (especially by bcachefs-tools standards) versions that are in stable and oldstable will be removed too, since they are very likely to cause damage with any recent kernel versions that support bcachefs.

And so, my adventure with bcachefs-tools comes to an end. I’d advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there’s really anyone at all that is succeeding in providing stable support for it.

29 August, 2024 01:04PM

Ubuntu Blog: Unleash the power of open source in London: Canonical Partner Executive Summit

Imagine a world where open source isn’t just a buzzword, but a catalyst for innovation and growth. A world where collaboration, knowledge sharing, and cutting-edge technology converge to unlock unprecedented opportunities.

In the heart of London, a groundbreaking event brings together the biggest trends and opportunities in the open source world: the Canonical Partner Executive Summit 2024. This exclusive gathering is designed to unite industry leaders, foster collaboration, and explore the transformative power of open source. As the world embraces open-source technologies at an unprecedented rate, this summit offers a unique opportunity to stay ahead of the curve and unlock the full potential of your business.

What to expect at the Partner Executive Summit

The Canonical Partner Executive Summit London is an engaging event packed with insightful sessions and networking opportunities for IT companies looking to partner with Canonical and enhance customer value through open-source solutions.

Attendees at the summit will experience a dynamic and informative programme. They’ll have the chance to delve deep into the latest trends and innovations within the open-source ecosystem, gain insights from industry leaders, and explore cutting-edge technologies.

From understanding the intricacies of the UKI and EMEA markets to discovering the transformative potential of open-source solutions in cloud infrastructure, edge computing, and telecommunications, participants will leave the summit empowered with the knowledge and connections needed to propel their businesses forward. Moreover, the event offers abundant opportunities for networking and collaboration with industry peers.

Join us

The Canonical Partner Executive Summit London 2024 is an exclusive event designed to equip you with the knowledge and connections to thrive in the open-source era. Don’t miss this opportunity to be at the forefront of innovation and shape the future of your business.

Date: 18 September, 2024
Time: From 14:00 onwards
Location: Canonical’s Office, London 

Join us for a day of insightful discussions and networking at the Canonical Partner Executive Summit in London. Stay up-to-date on the latest trends and innovations in open source.


Click here to view the agenda and register:

Register now

Learn more about the Partner Executive Summits. Click here.

Please note: This event is exclusive to Canonical partners. Registration is subject to confirmation, and spaces are limited.

29 August, 2024 09:12AM

hackergotchi for GreenboneOS

GreenboneOS

Greenbone Basic – Vulnerability Management for SMEs

Greenbone Basic: Small Businesses Can Now Easily Protect Themselves from Vulnerabilities

Cyber attacks have become the greatest threat to modern businesses of all sizes. Numerous attacks have caused a stir, cost billions, and resulted in production downtime and significant damage all over the world. Not a lot has changed in 2024, and the threat situation is the same for small and large companies: Studies show that nine out of ten businesses are aware of the danger and intend to invest in protection against attackers, malware, and ransomware.
Only modern vulnerability management, as provided by us, can offer professional protection. For many years, the products of our company have proven their reliability and quality under the highest demands, every single day, in critical infrastructures, government agencies, corporations, and organisations. Now, we are launching a tailor-made solution for small and medium-sized enterprises (SMEs).

The Full Power of Greenbone’s Vulnerability Management for SMEs

What large organizations have been deploying for a long time is now also available for smaller companies: We are launching a new version of our proven Greenbone Vulnerability Management: Greenbone Basic is perfect for small and medium-sized businesses, is more affordable than the competition, is based on our tried and tested Greenbone Enterprise Products, and offers the well-known, strong protection for everyone.

At an entry-level price, Greenbone Basic delivers many of the features of the larger product, while remaining significantly more affordable than comparable competitor offerings. Like the Greenbone Enterprise Products, Greenbone Basic also comes with the best detection rate for vulnerabilities, the fastest zero-day protection, and ease of use.

Greenbone Basic excels with fast deployment, ease of use and efficient operation – qualities that professional vulnerability management demands in today’s infrastructures.

“We tailored Greenbone Basic specifically to the needs of SMEs because we were increasingly approached by representatives of smaller and medium-sized companies. SMEs have very specific requirements but today face the same challenges as large enterprises.”

Hannes Nordlohne, Business Development Manager

Greenbone Basic scans up to 200 IP addresses within 24 hours and comes with Greenbone’s proven and continuously updated CVE tests for predictive scanning. Administrators can quickly and easily deploy the software, which is prepared for various platforms. Basic supports common server virtualization such as Microsoft’s Hyper-V, VMware, or Oracle Virtualbox, with integration also covering functions like mandatory regular backups.

Features and Functionality

For many years, we have been the market leader in open-source vulnerability management. This extensive experience and knowledge is now utilized in Greenbone Basic, delivering features indispensable in small and medium-sized environments – such as the intuitive and clear browser-based user interface.

Screenshots: start page and scan results.

The graphical web interface not only provides a constantly updated overview of the system’s status and performance, but it also allows for the comfortable management and initiation of scans, and integrates reporting plugins with filters, sorting, notes, and risk assessment. A dedicated module handles certificate management, scans can be automated via schedules, and after completion, Greenbone Basic automatically provides administrators with all relevant information. These reports are available in PDF, HTML, text, and XML formats.

Modern vulnerability management aims to identify, assess, prioritize, and remediate security gaps (vulnerabilities) in IT systems, networks, applications, and devices. This process is crucial to ensure the security of a company’s or organization’s IT infrastructure and to minimize risks from cyber attacks or data loss.

Greenbone identifies vulnerabilities using customized tests and proprietary scanners, as well as proven open-source software, to classify threats and suggest solutions, patches, configuration changes, workarounds, or updates to the administrator.

“Greenbone has one of the largest databases and the best algorithms for vulnerability detection. Greenbone Basic customers benefit from the fastest zero-day protection on the market – we respond faster than other providers, and our system almost always detects new vulnerabilities on the first day. Greenbone Basic brings all these advantages to small businesses as well.”

Benjamin Höner, Chief Product Officer, Greenbone

High Performance at entry level price: Greenbone Basic

For just €2,450 per year, small and medium-sized businesses can now afford the same protection that large companies use to safeguard their infrastructure. Especially in comparison to the free version “Greenbone Free”, Greenbone Basic offers significantly more security and numerous extra features: in addition to the many scans and tests required by enterprise customers (for Oracle, Microsoft, Cisco, VMware, Palo Alto, Trend Micro, Fortinet, Juniper, and many more), there are also automatic scans and the essential integration into enterprise directories like LDAP and RADIUS.

Feature comparison (Greenbone Free, Greenbone Basic, Greenbone Enterprise)

However, for those who need sensors, API access, remediation tickets, and other enterprise features, and who also want to purchase support from Greenbone, the classic Greenbone Enterprise is the right choice, suitable for businesses of all sizes. You can request the Greenbone Basic entry-level product here and test it free of charge for seven days.

29 August, 2024 07:25AM by Markus Feilner

hackergotchi for Deepin

Deepin

Recognizing deepin en Español: An Integral Part of the deepin Ecosystem

We firmly believe that the strength of the deepin operating system lies not only in its outstanding features but also in its vibrant overseas communities. Among these, the Spanish-speaking community, deepin en Español, holds a significant position. Over the years, this community has played an indispensable role in bringing the deepin operating system to Spanish-speaking users worldwide, embodying the spirit of collaboration, innovation, and inclusivity. Today, we would like to introduce deepin en Español and express our sincere gratitude for their ongoing dedication.   The Origin and Growth of the deepin en Español Community The deepin en Español community was ...Read more

29 August, 2024 06:43AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E313 Rute Correia I

This week we were visited by Rute Correia: inveterate gamer, grinder extraordinaire and one of the Tias que Malham em Jogos. We talked a bit about everything, but above all about video games, computers, cute consoles in many colours and Free Software (of course). Is it really inevitable that someone who grows up surrounded by computers and screens becomes near-sighted and a nerd? Hmmm. What is the difference between a Steamdeck and a Stream Deck? Is it true that they call her the Zita Seabra of Consoles, for having abandoned the PC? The conversation went so well, and we had so much fun, that we had to split this episode into two parts – this is the first.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

29 August, 2024 12:00AM

August 27, 2024

Ubuntu Blog: Join industry experts at Data and AI Masters

Canonical’s new event brings you hands-on workshops from NVIDIA, Intel, Google, Microsoft and Dell

Join us this October 1-2 as we host our inaugural Data and AI Masters event. Streamed online, the conference will be a two-day deep dive into the latest innovations in machine learning, data science, and data solutions. Together with our partners – including NVIDIA, Intel, Google, Microsoft, Dell and others – we’ll deliver a series of technical workshops and presentations, all with a focus on actionable outcomes.


There are dozens, even hundreds of AI events happening this year, so why should you attend this one? The answer’s in the question. These events have become prolific because of the overwhelming hype surrounding AI.

Data and AI Masters is different. It’s about what happens when great engineers and data scientists put their minds together to deliver real-world outcomes. No speculation, no hyperbole – only real, practical insights and exciting use cases from industry leaders. Our goal is to equip attendees with useful knowledge and best practices to help drive meaningful value with data and AI at scale.

The title is no accident either. There is no AI without data, which is why we’ll be covering the entire data and AI stack, including everything from choosing your database to deploying GenAI in production.

Register now

What to expect at Data and AI Masters

Day 1 is fully geared towards technical attendees, featuring a schedule packed with hands-on workshops and talks from Canonical’s engineers, product managers and partners. Follow along on your own computer as we cover topics including:

  • How to set up a data science environment with three commands
  • How to create a cloud native data lake
  • How to build LLMs with retrieval augmented generation (RAG)

On day 2, we’ll be shifting the focus to thought leadership and real world use cases. Leading players in data and AI will share their latest developments, including confidential AI, GenAI and the role of an open source software stack. The day will also feature organisations that are already putting theory into practice and achieving value from data and AI initiatives. Hear how the European Space Agency is advancing space exploration with AI, and how Rehrig Pacific Company is transforming the logistics industry.

The event will close with a discussion panel between leaders in automotive, telco and silicon, exploring the outlook of data and AI across sectors.

See you soon

You may know Canonical for Ubuntu, and how we deliver a reliable, enterprise-grade operating system backed by long-term security maintenance and a predictable release cadence. What you may not know is that we offer the same benefits for the wider open source ecosystem, including data and AI. We are experts in delivering the end-to-end, open source data and AI stack, we partner with leading organisations in the industry, and we can’t wait to share our learnings with you this October.

To read more about the event, see the full agenda and register, visit the Data and AI Masters event page.

For a taste of the insights you can expect at the conference, read our case study with the European Space Agency.

27 August, 2024 03:39PM


hackergotchi for VyOS

VyOS

VyOS Project August 2024 Update

Hello, Community! 

This month's development news includes many bug fixes and features, including remote access IPsec using VTI interfaces, support for WPA enterprise clients, and machine-readable tech support reports.  

 

27 August, 2024 12:12AM by Daniil Baturin (daniil@sentrium.io)

August 26, 2024

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 854

Welcome to the Ubuntu Weekly Newsletter, Issue 854 for the week of August 18 – 24, 2024. The full version of this issue is available here.

In this issue we cover:

  • “Something has gone seriously wrong,” dual-boot systems warn after Microsoft update
  • SRU announcement
  • Call for nominations: Ubuntu Community Council
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • Weekly Meeting Reports
  • Starcraft Clinic 2024-Aug-16
  • Midwest Superfest and Software Freedom Day 2024
  • UbuCon Asia 2024
  • UbuCon Korea 2024 has wrapped up with 151 attendees this year!
  • LoCo Events
  • Ubuntu WSL channel on Matrix
  • The CMA wants your comments on web apps
  • Other Community News
  • Canonical News
  • In the Blogosphere
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • soumyadghosh
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


26 August, 2024 11:04PM by guiverc

Call for nominations: Ubuntu Community Council

We’re looking for motivated people that want to join the Ubuntu Community Council!

The Community Council is the highest governance body of the Ubuntu project. They handle Code of Conduct violations, mediate conflict, and support the community.

For more concrete examples of what the Ubuntu Community Council works on, take a look at “A year in the Ubuntu community council”.

Who can apply

Any Ubuntu Member can apply.

Who we are looking for

The Ubuntu project turned 20 this year, but is still in constant flux. The advent of new communication platforms, new projects under our umbrella, and the ever-growing popularity of the project requires our community to evolve. We need to make sure Ubuntu is set to tackle the challenges of the next 20 years. It needs a strong and active community council to guide the project forwards.

  • You show humanity, gentleness and kindness in your communication.
  • You create a welcoming atmosphere.
  • You want to invest time in the next two years to handle CoC violations, mediate conflict and help improve the Ubuntu community.
  • You are willing to regularly meet with the other council members

Why you should apply

  • You will be able to play a significant role in building the future of the Ubuntu community.
  • You will be able to work directly with the Canonical Community team to improve how our community works and to make it a more welcoming place.
  • You will get wide recognition of your contributions with a high-profile, elected, position.

How to apply

You can apply by sending your self-nomination to community-council@lists.ubuntu.com. In that email, please include the following.

  • Your name
  • Your Launchpad ID
  • Whether you’re a Canonical employee
  • A link to a page describing who you are, and explaining why you want to join this council. This can be a Discourse post, a page on your personal website, or your page in the Ubuntu Wiki.

Nominations are now open and will close on Sunday September 22nd, 23:59 UTC. After that, the Community Council will review the submissions and will set up an election.

Please do not hesitate to share this post with anyone you think would be a great fit for the Community Council!

Election Timeline

  • Sunday September 22nd, 23:59 UTC: Call for nominations closes
  • Sunday September 29th: voting starts
  • Sunday October 13th, 23:59 UTC: voting ends

Originally posted to the Ubuntu Community Discourse on Sun Aug 25 2024 by Merlijn Sebrechts, on behalf of the Ubuntu Community Council.

26 August, 2024 12:58PM by guiverc


August 25, 2024

hackergotchi for OSMC

OSMC

OSMC's August update is here with Kodi 21.1

We've been very busy behind the scenes and we're now happy to announce the availability of Kodi v21 (codename Omega) for all OSMC supported devices. All devices supported by OSMC on Kodi v20 remain supported for Kodi v21.

To ensure that this was a stable release and that the user experience stays at the highest level you would expect from OSMC, we waited until the first point release of Kodi v21 before making a release.

Kodi v21.1

Kodi v21.1 (Omega) is now available as standard on OSMC, and release details for Kodi v21.0 can be found here.

On the OSMC side, we've made a number of changes to keep things running smoothly:

Bug fixes

  • Fixed an issue with backing up user data via My OSMC to an SMB share
  • Fixed an issue with playback of some VC-1 content on Vero 4K / 4K + and V

Improving the user experience

  • Improved OSMC remote keymap messages
  • Add support for custom EDID on Vero 4K / 4K + and V
  • Removed the unnecessary space in a log URL returned after uploading logs to My OSMC
  • Added Full SBS / TAB support on Vero 4K / 4K + and V
  • Vero V: implemented full range video output
  • OSMC Skin: add support for Kodi v21 and new view type selection dialog

Miscellaneous

  • Updated and improved translations
  • Added support for 2560x1440p60 video mode on Vero 4K / 4K + and V
  • Updated Vero V WiFi driver

Wrap up

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC set up. Of course — if you have updates scheduled automatically you should receive an update notification shortly.

If you enjoy OSMC, please follow us on X, like us on Facebook and consider making a donation if you would like to support further development.

You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the most out of OSMC.

Vero V is our latest and greatest flagship and the best way to enjoy OSMC. It's also currently on sale, so grab a great device at a great price while you can!

25 August, 2024 12:13AM by Sam Nazarko


August 22, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Simos Xenitellis: How to use your Android phone with ADB in an Incus container

Incus is a manager for virtual machines and system containers.

A system container (hereafter, container) is an instance of an operating system that runs on a computer alongside the main operating system. Unlike a virtual machine, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently give an Incus container access to the Android USB debugging (ADB) of our Android mobile phone. Normally, you would run adb commands on the host. In this case, instead, we launch a container and give it access to ADB. We set things up so that only the container can access the phone over ADB.

The reason why I am writing this post is that there are several pitfalls along the way. Let’s figure out how it all works.

Setting up the Android phone for USB Debugging

To enable USB debugging on your Android phone, there’s a hidden list of steps: go into your phone settings and tap seven times on the appropriate place of the About page, after which your phone shows the message You are now a developer. You will need to search for the exact steps for your device, as the place where you need to tap seven times may differ among manufacturers.

Still on the phone, you then need to visit another location in the phone Settings, one that is called Developer options. In there, you need to scroll a bit until you see the option USB debugging. You will need to enable that. When you click to enable, you will be presented with a warning dialog box about the ramifications of having enabled the USB debugging. Read that carefully, and enable USB debugging. Note that after this exercise, and if you do not need to use ADB for a long period, you should disable the Developer options. The risk with the enabled USB debugging is that if you connect your phone with a (data) USB cable to some malicious computer or even a malicious USB charger, they may take over your device in a very bad way.

In the first screenshot it shows the warning when you try to enable USB debugging. The second screenshot shows that the USB debugging has been enabled successfully.

There’s an option in the second screenshot to Revoke USB debugging authorizations. I recommend doing that, especially if you have already connected your phone to your computer. By doing so, we can make sure that the host is not able to connect successfully to the device, and only the container can do so. Note that when you try to connect to the device with adb, you get a dialog box on the phone asking whether to authorize this new connection.

When you connect your Android phone to your Linux host, the device should appear in the output of lsusb (list USB devices) as follows. I think the USB Vendor and Product IDs should be the same, 0x18d1 and 0x4ee7 respectively, and it should say “(charging + debug)” at the end. If you get something else, then you fell for the Android notification that says Use USB for [File Transfer / Android Auto]. That’s no good for us, since we need USB debugging. The proper setting is Use USB for [No data transfer]. Yeah, a bit counter-intuitive.

$ lsusb
...
Bus 005 Device 005: ID 18d1:4ee7 Google Inc. Nexus/Pixel Device (charging + debug)
...
$ 
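If you want to script this check, grepping the lsusb output for the vendor:product pair is enough. A minimal sketch, using the IDs from the listing above (adjust them for your own phone):

```shell
# Look for the phone's vendor:product ID pair in the lsusb output.
# 18d1:4ee7 is the pair shown above for a Google Nexus/Pixel device
# in debug mode; other manufacturers will report different IDs.
if lsusb | grep -q "18d1:4ee7"; then
    echo "Phone visible in USB debugging mode"
else
    echo "Phone not found; check the USB mode notification on the phone"
fi
```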

Setting up the host

In order to let the container have access to the phone, we need to make sure that no adb command is running on the host. We can make sure that this is the case by running the following. If the ADB server is running on the host, the container does not have access to the device. Interestingly, if you set up the container properly while ADB is running on the host, then as soon as you run adb kill-server on the host, the container immediately gains access to the device.

sudo adb kill-server

Creating the container

We now create the container, which we will call adb, and give it access to USB debugging on the Android phone. The way we work with Incus is that we create a container for the task of accessing the phone, and then keep that container around for whenever we need to access the phone. With the incus config device command, we add to the adb container a device called adb (you can use any name here), which is of type usb and has the vendor and product IDs shown below. Finally, we restart the container.

$ incus launch images:debian/12/cloud adb
Launching adb
$ incus config device add adb adb usb vendorid=18d1 productid=4ee7
Device adb added to adb
$ incus exec adb -- apt install -y adb
...
$ incus restart adb
$ 

Using adb in the Incus container

Let’s run adb devices in the container. You will most likely get unauthorized. When you run this command, your phone will ask whether you want to authorize this access. You should authorize it.

$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	unauthorized
$ 

Now, run the command again.

$ incus exec adb -- adb devices
List of devices attached
42282370	device
$ 

And that’s it.

Stress testing adb in the Incus container

You would like to make this setup as robust as possible. One step is to remove the adb binary from the host.

Another test is to restart the container, and then check whether it still has access to the device.

$ incus restart adb
$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	device

$ incus restart adb
$ incus exec adb -- adb devices
* daemon not running; starting now at tcp:5037
* daemon started successfully
List of devices attached
42282370	device
$ 

Obviously, when you want to perform more work with adb, you can just get a shell into the container.

$ incus exec adb -- sudo --login --user debian
debian@adb:~$ adb devices
List of devices attached
42285120	device

debian@adb:~$ adb shell
komodo:/ $ exit
debian@adb:~$ logout
$ 

Conclusion

If you set up your system so that only a designated container has access to your Android phone over USB debugging, you get a somewhat more secure setup.

22 August, 2024 03:35PM

Simos Xenitellis: How to share a folder between a host and a container in Incus

Incus is a manager for virtual machines and system containers.

A system container (hereafter, container) is an instance of an operating system that runs on a computer alongside the main operating system. Unlike a virtual machine, a system container uses security primitives of the Linux kernel for separation from the main operating system. You can think of system containers as software virtual machines.

In this post we are going to see how to conveniently share a folder between the host and an Incus container. The common use case is that you want to share files directly between the host and a container, and you want file ownership to be handled well. Note that in a container the UID and GID do not correspond to the same values as on the host.

Therefore, we are looking at how to share storage between the host and one or more Incus containers. The other case, which we looked at earlier, is sharing storage between containers that has been allocated on the Incus storage pool.

Quick answer

incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED shift=true

Table of Contents

Background

On a Linux system the User ID (UID) and Group ID (GID) values are generally in the range of 0 (for root) to 65534 (for nobody). You can verify this by having a look at the /etc/passwd file on your system. In this file, each line is a different account, either a system account or a user account. A sample line is the following. There are several fields, separated by a colon character (:). The first field is the user name (here, root). The third field is the numeric user ID (UID), with the value 0. The fourth field is the numeric group ID (GID), also with the value 0. This is the root account.

root:x:0:0:root:/root:/bin/bash

Let’s do another one. The default user account in Debian and Debian-derived Linux distributions. The username in this Linux installation is myusername, the UID is 1000 and the GID is 1000 as well. This value of 1000 is quite common between Linux distributions.

myusername:x:1000:1000:User,,,:/home/user:/bin/bash

And now the last one. This is a special account with username nobody and UID/GID 65534. The purpose of this account is to be used for resources that somehow do not have a valid ID, or for processes and services that are expected to have the least privileges. In an Incus container you will see nobody and nogroup if you shared a folder and the translation of the IDs between the host and the container did not work well or did not happen at all.

nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
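As a quick check, you can extract these fields from /etc/passwd with awk; a minimal sketch, using the field positions described above:

```shell
# Print username, UID and GID for the accounts discussed above.
# The fields in /etc/passwd are colon-separated; fields 1, 3 and 4
# are the username, the numeric UID and the numeric GID.
awk -F: '$1 == "root" || $1 == "nobody" { print $1, $3, $4 }' /etc/passwd
```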

Somewhat sharing folders between the host and Incus containers

To share a directory from the host to your Incus container(s), you add a disk device using incus config device. Here, mycontainer is the name of the container, and mysharedfolder is the name of the share, which is only visible in Incus. You specify the source (an existing folder on the host) and the path (a folder in the container). However, there’s one big issue here: there is no translation between the UID and GID of the files and directories in that folder.

incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED

Here’s the view on the host, then in the container, and then again on the host; the last command shows an empty folder. I think it’s possible to view the contents of SHARED in the container from the viewpoint of the host. It would require a smart use of the nsenter command to enter the proper namespace, which I am not sure yet how to do for ZFS storage pools. If it worked, it would show the UID of myusername in the container, as seen from the host, as something like 101000 (base ID 100000 plus 1000 for the first non-root account). That is, if the container has files with UID/GID outside of its range of 100000 – 165535, those files are not accessible from within the container.

$ ls -l SHARED/
total 0
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 one
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 two
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 three
$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 one
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 three
-rw-rw-r-- 1 nobody nogroup 0 Jul 30 16:26 two
$ sudo ls -l /var/lib/incus/containers/mycontainer/rootfs/SHARED
total 0
$ 

This was a good exercise to show the common mistake in sharing a folder from the host to the container. You can now remove the disk device and do it again properly.

$ incus config device remove mycontainer mysharedfolder
Device mysharedfolder removed from mycontainer
$ 

How to properly share a folder between a host and a container in Incus

The shared folder requires some sort of automated UID/GID shifting so that, from the point of view of the container, the files have valid (in-range) UID/GID values. This is achieved with the parameter shift=true when we create the Incus disk device.

Let’s see a full example. We create a container and then create a folder on the host, which we call SHARED. Then, in that folder we create three empty files using the touch command. We are being fancy here. Then, we create the Incus disk device to share the folder into the container, and enable shift=true.

$ incus launch images:ubuntu/24.04/cloud mycontainer
Launching mycontainer
$ mkdir SHARED
$ touch SHARED/one SHARED/two SHARED/three
$ incus config device add mycontainer mysharedfolder disk source=/home/myusername/SHARED path=/SHARED shift=true
Device mysharedfolder added to mycontainer
$ 

where

  • incus config device, the Incus command to configure devices.
  • add, we add a device.
  • mycontainer, the name of the container.
  • mysharedfolder, the name of the shared folder. This is only visible from the host, and it’s just any name. We need a name so that we can specify the device when we want to perform further management.
  • disk, this is a disk device. Currently, Incus supports 12 types of devices.
  • source=/home/myusername/SHARED, the absolute path to the source folder. We type source= and then source folder, no spaces in between. The source folder (which is on the host) must already exist.
  • path=/SHARED, the absolute path to the folder in the container.
  • shift=true, the setting that automagically performs the necessary UID/GID translation.

In some cases, for example when the Linux kernel on your Incus host is old, the shift=true setting may not work. Some filesystems may not support it either. I leave it to you to report back, in the comments below, any cases where this did not work.

Let’s verify that everything works OK. First we see the files on the host. In my case, both the user myusername and the group myusername have UID and GID 1000. In the container (which was images:ubuntu/24.04/cloud) they also have the same UID/GID of 1000, but for this Ubuntu runtime the default username with UID 1000 is ubuntu, and the default group with GID 1000 is lxd. These are just names and are not important. If you are still unsure, use ls with the added parameter --numeric-uid-gid to show the numeric IDs. In both cases below, the UID and GID are 1000.

$ ls -l SHARED/
total 0
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 one
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 two
-rw-rw-r-- 1 myusername myusername 0 Ιουλ 30 19:26 three
$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 one
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 two
-rw-rw-r-- 1 ubuntu lxd 0 Jul 30 16:26 three
$ 
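Since the names shown for the same numeric IDs differ between host and container, comparing the raw numbers removes any ambiguity. A quick local sketch, using a made-up file name:

```shell
# Create a test file and list it with numeric UID/GID instead of names.
# `ls -n` is equivalent to `ls -l --numeric-uid-gid`: the third and
# fourth columns are the raw UID and GID, which are the values that
# the shift=true mapping actually translates.
touch /tmp/shift-demo
ls -n /tmp/shift-demo
```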

If we had created an images:debian/12/cloud container, here is how the files would look. The username with UID 1000 is debian, and the group with GID 1000 is netdev.

$ incus exec mycontainer -- ls -l /SHARED/
total 0
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 one
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 three
-rw-rw-r-- 1 debian netdev 0 Jul 30 16:26 two
$ 

You can create files and subfolders in the shared folder, either on the host or in the container. You can also share it between more containers.

Incus config device commands

Let’s have a look at the incus config device commands.

$ incus config device 
Usage:
  incus config device [flags]
  incus config device [command]

Available Commands:
  add         Add instance devices
  get         Get values for device configuration keys
  list        List instance devices
  override    Copy profile inherited devices and override configuration keys
  remove      Remove instance devices
  set         Set device configuration keys
  show        Show full device configuration
  unset       Unset device configuration keys

Global Flags:
      --debug          Show all debug messages
      --force-local    Force using the local unix socket
  -h, --help           Print help
      --project        Override the source project
  -q, --quiet          Don't show progress information
      --sub-commands   Use with help or --help to view sub-commands
  -v, --verbose        Show all information messages
      --version        Print version number

Use "incus config device [command] --help" for more information about a command.
$ 

We are going to use some of those commands. First, we list the disk devices; there is currently only one, mysharedfolder. We then show the disk device, which gives the list of parameters; these can be read individually and even set to different values. We then get the value of the path parameter. Finally, we remove the disk device. We have not shown how to override, set and unset.

$ incus config device list mycontainer
mysharedfolder
$ incus config device show mycontainer
mysharedfolder:
  path: /SHARED
  shift: "true"
  source: /home/myusername/SHARED
  type: disk
$ incus config device get mycontainer mysharedfolder path
/SHARED
$ incus config device remove mycontainer mysharedfolder 
Device mysharedfolder removed from mycontainer
$ 

Conclusion

We have seen how to create disk devices on Incus containers in order to share a folder from the host with one or more containers. Using shift=true we take care of the translation of UIDs and GIDs. I am interested in corner cases where this does not work; it would help me fix this post, and eventually move it to the official documentation of Incus.

22 August, 2024 01:08PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Univention App Highlights: SSL Certificates for Univention Corporate Server with Let’s Encrypt

Welcome to the sixth edition of our UCS App Series! Today, it’s all about Let’s Encrypt, the top provider of free SSL/TLS certificates. With the Let’s Encrypt app in the Univention App Center, you can easily and automatically secure your UCS services like Apache, Postfix, and Dovecot.

Before we dive into how the app can help you effortlessly secure UCS services like Apache, Postfix, and Dovecot, let’s cover some basics. Why are HTTP and other plain-text protocols so risky? What exactly is SSL/TLS, and why is a Certificate Authority (CA) a good idea? No tech jargon, no headaches—just simple explanations and a few clicks to keep your digital communications safe from prying eyes.

Why Are HTTP, IMAP, and POP3 Insecure?

Imagine you’re sending a postcard from your vacation in Sweden. Anyone who gets their hands on it along the way can read the message. That’s exactly how it works with plain-text protocols like HTTP (Hypertext Transfer Protocol), SMTP (Simple Mail Transfer Protocol), IMAP (Internet Message Access Protocol), and POP3 (Post Office Protocol). Data sent over these protocols travels unencrypted: an intruder could, just like a curious postman, intercept and read it in transit.

It gets even worse: Not only can attackers eavesdrop, but they could also alter the message before it reaches the recipient. Instead of a friendly greeting from Gothenburg, your plant-sitting neighbor might receive a rude message—and neither of you would be any the wiser. The result? Friendship in ruins.

This is where encrypted communication comes to the rescue. By using HTTPS (Hypertext Transfer Protocol Secure) and other secure protocols like SMTPS (Simple Mail Transfer Protocol Secure), IMAPS (IMAP over SSL), or POP3S (SSL/TLS extension for POP3), you can protect your data from prying eyes and tampering. Only the intended recipient with the right decryption key can read the messages, keeping your digital correspondence safe and sound.

What Makes SSL/TLS Encryption Different?

SSL/TLS is like a sealed envelope for your digital messages. When communication passes through this encrypted transport layer, an “s” is added to the protocol name: HTTP becomes HTTPS, SMTP turns into SMTPS, and so on. You can also secure other protocols with SSL, such as FTPS (File Transfer Protocol Secure) and LDAPS (Lightweight Directory Access Protocol Secure). This encryption ensures that the data exchanged remains secure and private. Even if someone intercepts the message along the way, they wouldn’t be able to read it.

But SSL/TLS doesn’t stop there. The protocol also verifies that you’re actually communicating with the intended website or email sender—much like an official seal on an envelope confirms the sender’s authenticity. This is done using SSL/TLS certificates, which act like digital IDs, ensuring that a website or communication partner is genuine and trustworthy. This way, you can be confident that your data is not only encrypted but also sent to the right recipient.

SSL vs. TLS: What’s the Difference?

SSL (Secure Sockets Layer), developed in the early 1990s, was the dominant encryption protocol for secure communications on the Internet for many years. However, it was eventually replaced by TLS (Transport Layer Security), which addressed several vulnerabilities found in SSL. Today, TLS is the standard for secure Internet connections. Introduced in 1999 as the direct successor to SSL 3.0, TLS is not only more secure but also more flexible and efficient. TLS 1.3, released in 2018, is currently the most widely used version for encrypting data streams between clients and servers.

Interestingly, despite TLS being the modern standard, the term “SSL” remains more commonly recognized and frequently used. As a result, many applications still refer to SSL or use the combined term SSL/TLS, though they typically mean the latest version of TLS, specifically TLS 1.3.

What Are SSL Certificates?

SSL certificates function much like a seal that verifies the authenticity and integrity of a letter. Originally designed to secure the transmission of sensitive data, such as credit card numbers or passwords, these digital documents confirm the identity of a website or mail server. Beyond just confirming identity, SSL certificates ensure that data is transmitted in an encrypted form, safeguarding it from unauthorized access.

Each SSL certificate contains several key pieces of information:

  • Public Key: This key is used to encrypt data before it is sent.
  • CA Information: This specifies which Certificate Authority (CA) issued the certificate.
  • Details About the Website or Mail Server: This includes the domain name, the issuing organization, and the certificate’s validity period.

The corresponding private key remains on the server (whether it’s a web server, mail server, etc.) and is used to decrypt the data once it has been received.

Why Do You Need a CA?

With the right tools, anyone can generate SSL certificates; for example, on Linux and macOS, the OpenSSL toolkit is often used for this purpose. However, these self-signed certificates are not officially recognized. To mark these certificates as secure for public use on websites and applications, a Certificate Authority (CA) is required. Much like a trusted agency that verifies the authenticity and trustworthiness of seals on letters, a CA guarantees the authenticity of the information stored within an SSL certificate.
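As a concrete illustration (the host name and validity period below are placeholder values), a self-signed certificate can be generated with the OpenSSL toolkit like this:

```shell
# Create a 4096-bit RSA key and a self-signed certificate valid for one year:
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout key.pem -out cert.pem -subj "/CN=ucs.example.com"

# Subject and issuer are identical, which is what "self-signed" means:
openssl x509 -in cert.pem -noout -subject -issuer
```

Because no CA has vouched for this certificate, browsers will warn about it, as described below.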

The primary responsibilities of a CA include:

  • Identity Verification: Before issuing an SSL certificate, the CA verifies the identity of the applicant.
  • Issuing Certificates: After successful verification, the CA issues the certificate and digitally signs it, making it recognizable as trustworthy by others.
  • Management and Revocation: The CA manages the issued certificates and can revoke them if necessary, such as when a certificate has been compromised or is no longer trustworthy.

Not all certificate authorities are created equal: there are countless CAs on the Internet, organized in a hierarchical structure that forms a trust chain. At the top of this hierarchy is the Root CA, which acts as the trust anchor. Certificates from all subordinate CAs are signed by the Root CA, meaning the Root CA has verified the identity and trustworthiness of the subordinate CAs.

For example, when you open a website in your browser, the browser checks the website’s certificate and traces it back through the chain of certificates, from the end certificate through all subordinate CAs, up to the Root CA, which most operating systems and browsers recognize as trustworthy. The entire chain must be intact for the certificate to be considered trustworthy.

Most web browsers display a warning message for self-signed certificates and allow users to add exceptions. For websites that are publicly accessible from the Internet, this isn’t just bad for reputation—it’s a significant security risk.

SSL Certificates for UCS

Univention Corporate Server (UCS) also uses certificates to secure and encrypt network communication—both between UCS systems and between UCS servers and client devices. All services provided by UCS that support SSL/TLS can use these certificates to ensure secure communication, including the directory service, mail server, notifier/listener services, web server, and more.

Each UCS Primary Node is automatically set up as a CA (Certificate Authority) for the domain. If additional UCS systems are added to the domain, the CA automatically issues new certificates for them. This setup is typically sufficient for internal communication. However, if services need to be accessible from the outside, it’s recommended to replace the self-signed certificate with one issued by a public certificate authority.

What Makes Let's Encrypt Special?

Enter Let’s Encrypt, a nonprofit certificate authority that provides free SSL/TLS certificates with the goal of making the entire Internet safer by promoting widespread and accessible web encryption.

What sets Let’s Encrypt apart is its emphasis on automation and ease of use. Utilizing the ACME protocol (Automatic Certificate Management Environment), which is based on JSON and HTTPS, Let’s Encrypt enables certificates to be automatically created, validated, installed, and renewed. Administrators don’t have to worry about expiring SSL certificates—Let’s Encrypt handles everything behind the scenes.
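The Let's Encrypt app for UCS ships and drives its own ACME client, but the flow is easy to picture with the widely used generic client certbot (the domain and e-mail address below are placeholders):

```
# Request a certificate; certbot proves control of the domain by placing
# an ACME challenge file under the webroot, then stores the issued
# certificate material under /etc/letsencrypt/:
$ certbot certonly --webroot -w /var/www/html \
    -d mail.example.com -m admin@example.com --agree-tos

# Renewal is equally automatic; this renews every certificate that is
# close to expiry:
$ certbot renew
```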

Let's Encrypt App in the Univention App Center

The Let’s Encrypt app available in the Univention App Center seamlessly integrates the Let’s Encrypt client into UCS, providing free SSL certificates for securing services like the Apache web server, Postfix (SMTP) mail server, and Dovecot (IMAP) mail server. The installation process is straightforward and involves just a few steps:

    1. Search for the Let’s Encrypt app in the Univention App Center and install it on your UCS system.
    2. In the app settings, enter the domains for which you need certificates.
    3. Select the services you want to secure: Apache, Dovecot, and Postfix.

A convenient feature of the app is that it sets up a cron job to automatically renew the certificates every 30 days, ensuring that a valid certificate is always in place without any manual intervention required.
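Such a renewal job is an ordinary cron entry. The snippet below is purely illustrative; the actual script path and schedule are managed by the app itself and may differ:

```
# Illustrative cron entry: re-run the renewal script on the 1st of each
# month (roughly every 30 days), early in the morning
30 3 1 * * root /usr/share/univention-letsencrypt/renew-certificate
```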

Installing and Configuring Let's Encrypt

After clicking on Install, select a computer from your domain where you want to install Let’s Encrypt. Keep in mind that the machine on which the Let’s Encrypt app is installed must be accessible via HTTP from the Internet to ensure that the certificates can be successfully issued and renewed.

Screenshot Let's Encrypt Installation

Clicking on Continue installs the univention-letsencrypt package. After that, the App Center will display a brief help page with setup tips. Once you’ve confirmed that your UCS system is accessible from the Internet via the desired domain, open the app settings to proceed with the configuration.

Screenshot Let's Encrypt Configuration

Configure the desired domain(s), separating multiple entries with commas. Click the checkboxes to enable the services you wish to secure, then click Apply Changes. After about 10 seconds, the configuration dialog will display the status, indicating whether the certificate was successfully configured.

Using the Staging Environment

Enable the Use Let’s Encrypt staging environment checkbox to test certificate retrieval and domain verification without altering the configuration of your services. After clicking Apply Changes, the app will contact Let’s Encrypt’s staging endpoint.

If successful, you will see a message indicating that the retrieved certificate is invalid for production use. Important: Do not activate any additional services that are not explicitly configured for use with Let’s Encrypt. Enabling unsupported services may cause them to fail to start or function incorrectly. This limitation is in place to protect system stability and ensure proper implementation of Let’s Encrypt certificates. Once testing is complete, you can disable the test option and click Apply Changes again to switch to the production endpoint and obtain a valid certificate.

Restarting UCS Services and Updating the CA

During the initial setup (not for later certificate renewals), you will need to restart the relevant services. To do this, open the System services module in the Univention Management Console and search for apache2, postfix, and dovecot. Check the box next to each service name and click on RESTART.

Screenshot Let's Encrypt System Services

To ensure that all programs installed in the UCS environment recognize the new certificate as valid, you need to run a command in the terminal once. Open a terminal and, as the root user, enter the following command:

update-ca-certificates

Adjusting Apache Configuration

You can configure the web server to redirect all incoming HTTP connections to secure HTTPS connections. To do this, open the System module in the Univention Management Console (UMC) and navigate to the Univention Configuration Registry. Set the variable apache2/force_https to yes. Do not modify any UCR variables that begin with apache2/force_https/exclude*: these exclusions define conditions under which secure connections are not enforced for the local machine and the portal. Also, ensure that port 80 remains open so that Let’s Encrypt can renew the certificate regularly.
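The same setting can also be made from a terminal with the ucr command-line tool instead of the UMC web interface:

```
# Enable the HTTP-to-HTTPS redirect:
$ ucr set apache2/force_https=yes
# Inspect the current value:
$ ucr get apache2/force_https
```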

For your reference, the Let’s Encrypt log file can be found at /var/log/univention/letsencrypt.log, where the app records all actions.

Letter Privacy 2.0: Modern-Day Confidentiality with Let's Encrypt and UCS

The importance of encryption on the Internet can’t be emphasized enough. Protecting your data—and that of others—from prying eyes and tampering is crucial! With the Let’s Encrypt app in the Univention App Center, this process is incredibly straightforward. The installation and setup are user-friendly, allowing administrators to ensure that services like Apache, Postfix, and Dovecot are always protected with valid SSL certificates in no time.

 

Do you have any questions or comments? Leave us a message and share your experiences and ideas—right here on the blog or in the Forum Univention Help!

Image source: Icon created by nangicon from flaticon.com

The post Univention App Highlights: SSL Certificates for Univention Corporate Server with Let’s Encrypt first appeared on Univention.

22 August, 2024 10:48AM by Luisa Schwichtenberg

hackergotchi for Purism PureOS

Purism PureOS

Flipping the Script: Exploring New Paradigms in Secure Mobile Computing

Secure Networking Infrastructure + Secure Smartphones Last month, Purism announced a collaboration with Abside to deliver secure mobile solutions for the U.S. government and NATO countries. This solution leverages private cellular networks (PCNs) owned and operated by government entities rather than the traditional dependence on Public Switched Network Providers (PSNs) such as AT&T, Verizon, Vodafone, […]

The post Flipping the Script: Exploring New Paradigms in Secure Mobile Computing appeared first on Purism.

22 August, 2024 12:53AM by Randy Siegel

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E312 Nuno Do Carmo

A Portuguese man, a Swiss exiled in Portugal and a Portuguese exiled in Switzerland walk into a bar: is Microsoft an open-source company? How do you pronounce SUSE correctly? Suza? Suze? Suzy? Suzette? The answer to this and many other questions comes from Nuno do Carmo (a.k.a. Corsário), a multifaceted tech writer (Kubernetes and other cloud-native applications), Microsoft MVP (Most Valuable Professional), and ambassador for the CNCF (Cloud Native Computing Foundation) and for CIVO. He is an expert in WSL (Windows Subsystem for Linux) and works at SUSE, a GNU/Linux operating system very popular in the enterprise sector.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different tiers depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT License. (https://creativecommons.org/licenses/by/4.0/). The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

22 August, 2024 12:00AM

August 21, 2024

Ubuntu Blog: How Ubuntu keeps you secure with KEV prioritisation

The Known Exploited Vulnerabilities Catalog (KEV) is a database published by the US Cybersecurity and Infrastructure Security Agency (CISA) that serves as a reference to help organisations better manage vulnerabilities and keep pace with threat activity.

Since its first publication in 2021, it has gone beyond its US federal agency scope and has been adopted by various organisations across the globe as guidance for their vulnerability management prioritisation frameworks.

The reasons for this are twofold: effective vulnerability management, and the way KEV entries are curated.

What is vulnerability management?

Vulnerability management is a continuous process of keeping systems up to date against a constant stream of emerging threats. Deciding what to patch, and how, means working out which vulnerabilities pose the greatest risk and which patches lower that risk, then repeating this across all vulnerabilities of interest until a patching order can be agreed.

As security research continues to improve, modern operations face an ever-increasing number of vulnerabilities, which in turn creates prioritisation challenges. For example, the Ubuntu Security Engineering team currently tracks 16,898 active CVEs, with more being added each day. Every new CVE can cause a shift in priorities, but it takes time to analyse the information and make those changes. That’s where the KEV can help.

How KEV tracks vulnerabilities 

While the KEV represents a small subset of all tracked vulnerabilities, to be included in the catalogue a CVE number must have been assigned (so the vulnerability information is known) and, more importantly, evidence of active exploitation must exist. This means that threat actors are actively pursuing that vulnerability, and since cyber attackers know no physical borders, this should raise the risk associated with the vulnerability in question, bumping it up in priority. These indicators are tracked across a wide chronological span, so you are as likely to find the latest vulnerability from 2024 as one from 2007 that has suddenly become popular again.

In addition to that, the vulnerabilities contained in the KEV carry a patching mandate for US government agencies that follow CISA’s Binding Operational Directive (BOD) 22-01, so they are only added when a remediation strategy exists, be it a patch, a configuration change, or even a version update.

Companies using the KEV as a reference can then see when a vulnerability shows up in the catalogue, know that remediation exists, and prioritise it above all else.
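CISA publishes the catalogue as a machine-readable JSON feed, which makes this kind of prioritisation easy to automate. A minimal sketch: filter the catalogue for recently added entries. The field names (cveID, dateAdded) follow the catalogue's published schema, but the sample entries below are made up for illustration:

```python
from datetime import date

def added_since(catalog: dict, since: date) -> list:
    """Return KEV entries whose dateAdded is on or after `since`."""
    return [
        v for v in catalog.get("vulnerabilities", [])
        if date.fromisoformat(v["dateAdded"]) >= since
    ]

# Illustrative sample in the catalogue's shape (not real entries):
sample = {
    "vulnerabilities": [
        {"cveID": "CVE-2024-0001", "dateAdded": "2024-08-01"},
        {"cveID": "CVE-2007-0002", "dateAdded": "2023-01-15"},
    ]
}

recent = added_since(sample, date(2024, 1, 1))
print([v["cveID"] for v in recent])  # → ['CVE-2024-0001']
```

In practice the catalog dict would be fetched from CISA's published feed rather than hard-coded, and the resulting entries fed into the patching queue.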

How can Canonical help you with this process?

By having a commitment to prioritise vulnerabilities contained in the KEV, Ubuntu is placed in a strong position to help organisations meet compliance requirements.

The Security Engineering team tracks all KEV entries, prioritises them as High (or above) to ensure they are worked on in a timely fashion, and releases a fix where possible.

Every Ubuntu LTS comes with security fixes for the core operating system (around 2,500 packages) for five years. But the whole ecosystem of software available with Ubuntu is far wider – over 30,000 packages, covering applications, databases and runtimes. Ubuntu Pro is a subscription on top of every Ubuntu LTS that provides security coverage for all of this software, which matches up directly with the CE requirements.  Learn more about Ubuntu Pro in this FAQ.

Are you using KEV in your vulnerability management? Talk to us so we can help you with Ubuntu Pro.

To learn more about open source vulnerability management, check out our introductory guide.

21 August, 2024 06:30PM

hackergotchi for GreenboneOS

GreenboneOS

Save the date: it-sa 2024

On October 22, the “it-sa Expo&Congress” will open its doors again in Nuremberg. The trade fair is now one of the largest platforms for IT security solutions worldwide. Last year, it set new records with 19,449 trade visitors from 55 countries and 795 exhibitors from 30 countries. This year, Greenbone will be at the ADN partner stand in Hall 6, booth 6-346. Our CEO Jan-Oliver Wagner will be giving a live presentation at the Forum 6-B on the opening day (11:00 – 11:15).

  • Opening hours: 
    • Tuesday, October 22, 2024: 09:00 – 18:00
    • Wednesday, October 23, 2024: 09:00 – 18:00
    • Thursday, October 24, 2024: 09:00 – 17:00
  • Location: Nuremberg, Exhibition Center 
  • Information: Tickets, exhibitors, hall plan

Visit us at our booth or schedule an appointment with the security experts from Greenbone. We look forward to seeing you at the fair!

21 August, 2024 02:15PM by Greenbone AG