May 27, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Magazine #181

This month:
* Command & Conquer
* How-To : Python, Blender and LaTeX
* Graphics : Inkscape
* Everyday Ubuntu : KDE Science Pt.2
* Micro This Micro That
* Review : Ubuntu 22.04
* Review : Puppy Linux Slacko 7
* My Story : My Journey To Ubuntu 22.04
* Ubports Touch
* Ubuntu Games : The Darkside Detective
plus: News, The Daily Waddle, Q&A, and more.

Get it while it’s hot:

27 May, 2022 04:28PM

Ubuntu Blog: Canonical attends World Data Summit 2022

Canonical, the publisher of Ubuntu, joined the World Data Summit held in Amsterdam, the Netherlands, on May 18-20, 2022. Michelle Anne Tabirao, Data Solutions Product Manager, participated as a speaker in a technical workshop and a panel discussion.

Discussing the latest trends in data

World Data Summit is a three-day conference covering multiple vital topics such as data management, data analytics, AI, future technologies, and more. In addition, the event shares best practices for developing an analytical model to drive business growth and optimisation.

During this edition, experts discussed multiple aspects of data analysis, visualisation and interpretability. In addition, the conference had various sessions on customer analytics, technical deep dives and panel discussions.


Data solutions on any cloud

Michelle from Canonical shared perspectives on data solutions, highlighting cloud-native computing, open source database applications, and Canonical database operators – and the Juju Charmed Operator Framework. These were the key highlights from the talk:

  • There is an increasing trend of running containerised applications, e.g. on Kubernetes, in production.
  • Cloud-native technologies empower organisations to build and run scalable applications in public, private and hybrid environments.
  • Given organisations’ database requirements, running cloud-native database applications on Kubernetes is becoming a trend.
  • Organisations should consider the operational work that needs support when running a database in Kubernetes.
  • Operators manage database and Kubernetes primitives to simplify deployment and automate the apps’ operations.

Reducing costs with AI


Michelle also participated in a panel discussion on reducing company costs using AI. Camila Manera, the Chief Data Officer of LDP, moderated the session. In addition, other representatives from organisations such as Tawuniya, Boston University, and Intel shared their perspectives.

Many companies and organisations use AI models to reduce costs and increase revenue: for example, by minimising errors, improving production output, and improving organisational decision-making. The resulting cost reductions show up in both long-term and short-term investments. More takeaways from the event: there are multiple perspectives to consider when delivering AI solutions – the cost of having AI, the cost of running AI, and the margin between the investment and return.

To reduce costs, the industry needs to improve data literacy and redefine what good looks like – for ourselves, our firms, and the societies we live in.

The fundamentals are understanding the value and the problems we are trying to resolve through technology.

There are open source tools and innovations that organisations can use to build, run and innovate on AI projects, e.g. TensorFlow, Python, Pandas and Kubeflow.

Stay tuned

Follow the World Data Summit organisation as they prepare for the 2023 event! Next year, we hope to see you at this gathering for data professionals and decision-makers.

World Data Summit is organised by Growth Innovation Agility Global Group (GIA).

27 May, 2022 01:06PM

Ubuntu Blog: Embedded Linux development on Ubuntu – Part II

Welcome to Part II of this three-part mini-series on embedded Linux development on Ubuntu. In Part I, we set the stage for the remainder of the series and gave an overview of snaps, the packaging format at the heart of embedded devices running Ubuntu.

Snaps are a secure, confined, dependency-free, cross-platform Linux packaging format. Software publishers often want to manage their application components using containers. Whereas one can achieve this with various runtimes, the Snap ecosystem provides a security-focused approach to containerisation with strict privilege and capability separation between containers. If you missed it, head over to Part I to review the role of snaps in embedded Linux development.  

If you are already familiar with snaps and do not wish to refresh your memory, keep reading. 

Developers can build containerised, isolated snap applications on their machines using Snapcraft, and distribute them to users anywhere via the global, public Snap Store. Snapcraft and the Snap Store are the focus of this blog post.

Ubuntu Core is embedded Linux 2.0. Building upon Linux traditions, Ubuntu Core provides a sharp focus on predictability, reliability and security while at the same time enabling developer freedom and control.

Without much further ado, let’s dive straight in.

Snapcraft for embedded Linux development

Snapcraft is the framework and command-line packaging tool used to simplify embedded Linux development. Snapcraft builds and publishes snaps by orchestrating disparate components and build systems into one cohesive distributable package. Snapcraft helps you assemble a whole project in a single tree out of many pieces, including source or existing debs. 

When doing embedded Linux development via Snapcraft, you can bundle components and build systems directly into your application for a fully orchestrated package. Snapcraft is extensible and able to understand other build systems and software. Continuous development and integration of new plugins like Java, Python, Catkin (ROS), Go, CMake, qmake and make, enable developers to leverage the latest technologies for their software. 


Furthermore, Snapcraft improves embedded Linux development by easily integrating into existing CI systems. After receiving a PR on GitHub, you can test it with e.g. Travis or another CI system, and the code lands on your GitHub master. Seamless integration with Travis, Jenkins, GitLab and TeamCity can generate automatic snap builds on every Git commit.  

Snap: the format for embedded Linux development

Snapcraft levels the embedded Linux development playing field: any developer can build a snap by putting their software into a YAML file.

The YAML format used to define applications is simple and declarative. A snap itself is a compressed filesystem with a single metadata file describing the security profile and desired snap integrations. The format uses only three stanzas to declare the metadata, confinement, and build definition, ensuring the security of the system a snap is running on and enabling the software to behave as expected.

Such a structure facilitates developers to extend a snap by adding shell commands and plugins for popular build systems and languages in the YAML. Developers can further bundle all dependencies inside the snap for predictable behaviour and make artefacts like databases more accessible and secure. 
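To make the structure concrete, here is a minimal, hypothetical snapcraft.yaml sketch – the snap name, part and command below are placeholders, not taken from the original post:

```yaml
# Hypothetical snapcraft.yaml illustrating the metadata, confinement
# and build stanzas of a snap (all names are placeholders).
name: hello-embedded
base: core22
version: '0.1'
summary: A minimal example snap
description: |
  Illustrates how a snap declares its metadata, confinement and build parts.

confinement: strict
grade: stable

parts:
  hello:
    plugin: cmake
    source: .

apps:
  hello-embedded:
    command: bin/hello
    plugs: [network]
```

Running snapcraft in the project directory would then assemble this definition into a distributable .snap package.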

Snap Store for embedded Linux development

In Part I, we overviewed the pain points of finding new software for embedded Linux devices. 

The key takeaway was that discovering new software on Linux is difficult, as publishers need to be on the hunt for PPAs and GitHub repositories with daily builds of all kinds of new software.

Snaps ease the process via the Snap Store, a central repository where developers publish their apps and users discover new software. It’s a universal app store for any Linux distribution to upload, browse, install, distribute and deploy applications in the cloud, on desktop or to IoT devices.  The free enterprise-level capabilities of the Store solve the traditional software distribution challenges on Linux.

Managing software and updates across a huge number of devices can be challenging, especially if subsets of hardware require different applications to run on them. Delivering automatic updates and handling software across machines is one of the key features of the IoT App Store. This custom, enterprise store allows you to cherry-pick the optimal combination of applications you want your devices to use, including software published in the global Snap Store and custom software developed internally for a specific use case.


Community-backed snaps for every use case

Besides being a repository of snaps backed by a large and growing Linux developer community, the Snap Store has additional features. For one, developers can push updates to their apps at their own cadence, without waiting for distribution maintainers to catch up. Also, the store can host multiple versions of the same snap at different risk levels, with users picking the one they prefer.

Snaps use channels to represent software maturity, enabling end-users to subscribe and switch between a track/risk/branch scheme. Embedded Linux devices can track software across their chosen channel and will automatically update to the latest revision of that software. The release channels strengthen embedded Linux development by promoting a standardised way of tracking deployments and enforcing rigorous iterative testing and stable releases. 

With Build from GitHub, a snap is rebuilt whenever a change is merged into the main branch of its respective GitHub repository. When a build successfully completes, it’s automatically released to a snap’s edge channel.

Continuous software delivery with snaps

Developers can use tracks to publish multiple supported snap releases under the same snap name (for instance, a released snap may be on the “latest” track for external users and an “insider” track for an internal QA team). 

Risk levels represent a progressive potential trade-off between stability and new features. The Store modernises embedded Linux development by promoting snaps between Edge, Beta, Candidate and Stable channels, facilitating continuous software delivery. For instance, publishers can request users to participate in beta testing and, once the beta programme is over, move consumers back to stable, while users can pick which maturity level is most appealing to them.

And finally, branches are optional and hold temporary releases intended to help with bug-fixing. 
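Putting the pieces together, a full channel name follows the track/risk/branch scheme described above. The small helper below is hypothetical (it is not part of snapd) and simply shows how such a string decomposes:

```shell
# Decompose a snap channel string into track, risk and branch.
# (Illustrative helper, not part of snapd.)
parse_channel() {
  echo "$1" | {
    IFS=/ read -r track risk branch
    echo "track=$track risk=$risk branch=${branch:-none}"
  }
}

parse_channel "latest/stable/jammy-release"   # → track=latest risk=stable branch=jammy-release
parse_channel "latest/edge"                   # → track=latest risk=edge branch=none
```

Switching an installed snap between channels is then a one-liner, e.g. sudo snap refresh firefox --channel=latest/beta.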

Final considerations for embedded Linux development

Snapcraft is a powerful and easy-to-use command-line tool for building snaps. It helps embedded Linux developers reach a wider audience by building and publishing snaps on the Snap Store.

Snapcraft raises the bar for embedded Linux development by using channels, tracks and branches to control updates and releases, and secures it by building and debugging snaps within a confined environment. Snapcraft also simplifies embedded Linux development in that it uses a single declarative YAML file to define a snap. Developers who previously created packages for Linux distros will find it similar to RPM spec files, Debian control files or Arch Linux PKGBUILD files, with one difference: it is much simpler.

Furthermore, the build and publish life cycle can be automated by integrating Snapcraft into an existing CI/CD pipeline. If your embedded Linux project does not already have a CI/CD process, you can connect your GitHub projects directly to our free build service. It will build a new snap on every commit and publish it to the edge channel in the Snap Store.
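As an illustration, if a project lives on GitHub, a workflow along these lines can rebuild the snap on every push. This sketch assumes the snapcore/action-build GitHub Action; treat the file path and version tags as placeholders:

```yaml
# .github/workflows/snap.yaml (illustrative)
name: snap
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: snapcore/action-build@v1
```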

Now that you have a better understanding of snaps, Snapcraft and the Snap Store, jump to the last blog of this series to learn about the final, revolutionary step in the world of Linux. In the concluding chapter, we will connect all the concepts mentioned throughout this series and introduce Ubuntu Core. The combination of a hardened OS, snap packages and Store, gives developers a platform for secure, open-source embedded software development and deployment.

Are you evaluating Ubuntu Linux for your embedded device?

Get in touch

Further reading for embedded Linux development

Why is Linux the OS of choice for embedded systems? Check out the official guide to Linux for embedded applications in whitepaper or webinar form.

Interested in a detailed comparison of Yocto and Ubuntu Core? Watch the “Yocto or Ubuntu Core for your embedded Linux project?” webinar.

Did you hear the news? Real-time Ubuntu 22.04 LTS is now available. Check out the latest webinar on real-time Linux to find out more.

Do you have a question, feedback, or news worth sharing? Join the conversation on IoT Discourse to discuss everything related to the Internet of Things and tightly connected, embedded devices.

27 May, 2022 08:30AM

Ubuntu Blog: New Active Directory integration features in Ubuntu 22.04 – FAQ

Linux Active Directory integration is one of the most popular and requested topics from both the community and our clients. On May 17 we delivered a webinar on the new AD integration features introduced with 22.04 (now available on demand) and following that we received an overwhelming number of questions.

In this blog post, we would like to directly address the most frequent ones.

New Active Directory Integration features webinar agenda

What is ADsys and how is it different from SSSD?

SSSD is an upstream system service that manages access to remote directory services and authentication mechanisms, including, but not limited to, Active Directory.

ADsys is the new, Ubuntu-specific Active Directory client. ADsys extends SSSD functionality by adding the following:

  • Native Group Policy Object support for both machine and user policies targeting dconf settings on the client machine
  • Privilege management, allowing the possibility to grant or revoke superuser privileges for the default local user, and Active Directory users and groups
  • Custom scripts execution, giving the possibility to schedule shell scripts to be executed at startup, shutdown, login and logout
  • ADMX and ADML administrative templates for all supported versions of Ubuntu

Which Ubuntu versions does ADsys support?

ADsys is supported on 20.04.2+, 22.04 and future desktop releases.

Does ADsys work with Ubuntu Server?

Yes, it does; however, gsettings are not available on Ubuntu Server by default.

Once you install the package you can use the ADsys functionalities by following the same steps included in the documentation.

Does Canonical offer a cloud management system for Ubuntu?

Yes, Canonical offers Landscape, a management and monitoring solution that works for both server and desktop. Landscape is not intended to be an AD replacement, but rather to complement it by adding Linux-specific functionalities, like the ability to configure mirrors.

You can find more information about Landscape on its dedicated product page.

With ADsys, as well as future enterprise products, we are trying to extend Ubuntu compatibility with popular enterprise management and compliance tools, allowing IT administrators to reuse the same knowledge, tools and processes they have developed for Windows to manage their Ubuntu fleet.

What is required to enable privilege escalation and remote script execution?

The ADsys GPO functionality can be used by everyone free of charge; however, you need an Ubuntu Advantage Desktop token to use the privilege escalation and remote script execution functionalities.

The differences between the free and paid tiers are summarised in the table below:

Comparison between free and premium features

Can we use PowerShell scripts in ADsys?

The ADsys remote script execution feature supports all binaries that can be executed on Ubuntu. This means that PowerShell scripts can be executed if the related snap is installed on the machine.

You can install PowerShell on Ubuntu with the snap install powershell --classic command.

Is Samba/Winbind supported?

No, Winbind is not supported as ADsys requires SSSD. We currently have no plans to add Winbind support.

If your machine has Samba shares attached, you can reference files in these directories (e.g. a wallpaper).

The scripts execution feature requires you to make the scripts available in your Active Directory SYSVOL Samba share.

Is SSSD required to use ADsys?

Yes, SSSD is required as machines need to be joined to the domain for ADsys to work.

Can the sudo permissions be tuned to restrict access to a specific set of commands?

Not at the moment. The privilege escalation feature of ADsys allows you to disable local administrators and add/remove sudo privileges to Active Directory users and groups.

Please contact us if your organization has a specific use case you would like to discuss.

Does the machine need to be joined to AD before enabling ADsys?

Yes, the machines need to be joined through SSSD. You can join a machine either during the initial installer flow or at any time during the life of the machine.

You can find a detailed description of the steps required to join a machine to a domain in our Active Directory integration whitepaper.

How can you map file shares and printers?

Currently the best way to map file shares and printers is through a logon shell script. We are looking closely at the possibility of performing this action through GPOs and we will consider adding it to the product backlog based on customer interest.

Please contact us if your organization has a specific use case you would like to discuss.

Can you push certificates through AD GPOs?

Currently you cannot push certificates through GPOs. We are looking closely at the feature and will consider adding it to the product backlog based on customer interest.

Please contact us if your organization has a specific use case you would like to discuss.

Does Ubuntu support Azure AD?

ADsys and SSSD are currently clients targeted at Active Directory Domain Services and they do not support Azure AD.

Azure AD authentication is a very requested feature and it is in our future product roadmap.

Are there any AD schema changes required?

No schema changes are required to use the new ADsys features, however you need to import the relevant administrative templates for your distribution.

The ADsys client has a command to download the correct administrative templates automatically, alternatively you can find them on the relevant project GitHub page.

Is there a GUI to add an Ubuntu machine to a domain?

The installer flow provides a graphical user interface that guides you through the Active Directory configuration steps.

Ubuntu machines can also be joined to a domain after installation; however, no UI is available for this at the moment.

Are roaming profiles supported?

Roaming profiles are not supported at this point. We are looking closely at the feature and will consider adding it to the product backlog based on customer interest.

Please contact us if your organization has a specific use case you would like to discuss.

Can you map a unified home directory?

Yes, this can be done using a logon shell script.

Can you disable USB auto mounting?

Yes, ADsys allows you to set GPOs that enforce default or custom dconf settings on the client.

After you install the administrative templates included in the tool, you can disable USB auto mounting by setting the desktop/media-handling/automount key to false.
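For reference, outside of Active Directory the same setting corresponds to a dconf key on the client. A system-wide dconf keyfile would look roughly like this (the file path is illustrative):

```ini
# /etc/dconf/db/local.d/00-media-automount (illustrative path)
[org/gnome/desktop/media-handling]
automount=false
```

After writing such a file, running sudo dconf update rebuilds the local database so the setting takes effect.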


27 May, 2022 07:22AM

May 26, 2022

hackergotchi for Purism PureOS

Purism PureOS

Qubes 4.1 Now Available for Pre-Install

I’m convinced that the Librem 14 is the best laptop for Qubes and our customers seem to agree. Originally, customers who selected Qubes with their order would have to install it themselves with a USB thumb drive we added to the order. More recently we started offering Qubes as a pre-installed option, all set up […]

The post Qubes 4.1 Now Available for Pre-Install appeared first on Purism.

26 May, 2022 03:39PM by Kyle Rankin

Ubuntu developers

Ubuntu Blog: How are we improving Firefox snap performance? Part 1

Photo by Tim Mossholder, Unsplash

Ubuntu Desktop aims to deliver an open source operating system that is available to everyone and just works for whatever they need. With Ubuntu 22.04 LTS, we believe we’ve come closer than ever to achieving that goal. However, as always, there are still a number of areas we want to improve to deliver the highest quality user experience. One of those areas is our default browser, Firefox, which transitioned to being distributed as a snap with Ubuntu 21.10.

To understand this decision, I want to focus on the ‘just works’ part of my opening statement. The Firefox snap offers a number of benefits to daily users of Ubuntu as well as a range of other Linux distributions. It improves security, delivers cross-release compatibility and shortens the time for improvements from Mozilla to get into the hands of users.

Currently, that decision has trade-offs when it comes to performance, most notably in Firefox’s first launch after a system reboot. Part of this is due to the inherent nature of sandboxing; however, we feel there is still significant opportunity to improve start-up times across the board. We want to share the results of some of those investigations today, as well as highlight some recent meaningful changes in this area.

This is an ongoing journey, and this blog article will be the first in a series as we update you on our progress. 

Ultimately, the real test will be how you, the user, experience these updates as they land. At the end of this post, we’ve put together some tools to help you keep track of the snap performance on your own machines. If you still have questions you can also join us tomorrow for our monthly Ubuntu Desktop Team Indaba, where this topic will be our main focus.

Let’s dive right in.

Why did we choose to make Firefox a snap?

This decision was made in collaboration with Mozilla based on the quality of life improvements that snap delivers:

  • Confinement: snaps add an extra security layer on top of the browser’s already-robust sandboxing mechanism. The browser sandbox protects the browser against malicious code, whilst the snap confinement protects the user from the browser acting in ways that it shouldn’t.
  • Effortless updates: browsers receive frequent updates and, with the snap, users are able to receive security patches from Mozilla more quickly than with other software distribution methods.
  • Authenticity: Whilst Canonical builds the snap, it is published and maintained by Mozilla. This is Firefox straight from the source, directly to users, without the overhead of keeping build dependencies up to date.
  • Cross-release compatibility: If your distro runs snapd, it can run the Firefox snap, from Ubuntu to the official flavours and beyond. It also means that older releases get the latest updates without additional maintenance.

Let’s talk about performance

We can divide our performance analysis into three specific areas:

  • Cold start performance: This refers to the time taken when Firefox is launched for the first time after a system restart (or, in the worst-case scenario, after a completely fresh install). This is where the Firefox snap performance is most noticeable and is our primary area of focus. Whilst a cold start will be the least frequent action for typical users, first impressions matter!
  • Warm start performance: This is Firefox startup on subsequent runs. Since the cold start caches a lot of data, this is a lot faster and much closer to our expected performance.
  • Runtime performance: This represents performance during active usage of Firefox whilst it’s running. We’ve recently introduced some fixes that have significantly improved this experience, detailed later in the post.

Our current focus is on the cold start performance and we’ve been following a similar approach to our work on the Chromium snap to isolate the root causes.

Cold Start Performance

What do we mean by cold start?

When we talk about a “cold start” we mean starting Firefox without any libraries loaded into memory: essentially, running it right after a reboot, where the profile already exists on disk.

What do we mean by cold – purge?

A “cold start after purge” means starting with no caches, nothing in memory, and no profile created. When an app initialises, various things are created in memory and on disk, so a “cold – purge” is essentially the worst-case scenario. This is basically what you have after a fresh install of Ubuntu, but we can simulate it by purging the snap and rebooting.

In practice this is done by running:

sudo snap remove --purge firefox 

This removes the snap and all snap data, including your Firefox profile and caches.

Reboot and login. Then run:

sudo snap install firefox

Click the Firefox icon and wait for the window to appear to get your cold – purge time.

Example Benchmarks

                    2019 Dell XPS 13        Thinkpad X240           Pi 400 (SD card)
                    stable – rev 1377 (s)   stable – rev 1377 (s)   stable – rev 1381 (s)
Cold – Purge        7.67                    15.07                   38.23


With these benchmarks established, we started profiling the cold – purge start under various configurations, including:

  • Different releases of the Firefox snap
  • The unconfined (or ‘unsnapped’) snap
  • The unsquashed snap
  • Different AppArmor profiles
  • Both the Firefox deb and tarball
  • A range of hardware configurations, from 4GB Raspberry Pi 4s to laptops and high-end PCs with a mixture of Intel, AMD and NVIDIA GPUs.

From this we were able to measure and compare:

  • Where the CPU was being used
  • Active threads
  • Disk I/O
  • Files created in the cache directory
  • GPU acceleration


Based on these tests, we identified a number of culprits for the slower startup (in order of estimated impact).

Squashfs seeking

The snap is packed into a compressed squashfs which can create a bottleneck on more resource-constrained systems like the Raspberry Pi. For Firefox, which is quite heavy on the I/O during startup, this creates noticeable overhead as it searches for files in the squashfs. We’re investigating improving the ordering of content in the squashfs to improve seek times.

Software rendering

Something we identified whilst testing on the Raspberry Pi was that the current Firefox snap fails to determine which GPU driver it should use in its glxtest program. This causes Firefox to start up with the software renderer, adding significant overhead to shader compilation time. This was also observed on AMD GPUs.

Good news! A fix for this has now landed in upstream snapd.

Extension handling

Firefox copies all extensions bundled with the firefox package to a user-specific directory upon first start for each user. This is done by going through each extension and copying it block-by-block. In the snap, we bundle 98 language packs, which unfortunately take quite a while to copy into the user directory, especially because the language packs are read from a compressed squashfs image.

Font and icon handling

When Firefox is confined, significant time is spent discovering all possible icon themes, font configurations and available fonts. This is not done when running unconfined, where Firefox simply loads what it needs instead. 

These four issues are our current areas of focus when it comes to cold starts. We will add tracking bugs and updates to these sections going forward so you can follow our progress.

Additional improvements

Future blogs will dive further into other areas of Firefox snap performance, but in the meantime, we wanted to share two additional updates.

Improved runtime performance

With the release of Firefox 100.0 we enabled PGO (profile-guided optimisation) and LTO (link-time optimisation). This has made significant improvements to runtime performance and is available in the current release. It also has some impact on startup times.

Native messaging

Beyond performance, native messaging has been our most significant outstanding issue to resolve. Native messaging enables a number of key features such as 2FA devices and GNOME extensions.

We have implemented a new XDG Desktop Portal to support this which is already in 22.04 as a distro patch that we are working to upstream. This portal is also useful for other packaging systems like flatpak as well as snap.

The integration with Firefox is also currently in review and expected to land next month.

If you are experiencing any other bugs or issues, please report them on the Mozilla meta-bug.

Create your own benchmarks

Whilst we’ve attempted to portray our performance improvements in this post as transparently and fairly as possible, we know that there’ll always be folks who want to see the data for themselves, on their own hardware. To that end, we’ve collected a suite of options for users to run their own benchmarks.

Our very own Marco ‘3v1n0‘ Trevisan has created a handy GNOME extension, the Applications Startup Time Measure, to avoid you wearing out your stopwatch.

To get it you first need to install the GNOME Extension Manager:

sudo apt install gnome-shell-extension-manager

Then open extension-manager, navigate to “Browse” and search for “startup”:


Choose ‘Install’ and that’s it. Now every application you launch (limited to launching via application icons) will display how long the application took to go from start to shown on-screen.


For side by side comparisons with the original snap that launched with Ubuntu 22.04 you can also switch to this channel:

sudo snap refresh firefox --channel=latest/stable/jammy-release

Finally, if you want to go more in-depth, you can always add the Firefox Profiler to Firefox and share your insights.


Let us know your results in the ‘Known issues with the Firefox Snap?’ Discourse thread, and keep us updated over time as new improvements roll out.

Some tips on reporting benchmarks

  • List the Firefox version being tested, and add comparisons if needed to help with deltas
  • List your OS and release version (we’d love numbers from a wide range of distributions)
  • Provide hardware info to help us gauge the specs of the machine
  • Split by cold (and cold – purge) vs warm starts
  • Take multiple readings to get a sense of the average
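For that last tip, a small shell helper makes averaging multiple runs easy (the sample readings below are made-up numbers, not measurements from this post):

```shell
# Average a list of startup-time readings in seconds.
avg_readings() {
  echo "$@" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.2f\n", s / NF }'
}

avg_readings 7.67 7.41 7.89 7.52   # → 7.62
```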

That’s all for now! Check back in soon for part 2 and an update on our ongoing performance improvements.

26 May, 2022 02:28PM

Lubuntu Blog: Lubuntu Kinetic Kudu 22.10 Artwork Contest

The Lubuntu Team is pleased to announce we are running a Kinetic Kudu artwork competition, giving you, our community, the chance to submit, and get your favorite wallpapers for both the desktop and the greeter/login screen (SDDM) included in the Lubuntu 22.10 release. Show Your Artwork To enter, simply post your image into this thread […]

26 May, 2022 10:57AM

hackergotchi for Qubes


Fedora 34 approaching EOL; Fedora 35 templates available

Fedora 34 is scheduled to reach EOL (end-of-life) on 2022-06-07, and new Fedora 35 templates are now available for both Qubes 4.0 and 4.1.

We strongly recommend that all Qubes users upgrade their Fedora 34 templates and standalones to Fedora 35 before Fedora 34 reaches EOL.

We provide fresh Fedora 35 template packages through the official Qubes repositories, which you can install in dom0 by following the standard installation instructions. Alternatively, we also provide step-by-step instructions for performing an in-place upgrade of an existing Fedora template. After upgrading your templates, please remember to switch all qubes that were using the old template to use the new one.
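The "switch all qubes" step is easy to miss one qube on. As a sketch of how you might list the qubes still pointing at the old template, assuming `NAME|TEMPLATE` records such as those produced by `qvm-ls --raw-data --fields NAME,TEMPLATE` (the record format and the sample data here are our own illustration, not Qubes documentation):

```shell
# List qubes whose template matches $1, from "NAME|TEMPLATE" records on stdin.
qubes_on_template() {
  awk -F'|' -v t="$1" '$2 == t { print $1 }'
}

# Sample data; on a real system you would pipe in qvm-ls output instead.
printf 'work|fedora-34\npersonal|fedora-35\nvault|fedora-34\n' |
  qubes_on_template fedora-34
```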

For a complete list of template releases that are supported for your specific Qubes release, see our supported template releases.

Please note that no user action is required regarding the OS version in dom0. For details, please see our note on dom0 and EOL.

26 May, 2022 12:00AM

May 25, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Harald Sitter: DrKonqi ❤️ coredumpd

Get some popcorn and strap in for a long one! I shall delight you with some insights into crash handling and all that unicorn sparkle material.

Since Plasma 5.24 DrKonqi, Plasma’s infamous crash reporter, has gained support to route crashes through coredumpd and it is amazing – albeit a bit unused. That is why I’m telling you about it now because it’s matured a bit and is even more amazing – albeit still unused, I hope that will change.

To explain what any of this does I have to explain some basics first, so we are on the same page…

Most applications made by KDE will generally rely on KCrash, a KDE framework that implements crash handling, to, well, handle crashes. The way this works depends a bit on the operating system but one way or another when an application encounters a fault it first stops to think for a moment, about the meaning of life and whatever else, we call that “catching the crash”, during that time frame we can apply further diagnostics to help later figure out what went wrong. On POSIX systems specifically, we generate a backtrace and send that off to our bugzilla for handling by a developer – that is in essence the job of DrKonqi.

Currently DrKonqi operates in a mode of operation generally dubbed “just-in-time debugging”. When a crash occurs: KCrash immediately starts DrKonqi, DrKonqi attaches GDB to the still running process, GDB creates a backtrace, and then DrKonqi sends the trace along with metadata to bugzilla.

Just-in-time debugging is often useful on developer machines because you can easily switch to interactive debugging and also have a more complete picture of the environmental system state. For user systems it is a bit awkward though. You may not have time to deal with the report right now, you may have no internet connection, indeed the crash may be impossible to trace because of technical complications occurring during just-in-time debugging because of how POSIX signals work (threads continue running :O), etc.

In short: just-in-time really shouldn’t be the default.

Enter coredumpd.

Coredumpd is part of systemd and acts as the kernel's core handler. Ah, that’s a mouthful again. Let’s backtrace (pun intended)… earlier when I was talking about KCrash I only told part of the story. When a fault occurs, it doesn’t necessarily mean that the application has to crash; it could also exit cleanly. It is only when the application takes no further action to alleviate the problem that the Linux kernel will jump in and do some rudimentary crash handling, forcefully. Very rudimentary indeed: it simply takes the memory state of the process and dumps it into a file. This is then aptly called a core dump. It’s kind of like a snapshot of the state of the process when the fault occurred and allows for debugging after the fact. Now things get interesting, don’t they? 🙂

So… KCrash can simply do nothing and let the Linux kernel do the work, and the Linux kernel can also be lazy and delegate the work to a so-called core handler, an application that handles the core dumping. Well, here we are. That core handler can be coredumpd, making it the effective crash handler.

What’s the point you ask? — We get to be lazy!

Also, core dumping has one huge advantage that also is its disadvantage (depending on how you look at it): when a core dumps, the process is no longer running. When backtracing a core dump you are looking at a snapshot of the past, not a still running process. That means you can deal with crashes now or in 5 minutes or in 10 hours. So long as the core dump is available on disk you can trace the cause of the crash. This is further improved by coredumpd also storing a whole lot of metadata in journald. All put together it allows us to run drkonqi after-the-fact, instead of just-in-time. Amazing! I’m sure you will agree.
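One way to see whether your system already routes cores through coredumpd is the kernel's core_pattern: when systemd-coredump is the registered handler, `/proc/sys/kernel/core_pattern` starts with `|` and pipes into the systemd-coredump binary. A small sketch, with an illustrative sample pattern:

```shell
# Detect whether a core_pattern value pipes cores into systemd's coredumpd.
is_coredumpd() {
  case "$1" in
    "|"*systemd-coredump*) echo yes ;;
    *) echo no ;;
  esac
}

# On a live system you would call:
#   is_coredumpd "$(cat /proc/sys/kernel/core_pattern)"
is_coredumpd '|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'   # prints yes
```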

For the user everything looks the same, but under the hood we’ve gotten rid of various race conditions and gotten crash persistence across reboots for free!

Among other things this gives us the ability to look at past crashes. A GUI for which will be included in Plasma 5.25. Future plans also include the ability to file bug reports long after the fact.

Inner Workings

The way this works behind the scenes is somewhat complicated but should be easy enough to follow:

  • The application produces a fault
  • KCrash writes KCrash-specific metadata into a file on disk and doesn’t exit
  • The kernel issues a core dump via coredumpd
  • The systemd unit coredump@ starts
  • At the same time drkonqi-coredump-processor@ starts
  • The processor@ waits for coredump@ to finish its task of dumping the core
  • The processor@ starts drkonqi-coredump-launcher@ in user scope
  • launcher@ starts DrKonqi with the same arguments as though it had been started just-in-time
  • DrKonqi assembles all the data to produce a crash report
  • the user is greeted by a crash notification just like just-in-time debugging
  • the entire crash reporting procedure is the same

Use It!

If you are using KDE neon unstable edition, you have already been using coredumpd-based crash reporting for months! You haven’t even noticed, have you? 😉

If not, here’s your chance to join the after-the-fact club of cool kids.


Set the relevant KCrash environment variable in your `/etc/environment` and make sure your distribution has enabled the relevant systemd units accordingly.

25 May, 2022 07:59PM

hackergotchi for Purism PureOS

Purism PureOS

Introducing AweSIM, Simple Plus and SIMple Plans for Securing Your Phone Data

Protect your personal data with AweSIM, a privacy-focused cellular service. Get started on Purism's cellular plans with a Librem 5 phone or any unlocked GSM phone.

The post Introducing AweSIM, Simple Plus and SIMple Plans for Securing Your Phone Data appeared first on Purism.

25 May, 2022 06:00PM by Purism

May 24, 2022

Purism Launches SIMple Plus for Data Privacy

For those looking for a privacy-focused cellular service in the United States, Purism has launched another option in its suite of privacy-first cellular plans. With other big telecom providers, phone data does not stay private; it’s collected, linked with a person’s identity, and sold to advertisers. With Purism’s cellular services users can get peace of mind and protect […]

The post Purism Launches SIMple Plus for Data Privacy appeared first on Purism.

24 May, 2022 03:10PM by Purism

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.0.0-26 Released

This release of Clonezilla live (3.0.0-26) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 2.8.1-12

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2022/May/22).
  • Linux kernel was updated to 5.17.6-1.
  • Partclone was updated to 0.3.20.
  • This release supports APFS (Apple File System) imaging/cloning now.
  • Added LUKS support; a better mechanism than plain dd is implemented. //NOTE// It's recommended to encrypt the image when saving a LUKS device.
  • Update language files de_DE, el_GR, es_ES, fr_FR, ja_JP, hu_HU, pl_PL and sk_SK. Thanks to Michael Vinzenz, Stamatis Mavrogeorgis, Juan Ramón Martínez, Jean-Francois Nifenecker, Akira Yoshiyama, Greg, kris and Ondrej Dzivý Balucha.
  • Add wavemon, memtester, edac-utils, shc and uml-utilities in the live system.
  • Remove s3ql from the live system.
  • A better mechanism was implemented to check GPT/MBR format of a disk. This is a workaround to deal with ChromeOS Flex partition table. Ref:
  • Add the dummy option "-k0" for creating partitions in ocs-sr and ocs-onthefly. It is the same as the default action, just easier for us to explain.
  • Add memtester in the uEFI boot menu.
  • Boot parameter use_os_prober="no" now skips running os-prober. Thanks to Bernard Michaud for this idea. Ref:
  • Add a mechanism to skip the devices list cache: if use_dev_list_cache=no is passed as a boot parameter, the devices list cache won't be used.
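Boot-parameter checks like the one above boil down to scanning the kernel command line. A sketch of how use_dev_list_cache=no could be detected; the sample command line is made up:

```shell
# Report whether the devices list cache should be used, given a kernel
# command line. Prints "no" when use_dev_list_cache=no is present.
cache_enabled() {
  case " $1 " in
    *" use_dev_list_cache=no "*) echo no ;;
    *) echo yes ;;
  esac
}

# On a live system: cache_enabled "$(cat /proc/cmdline)"
cache_enabled "quiet splash use_dev_list_cache=no"   # prints no
```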


  • Show a prompt that ocs-iso & ocs-live-dev cannot be run in a netboot environment. Thanks to Constantino Michailidis. Ref:
  • Fixed the issue that update-efi-nvram-boot-entry created empty boot menu.
  • Program pixz has no option to write to stdout with "-d", so it was replaced by xz, which works the same when using "-T 0". Thanks to nurupo for reporting this issue.

24 May, 2022 01:18PM by Steven Shiau

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Holbach: Mixtape: Jardin De Amor

Check in here for an hour-long trip around the globe and experience a few of my new favorites. Sometimes a little trippy and dreamy, but all very danceable…

  1. JÇÃO & Caracas Dub - Suena la decadente
  2. Los Destellos - Jardin De Amor (David Pacheco & Tribilin Sound Remix)
  3. VON Krup Feat. Alekzal - Fosfenos (jiony Remix)
  4. Ka Moma - Lamba Da Di (Harro Triptrap Edit)
  5. Eartha Kitt - Angelitos Negros (Billy Caso’s Sliced Sky Remix)
  6. hubbabubbaklubb - Mopedbart (Barda Edit)
  7. Crussen - Bufarsveienen
  8. Josephine Baker - La Conga Blicoti (Polo & Pan Remix)
  9. Gene Farris & Kid Enigma - David Copperfield
  10. Viidra - Mitally
  11. Quantic - You Used to Love Me feat. Denitia (Selva Remix)
  12. Dombrance - Taubira (Prins Thomas Diskomiks)

24 May, 2022 01:00PM

May 23, 2022

The Fridge: Ubuntu Weekly Newsletter Issue 736

Welcome to the Ubuntu Weekly Newsletter, Issue 736 for the week of May 15 – 21, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

23 May, 2022 10:28PM

May 22, 2022

hackergotchi for SparkyLinux



There is a new application available for Sparkers: Hypnotix

What is Hypnotix?

Hypnotix is an IPTV streaming application with support for live TV, movies and series. It supports multiple IPTV providers of the following types: M3U URL, Xtream API, and local M3U playlist. Hypnotix does not provide content or TV channels; it is a player application which streams from IPTV providers. By default, Hypnotix is configured with one IPTV provider called Free-TV.
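As a quick illustration of the local M3U playlist format Hypnotix consumes, each channel is an `#EXTINF` metadata line followed by a stream URL. This sketch lists the channel names from such a playlist; the sample playlist (names and URLs) is invented:

```shell
# Print channel names from an M3U playlist on stdin.
# #EXTINF lines end with ",Channel Name", so print the last comma field.
m3u_channels() {
  awk -F',' '/^#EXTINF/ { print $NF }'
}

printf '#EXTM3U\n#EXTINF:-1,Example News\nhttp://example.com/news.m3u8\n' |
  m3u_channels   # prints Example News
```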

Installation (Sparky 6 & 7):

sudo apt update
sudo apt install hypnotix

License: GNU GPL 3


22 May, 2022 09:50AM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Bryan Quigley: Small EInk Phone

Aside (2022-05-22): it's not the same, but there is a renewed push by Pebble creator Eric Migicovsky to show demand for a SmallAndroidPhone. It's currently at about 29,000.

Update 2022-02-26: Only got 12 responses which likely means there isn't that much demand for this product at this time (or it wasn't interesting enough to spread). Here are the results as promised:

What's the most you would be willing to spend on this? 7 - $200, 4 - $400. But that doesn't quite capture it: some wanted even cheaper than $200 (which isn't doable) and others were willing to spend a lot more.

Of the priorities that got at least 2 people agreeing (ignoring rating): 4 - Openness of components, Software Investments; 3 - Better Modem, Headphone Jack, Cheaper Price; 2 - Convergence Capable, Color eInk, Replaceable Battery.

I'd guess about half of the respondents would likely be happy with a PinePhone (Pro) that got better battery life and "Just Works".

End Update.

Would you be interested in crowdfunding a small E Ink Open Phone? If yes, check out the specs and fill out the form below.

If I get 1000 interested people, I'll approach manufacturers. I plan to share the results publicly in either case. I will never share your information with manufacturers but contact you by email if this goes forward.


  • Small sized for 2021 (somewhere between 4.5 - 5.2 inches)
  • E Ink screen (Maybe Color) - battery life over playing videos/games
  • To be shipped with one of the main Linux phone OSes (Manjaro with KDE Plasma, etc).
  • Low to moderate hardware specs
  • Likely >6 months from purchase to getting device

Minimum goal specs (we might be able to do much better than these, but again might not):

  • 4 Core
  • 32 GB Storage
  • USB Type-C (Not necessarily display out capable)
  • ~8 MP Front camera
  • GPS
  • GSM Modem (US)

Software Goals:

  • Only open source apps pre-installed
  • Phone calls
  • View websites / webapps including at least 1 rideshare/taxi service working (may not be official)
  • 2 day battery life (during "normal" usage)

Discussions: Phoronix

22 May, 2022 04:30AM

May 20, 2022

Kubuntu General News: Plasma 5.25 Beta available for testing

Are you using Kubuntu 22.04 Jammy Jellyfish, our current Stable release? Or are you already running our development builds of the upcoming 22.10 Kinetic Kudu?

We currently have Plasma 5.24.90 (Plasma 5.25 Beta)  available in our Beta PPA for Kubuntu 22.04, and in the Ubuntu archive and daily ISO build for the 22.10 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:

DISCLAIMER: Today we are bringing you the preview version of KDE’s Plasma 5.25 desktop release. Plasma 5.25 Beta is aimed at testers, developers, and bug-hunters. To help KDE developers iron out bugs and solve issues, install Plasma 5.25 Beta and test run the features listed below. Please report bugs to our bug tracker. We will be holding a Plasma 5.25 beta review day on May 26 (details will be published on our social media) and you can join us for a day of bug-hunting, triaging and solving alongside the Plasma devs! The final version of Plasma 5.25 will become available for the general public on the 14th of June.

DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.

Testers of the Kubuntu 22.10 Kinetic Kudu development series:

Testers with a current install can simply upgrade their packages to install the 5.25 Beta.

Alternatively, a live/install image is available at:

Users on Kubuntu 22.04 Jammy Jellyfish:

5.25 Beta packages and required dependencies are available in our Beta PPA.

The PPA should work whether you are currently using our backports PPA or not.

If you are prepared to test via the PPA, then…..

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
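The revert flow with ppa-purge can be sketched as below. The DRY_RUN guard is our own addition so the script is safe to run anywhere; set DRY_RUN=0 only on a machine where you actually want to purge the PPA:

```shell
# Back out of the beta PPA: ppa-purge downgrades every package the PPA
# supplied back to the archive versions. DRY_RUN=1 (the default) only
# prints the steps instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run sudo apt install -y ppa-purge
run sudo ppa-purge ppa:kubuntu-ppa/beta
```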

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.24?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical set up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.


Please stop by the Kubuntu-devel IRC channel on if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on
[2] –

20 May, 2022 05:06PM

hackergotchi for GreenboneOS


TISAX Certification for Greenbone

Greenbone is now a TISAX participant and its Information Security Management System (ISMS) and data protection processes are certified within the German automotive industry’s TISAX scheme. “We have taken this step as an effort in providing the best possible protection of sensitive and confidential information for our customers, as the next logical step after being successfully certified for worldwide accepted international industry standards like ISO 27001 and ISO 9001.” – Dr. Jan-Oliver Wagner, CEO of Greenbone. The results are available on the ENX portal using the Scope ID S3LW9L and the Assessment ID A1P7V9. TISAX and TISAX results are not intended for the general public.

TISAX, the “Trusted Information Security Assessment Exchange”, is a mechanism for checking and exchanging test results according to industry-specific standards. Originally created as a system for the exchange of standardized test results in the automotive industry, it is optimized for the risk assessment of suppliers. TISAX is developed and governed by the ENX Association and published by the German Association of the Automotive Industry (VDA). Its focus lies on secure information processing between business partners, the protection of prototypes, and data protection in accordance with the EU’s General Data Protection Regulation (GDPR) for potential deals between car manufacturers and their service providers or suppliers.

As a crucial part of a secure supply chain, TISAX is a standard for Information Security Management Systems (ISMS), originally derived from the ISO/IEC 27001 standard in 2017, but it has since diverged. For the automotive industry, TISAX brings standardization and quality assurance, and guarantees that information security measures are assessed by audit providers in accordance with the VDA standards. Audits according to TISAX, especially for service providers and suppliers, are carried out by so-called “TISAX audit service providers” and come with three maturity levels, an overview of which can be found in the TISAX Participant Handbook and on the websites of certification providers such as Adacor (German only).

Greenbone’s certifications increase our products’ value for our customers, not just by saving time and money, but also by proving our outstanding security level and high standards. Elmar Geese, CIO at Greenbone: “With TISAX, we document our independently audited security status. Customers do not need to do individual assessments, work with lengthy questionnaires or all the other things needed in a bottom-up audit. We guarantee that we meet their security requirements.”

Therefore, Greenbone follows the question catalogue of information security of the German Association of the Automotive Industry (VDA ISA). The assessment was conducted by an audit provider. The result is exclusively retrievable via the ENX portal (Scope ID: S3LW9L, Assessment ID: A1P7V9).

20 May, 2022 11:12AM by Elmar Geese

May 17, 2022

hackergotchi for Purism PureOS

Purism PureOS

Free Software Support Is Critical to Its Success

I’ve been in many “Linux on the Desktop” debates over the years and my stance today is largely the same as two decades ago: if you want free software to succeed, it must be pre-installed on hardware where all hardware features work, with a hardware vendor that supports it. It doesn’t matter nearly as much […]

The post Free Software Support Is Critical to Its Success appeared first on Purism.

17 May, 2022 06:37PM by Kyle Rankin

hackergotchi for Pardus


Strateji ve Bütçe Başkanlığı (the Presidency of Strategy and Budget) Is Using Engerek

The TÜBİTAK ULAKBİM Pardus team's work to expand the use of Pardus and open source software in public institutions continues at full speed. Through the cooperation between the Presidency of Strategy and Budget and TÜBİTAK ULAKBİM, user computers have been migrated to Pardus, and open source applications developed by the Pardus team, as well as other proven open source systems, have been put into service. Today, Pardus and open source software are used productively on live systems.


The Engerek Identity Management System Goes Live

To address identity management, one of the key needs of enterprise IT, the Presidency of Strategy and Budget has deployed the Engerek Identity Management System developed by the TÜBİTAK ULAKBİM Pardus team. Engerek centralises the management of user accounts spread across many different applications within the institution, so all of a user's accounts in enterprise applications can be managed with a single click. Many new capabilities have been introduced, such as automatic synchronisation of accounts across enterprise applications and password management through various mechanisms, making account administration more reliable, faster and more efficient. The TÜBİTAK ULAKBİM Pardus team provides technical support for Engerek, helps with integration, and supports efforts to raise the technical proficiency of the institution's staff.


The Cooperation Will Continue

Inter-institutional cooperation will continue on the software and system migrations needed to achieve platform and application independence within the institution. Joint work with TÜBİTAK ULAKBİM will also continue to cover technical support and maintenance needs.


About the Engerek Identity Management System

Engerek is a web-based identity management system developed in the Java programming language. Its primary goal is to manage an institution's users and accounts centrally. It is developed as open source, runs on the Tomcat application server, and supports MariaDB / MySQL / PostgreSQL databases as its identity store.

For account management, connectors are provided for directory systems such as OpenLDAP, for MS Active Directory / MS Exchange, for MariaDB / MySQL / PostgreSQL databases, for custom database tables, and for Linux operating systems including Pardus. Engerek ships with ready-made connectors for integration with other IT systems, so IT applications can easily be plugged into EnGerek and their users managed from one place.

EnGerek provides user account management and password management. Password policies can be defined; for example, a policy requiring 5 characters, 3 of them numeric and the rest letters only. EnGerek also includes a self-service interface where users can reset their passwords if they forget them.
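The example policy above (5 characters, 3 of them numeric, the rest letters) can be expressed as a simple check. This is only an illustration of the rule; EnGerek's real policies are configured in the product itself, not in shell:

```shell
# Check a password against the example policy: exactly 5 characters,
# 3 digits, and the remaining 2 characters letters only.
policy_ok() {
  p=$1
  [ "${#p}" -eq 5 ] || { echo no; return; }
  digits=$(printf '%s' "$p" | tr -cd '0-9' | wc -c)
  letters=$(printf '%s' "$p" | tr -cd 'A-Za-z' | wc -c)
  if [ "$digits" -eq 3 ] && [ "$letters" -eq 2 ]; then echo yes; else echo no; fi
}

policy_ok ab123   # prints yes (2 letters, 3 digits)
policy_ok abcde   # prints no  (no digits)
```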

In EnGerek, workflows can be defined in line with the separation-of-duties principle. These workflows make it possible for users to request roles or accounts via self-service: a user can request an account or role from their own interface and receives it once the necessary approvals have been given. Also in keeping with separation of duties, relationships between roles can be established; for example, a rule such as "a user holding role A cannot take role B" prevents users from acquiring conflicting roles.

EnGerek also embeds an XML editor. With this editor, new resources can be defined in EnGerek, workflows can be created, scheduled tasks can be edited, and report templates can be prepared.

EnGerek is intended for deployment in private companies, public institutions and universities with large numbers of users and system types. In such organisations it helps minimise the problems above and their effects: it avoids repeated data entry by sourcing the data needed for IT accounts from a personnel management system, shortens the time to open accounts for new employees, closes all accounts of departing staff on time, deactivates all accounts of staff on long leave and reactivates them immediately on their return, correctly propagates transfers, promotions and other changes to organisation, title and personal data to all accounts, lets users reset their own forgotten passwords, and improves periodic and real-time monitoring and auditing capabilities.


Why Engerek?

Open source.
No licence fees.
Simple to use, not complex.
Supported by TÜBİTAK ULAKBİM.

17 May, 2022 12:52PM

hackergotchi for Tails


Tails report for April 2022

  • Disoj, our new Project Manager, started working with us.

    It's the first time Tails has a dedicated Project Manager and we are all very excited about this change. Disoj will help us work better and faster to accomplish our mission.

  • We got very busy preparing Tails 5.0.

  • We upgraded all our infrastructure to Debian 11, which ensures that all our public and internal servers will continue receiving timely security updates after August.

  • We organized 2 online trainings for people in Mexico.

    You can download our slides and the structure of the training if you want to organize Tails workshops yourself.

  • Tails has been started more than 783 850 times this month. This makes 25 031 boots a day on average.

17 May, 2022 09:30AM

hackergotchi for ZEVENET


Ways to find breached data

Data breaches are fearsome; they can alter the trajectory of our lives. Leakage of sensitive information causes irretrievable losses for individuals, governments, and organizations. Users generate enormous amounts of data with each passing moment, increasing the risk of it being compromised. A small vulnerability can create a domino effect and result in a data breach. Lack of awareness and knowledge...


17 May, 2022 09:00AM by Zevenet

May 16, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 735

Welcome to the Ubuntu Weekly Newsletter, Issue 735 for the week of May 8 – 14, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

16 May, 2022 11:14PM

May 13, 2022

hackergotchi for Purism PureOS

Purism PureOS

The Second Best Time to Protect Your Privacy

There is a well-known Chinese proverb that says “The best time to plant a tree was 20 years ago, the second best time is now.” This saying applies to many areas of life, and it also applies to privacy. The last few decades have seen a dramatic increase in the depth and breadth of privacy […]

The post The Second Best Time to Protect Your Privacy appeared first on Purism.

13 May, 2022 03:26PM by Kyle Rankin

hackergotchi for Pardus


Public Announcement

Dear Pardus users,

To correctly inform the public and answer the questions we have received about news reports from March 2022, which covered the public offering of Güler Holding's A1 Girişim Sermayesi Yatırım Ortaklığı A.Ş. and its renaming to "Pardus Girişim Sermayesi Yatırım Ortaklığı A.Ş.", we present the following statement.


The Pardus Project, whose planning began in 2003 within the Scientific and Technological Research Council of Türkiye (TÜBİTAK), is a registered trademark and has no connection whatsoever with the company named Pardus Girişim Sermayesi Yatırım Ortaklığı.


The Pardus Project, named after the Anatolian leopard, is a free and open source operating system based on Debian GNU/Linux. It can be downloaded and installed free of charge over the internet. Besides individual use, Pardus has sub-projects that meet enterprise needs so it can easily be rolled out in public institutions and SMEs. The main such systems, still under development within the TÜBİTAK National Academic Network and Information Centre (TÜBİTAK ULAKBİM), are the 'Liderahenk' Central Management System, the 'Engerek' Identity Management System, the 'Ahtapot' Integrated Cyber Security System, and 'ETAP', the Interactive Whiteboard Interface Project. TÜBİTAK ULAKBİM and the Pardus Project are founding members of the Turkish Open Source Platform.

Pardus use in the public sector continues to expand; the Pardus operating system and its complementary enterprise solutions ETAP, Liderahenk, Ahtapot and Engerek are in use at institutions such as the Ministry of National Education, the Presidency of Religious Affairs, the Information and Communication Technologies Authority, the Presidency of Strategy and Budget, İSKİ and AFAD. You can visit the Pardus website to follow the latest developments.

Respectfully announced to the public.


The Pardus Team



13 May, 2022 01:09PM

May 12, 2022

hackergotchi for Purism PureOS

Purism PureOS

Summer Sale on Librem 14 Laptops

Looking for the best time to order your Librem 14 laptop? Librem 14 is one of the most secure laptops we’ve built so far.  The laptop is designed chip-by-chip, line-by-line, to respect your rights to privacy, security, and freedom. Standard orders ship within 10 days. All you have to do is enter the coupon code, L14SUMMER […]

The post Summer Sale on Librem 14 Laptops appeared first on Purism.

12 May, 2022 08:04PM by Purism

May 11, 2022

Improved Delivery Time for Librem 5 USA: May 2022 Update

We are almost there! We have overcome a number supply chain and manufacturing challenges for the Librem 5 USA and have been steadily shipping through orders this Spring. It’s been really gratifying to hear all of the positive feedback from Librem 5 USA customers who now have their phones. Based on the current backlog of […]

The post Improved Delivery Time for Librem 5 USA: May 2022 Update appeared first on Purism.

11 May, 2022 05:00PM by Purism

hackergotchi for ZEVENET


ZEVENET CE 5.12 Released

Hello everyone, ZEVENET is glad to announce that Community Edition 5.12 has been released. New features: [webgui] new web GUI with new Angular technology v12 [ssl] letsencrypt integration [lslb] http: add and delete HTTP headers [lslb] http: priority load balancing support [lslb] http: rewrite URL directive (proxy pass) [lslb] http: updated ZEVENET HTTP/S core zproxy [lslb] l4: updated ZEVENET...


11 May, 2022 08:19AM by Zevenet

May 10, 2022

hackergotchi for Pardus


İSKİ Pardus and Open Source Migration Success Story

İSKİ has been one of the first institutions to adopt and support Pardus since its initial release. Having accelerated its Pardus migration over the past two years in cooperation with TÜBİTAK, İSKİ has increased its user count and has also deployed the Liderahenk central management server together with the other Pardus servers this architecture requires.

Tayfun İşbilen, Head of the İSKİ IT Department, on the Pardus and Open Source Migration Success Story

1. How did you decide to adopt open source software?

As you know, open source software is software whose source code is open, which is distributed freely, and which any user can help develop.

Another important advantage is that this code does not belong to any single developer or organisation and can be continuously modified and improved. These benefits align with the “openness and transparency” principle of public administration; and because open source software is highly sustainable and delivers large savings on software costs, it has long been a project I have followed and wanted to put into practice.

2. What does your system topology at İSKİ look like? Where do Pardus servers and clients fit into it?

Our system topology is built on a network comprising our redundant data centres, the branch networks through which we serve İSKİ subscribers and, since we are critical infrastructure, the links connecting our existing facilities. In this central structure we currently run a hybrid environment on both the user and the server side; that is, we use both Linux and Windows operating systems.

On user computers we run Pardus 21 alongside various Windows versions. Our migration from Windows to Pardus is, of course, still ongoing.

On the server side we use Linux derivatives and Windows editions. Since we attach particular importance to spreading open source applications, the number of our Linux-based servers is already quite high and keeps growing.

As an institution, we have been among the first to adopt and support Pardus since its initial release. For a long time, Pardus installations and rollouts in our institution progressed on a per-user basis. In the past two years we started a project in cooperation with TÜBİTAK to increase our Pardus user count and centralise management. We deployed the Liderahenk central management server and the other Pardus servers this architecture requires, and we can now manage Pardus users from a central system.

3. In which internal processes (application servers, terminals, office software, firewalls, etc.) have you carried out the transition to open source software?

For office software we switched the Pardus PCs to LibreOffice and, as part of our work with TÜBİTAK, had the relevant staff trained on LibreOffice. On our Pardus migration roadmap we also want to install LibreOffice on the non-Pardus computers, bringing those users one step closer to Pardus.

We have put open source applications into service for many tasks; for example, “Redmine” for tracking work processes, “Moodle” as our own training platform, “BigBlueButton” for online meetings, “Zabbix” for server performance monitoring, “Nextcloud” as our own file storage and sharing platform, plus FreeRADIUS, Grafana and others. Our work on this front naturally continues.

4. What stage of the Pardus and open source migration is İSKİ at? Do you have new plans for the years ahead?

Pardus actually has a long history in our institution; we were among the first to migrate to it. Until we began working with TÜBİTAK, we managed the Pardus migration with our own resources. The transition to open source software is something I attach particular importance to, and the work started accordingly. Even through the pandemic we continued our research, installations and related work. We formed an internal team dedicated to spreading open source software and launched the process by contracting a free software company. This is, of course, an ongoing effort, and we want to apply it at every level, freeing ourselves as far as possible from licence and operating system dependencies.

5. Did İSKİ work with Pardus business/migration partner companies during this transition?

As İSKİ, we had a very productive period working with TÜBİTAK on the Pardus migration, and we want to start a new one. Beyond that, there are points where we get stuck; in particular, the Pardus compatibility of in-house applications strongly affects the process, and we work with companies on this. For example, AutoCAD is used by our planning and project units, so we searched for an equivalent program that runs on Pardus, and we found one.

We also stay in constant contact with TÜBİTAK, continuing compatibility work on the programs, products, software and drivers we use or plan to use, so that the Pardus migration can progress and be centralised.

6. Which Pardus products (Ahtapot, Engerek, Liderahenk, etc.) do you use within İSKİ?

We use Liderahenk and LibreOffice. We are also planning demo deployments to evaluate the features of the other products.

7. What benefits have you gained by switching to open source software? How much have you saved in Total Cost of Ownership (TCO)?

In my view, switching to open source software is a form of freedom for institutions, in terms of both cost and usage. On the user side we benefited considerably from avoided operating system and office suite licence costs. Considering that we have roughly 2,000 Pardus users, the avoided Windows and MS Office licence costs alone represent a significant benefit, and since the Pardus migration is still ongoing, it will deliver serious savings for our institution.

8. Did you encounter difficulties in the transition to open source software? Was there resistance within the institution? How did you overcome it?

Of course we did, and we still do. This is a matter of vision; driving it at the management level and breaking user habits was somewhat challenging. We did not have many problems with the centrally run open source projects, because those are services we prepare in the background with the technical team before offering them to users.

What challenged us most was the Pardus user side: users do not want to give up their Windows habits. But we have seen that all of this can be overcome with technical support and training. As İSKİ we have many branches and staff across Istanbul; this dispersed structure is a disadvantage, but we have a technical team that is experienced and competent with Pardus. Even at remote sites, this team reaches every user who runs into trouble and resolves the problem. For issues that do not require an on-site visit, we connect to Pardus users remotely and provide a quick fix.

We intend to expand the training on using Pardus and LibreOffice and to make it available to all users on our central training platform; here, too, we will have new work with TÜBİTAK in the next cooperation period.

9. Looking specifically at Pardus, what are the advantages of using software that is both domestic and open source?

First of all, cost is definitely a major advantage; as I mentioned earlier, the avoided licence costs for the operating system and office software deliver serious savings for the institution.

Being domestic is another advantage, as it provides flexibility and speed in development and problem resolution. As we spread Pardus among users, it will serve as a stepping stone for other open source products to be adopted and to multiply in our data centre.

10 May, 2022 11:56AM

hackergotchi for GreenboneOS


Active and Passive Vulnerability Scans – One Step Ahead of Cyber Criminals

In networked production, IT and OT are growing ever closer together. Where a security gap once “only” caused a data leak, today it can bring the entire production to a standstill. Those who carry out regular active and passive vulnerability scans can protect themselves.

What seems somewhat strange in the case of physical infrastructure – who would stage a break-in to test their own alarm system? – is a tried and tested method in IT for identifying vulnerabilities. This so-called active scanning can be performed daily and automatically. Passive scanning, on the other hand, detects an intrusion in progress, because every cyber intrusion leaves traces, albeit often hidden ones.

Controlling the Traffic

Firewalls and antivirus programs, for example, use passive scanning to check traffic reaching a system. This data is then checked against a database. Information about malware, unsafe requests and other anomalies is stored there. For example, if the firewall receives a request from an insecure sender that wants to read out users’ profile data, it rejects the request. The system itself is unaware of this because the passive scan does not access the system but only the data traffic.
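The database check described above can be pictured as a lookup against a signature set. The following is a self-invented minimal sketch (the names SIGNATURES and check_request are hypothetical), not the API of any real firewall or Greenbone product:

```python
# Minimal sketch of passive traffic checking: request metadata is
# matched against a signature database; the protected system itself
# is never touched. All names and patterns here are illustrative.

SIGNATURES = {
    "sqlmap": "known attack tool in User-Agent",
    "../": "path traversal attempt",
}

def check_request(user_agent: str, path: str):
    """Return (allowed, reason) for a single incoming request."""
    for pattern, reason in SIGNATURES.items():
        if pattern in user_agent or pattern in path:
            return False, reason
    return True, "no signature matched"

# A client identifying itself as an attack tool is rejected before
# its request ever reaches the system behind the firewall.
allowed, reason = check_request("sqlmap/1.6", "/profile")
```

Because only the traffic is inspected, the protected system spends no extra computing power on the check, which matches the advantage described below.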

The advantage of this is the fact that the system does not have to use any additional computing power. Despite the scan, the full bandwidth can be used. This is particularly useful for critical components. They should have the highest possible availability. The fewer additional activities they perform, the better.

The disadvantage of passive scanning is that only systems that are actively communicating by themselves can be seen. This does not include office software or PDF readers, for example. But even services that do communicate do so primarily with their main functions. Functions with vulnerabilities that are rarely or not at all used in direct operation are not visible, or are only visible when the attack is already in progress.

Checking the Infrastructure

Active scans work differently and simulate attacks. They make requests to the system and thereby try to trigger different reactions. For example, the active scanner sends a request for data transfer to various programs in the system. If one of the programs responds and forwards the data to the simulated unauthorized location, the scanner has found a security hole.
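One way to picture what a scanner does with a triggered response is a banner-to-vulnerability lookup. This is a minimal, self-invented sketch (the service names, versions, findings and the probe stub are all hypothetical), not Greenbone's implementation:

```python
# Sketch of the assessment step behind an active scan: probe a service,
# read its banner, and compare the advertised version against a table of
# known-vulnerable releases. probe() is a stub standing in for a real
# socket connection; every name and version here is made up.

KNOWN_VULNERABLE = {
    ("ExampleFTPd", "2.3.4"): "hypothetical backdoored release",
}

def parse_banner(banner):
    """Split a 'Name/Version' service banner into (name, version)."""
    name, _, version = banner.partition("/")
    return name, version

def assess(banner):
    """Return a finding if the banner matches a known-vulnerable version, else None."""
    return KNOWN_VULNERABLE.get(parse_banner(banner))

def probe(host, port):
    """Stub: a real active scanner would connect and read the banner here."""
    return "ExampleFTPd/2.3.4"

finding = assess(probe("192.0.2.1", 21))  # -> "hypothetical backdoored release"
```

In a real scanner the probe would interact with the service directly, which is why active scanning can also reach programs that never talk to the network on their own.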

Differences between active and passive vulnerability scans

Left: Active scans send queries to the system in an attempt to trigger different responses. Right: Passive scans check the traffic reaching a system and match this data against a database.

The advantage: the data quality that can be achieved with active scanning is higher than with passive scanning. Since interaction takes place directly with software and interfaces, problems can be identified in programs that do not normally communicate directly with the network. This is also how vulnerabilities are discovered in programs such as Office applications.

However, when interacting directly, systems have to handle extra requests which may then affect the basic functions of a program. Operating technology such as machine control systems, for example, are not necessarily designed to perform secondary tasks. Here, scanning under supervision and, as a supplement, continuous passive scanning are recommended.

Scanning Actively, but Minimally Invasive

Nevertheless, active scanning is essential for operational cyber security. This is because the risk posed by the short-term overuse of a system component is small compared to a production outage or data leak. Moreover, active scans not only uncover vulnerabilities, they can also enhance passive scans. For example, the vulnerabilities that are detected can be added to firewall databases. This also helps other companies that use similar systems.

Active and Passive Scanning Work Hand in Hand

Since the passive scanner can also provide the active scanner with helpful information, such as details about cell phones or the properties of network services, these two security tools can be considered complementary. What they both have in common is that they automatically get the best out of the given situation in the network. For both the passive and the active scanning technique, it does not matter which or how many components and programs the network consists of; both security technologies detect this by themselves and adjust to it. Only at a higher level of security does the optimized tuning of network and scanners begin.

So it is not a question of whether to use one or the other. Both methods are necessary to ensure a secure network environment. A purely passive approach will not help in many cases. Proactive vulnerability management requires active scans and tools to manage them. This is what Greenbone’s vulnerability management products provide.

10 May, 2022 10:03AM by Jan-Oliver Wagner

hackergotchi for Ubuntu developers

Ubuntu developers

Utkarsh Gupta: FOSS Activities in April 2022

Here’s my (thirty-first) monthly but brief update about the activities I’ve done in the F/L/OSS world.


This was my 40th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I did this month but mostly non-technical, now that DC22 is around the corner. Here are the things I did:

Debian Uploads

  • Helped Andrius w/ FTBFS for php-text-captcha, reported via #977403.
    • I fixed the same in Ubuntu a couple of months ago and they copied over the patch here.

Other $things:

  • Volunteering for DC22 Content team.
  • Leading the Bursary team w/ Paulo.
  • Answering a bunch of questions of referees and attendees around bursary.
  • Being an AM for Arun Kumar, process #1024.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.


This was my 15th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my thirty-first month as a Debian LTS and twentieth month as a Debian ELTS paid contributor.
I worked for 23.25 hours for LTS and 20.00 hours for ELTS.

LTS CVE Fixes and Announcements:

  • Issued DLA 2976-1, fixing CVE-2022-1271, for gzip.
    For Debian 9 stretch, these problems have been fixed in version 1.6-5+deb9u1.
  • Issued DLA 2977-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 9 stretch, these problems have been fixed in version 5.2.2-1.2+deb9u1.
  • Working on src:tiff and src:mbedtls to fix the issues, still waiting for more issues to be reported, though.
  • Looking at src:mutt CVEs. Haven’t had the time to complete but shall roll out next month.

ELTS CVE Fixes and Announcements:

  • Issued ELA 593-1, fixing CVE-2022-1271, for gzip.
    For Debian 8 jessie, these problems have been fixed in version 1.6-4+deb8u1.
  • Issued ELA 594-1, fixing CVE-2022-1271, for xz-utils.
    For Debian 8 jessie, these problems have been fixed in version 5.1.1alpha+20120614-2+deb8u1.
  • Issued ELA 598-1, fixing CVE-2019-16935, CVE-2021-3177, and CVE-2021-4189, for python2.7.
    For Debian 8 jessie, these problems have been fixed in version 2.7.9-2-ds1-1+deb8u9.
  • Working on src:tiff and src:beep to fix the issues, still waiting for more issues to be reported for src:tiff and src:beep is a bit of a PITA, though. :)

Other (E)LTS Work:

  • Triaged gzip, xz-utils, tiff, beep, python2.7, python-django, and libgit2,
  • Signed up to be a Freexian Collaborator! \o/
  • Read through some bits around that.
  • Helped and assisted new contributors joining Freexian.
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts).
  • General and other discussions on LTS private and public mailing list.
  • Attended monthly Debian meeting. Held on Jitsi this month.

Debian LTS Survey

I’ve spent 18 hours on the LTS survey on the following bits:

  • Rolled out the announcement. Started the survey.
  • Answered a bunch of queries, people asked via e-mail.
  • Looked at another bunch of tickets:
  • Sent a reminder and fixed a few things here and there.
  • Gave a status update during the meeting.
  • Extended the duration of the survey.

Until next time.
:wq for today.

10 May, 2022 05:41AM

May 09, 2022

hackergotchi for Purism PureOS

Purism PureOS

Cameras: It’s Complicated

Two years before I started working on cameras for the Librem 5, I thought the work would go something like this: first, write a driver, then maybe calibrate the colors, connect to the camera support infrastructure, and bam! PureOS users on the phone would then do teleconferences with Jitsi or snap selfies with Cheese, just […]

The post Cameras: It’s Complicated appeared first on Purism.

09 May, 2022 03:39PM by Dorota Czaplejewicz

May 07, 2022

hackergotchi for SparkyLinux


Sparky 6.3

The 3rd update of Sparky 6 – 6.3 is out.

It is a quarterly updated point release of Sparky 6 “Po Tolo” of the stable line. Sparky 6 is based on and fully compatible with Debian 11 “Bullseye”.

– system upgraded from Debian & Sparky stable repos as of May 5, 2022
– PC: Linux kernel 5.10.106 (5.16.12 can be installed from Debian backports; 5.17.5~sparky can be installed from Sparky unstable repos)
– ARM: Linux kernel 5.15.32-v7+
– Firefox (100.0 Mozilla build can be installed from Sparky repos as the ‘firefox-sparky’ package, but it uses a new user profile so your bookmarks, passwords, settings have to be synchronized from the Mozilla account; PC only)
– Thunderbird 91.8.0
– VLC 3.0.16
– LibreOffice 7.0.4
– LXQt 0.16.0
– Xfce 4.16
– Openbox 3.6.1
– KDE Plasma 5.20.5
– small improvements

System reinstallation is not required; if you have Sparky 6.x installed, make a full system upgrade with the following commands:

sudo apt update
sudo apt full-upgrade

or via the System Upgrade GUI tool.

Sparky 6 is available in the following flavours:
– amd64: LXQt, KDE Plasma, Xfce, MinimalGUI (Openbox) & MinimalCLI (text mode)
– i686: LXQt, MinimalGUI (Openbox) & MinimalCLI (text mode)
– armhf: Openbox & CLI (text mode)

New live/install media of the stable line can be downloaded from the download/stable page.

Informacja o wydaniu w języku polskim:

07 May, 2022 08:43PM by pavroo

May 06, 2022

hackergotchi for Purism PureOS

Purism PureOS

Purism and Linux 5.18

Following up on our report for Linux 5.17 this summarizes the progress on mainline support for the Librem 5 phone and its development kit during the 5.18 development cycle. This summary is only about code flowing upstream. Librem 5 camera support This time it’s been all about various media drivers that are needed when using […]

The post Purism and Linux 5.18 appeared first on Purism.

06 May, 2022 04:02PM by Martin Kepplinger

May 05, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Weekly News #259

Release of the GNU Coreutils 9.1 set of core system utilities:

LXQt 1.1 User Environment Released:

Rsync 3.2.4 Released:

Celestial shuns snaps:

The SDL developers have canceled the default Wayland switch in the 2.0.22 release:

New versions of Box86 and Box64 emulators that allow you to run x86 games on ARM systems:

Release of the QEMU 7.0 emulator:

PPA proposed for Ubuntu to improve Wayland support in Qt:

Movement to include proprietary firmware in the Debian distribution:

Git 2.36 source control released:

oVirt 4.5.0 Virtualization Infrastructure Management System Release:

New versions of OpenWrt 21.02.3 and 19.07.10:

Ubuntu 22.04 LTS distribution release:

Valve has released Proton 7.0-2, for running Windows games on Linux:

Release of OpenBSD 7.1:

Summary of results of the election of the leader of the Debian project:

New release of the Silero speech synthesis system:

Release of KDE Gear 22.04:

Full Circle Magazine
Host:, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust

05 May, 2022 01:13PM

hackergotchi for SparkyLinux



There is a new application available for Sparkers: Nala

What is Nala?

Nala is a front-end for libapt-pkg. Specifically, we interface using the python-apt API. Especially for newer users, it can be hard to understand what apt is trying to do when installing or upgrading. We aim to solve this by not showing some redundant messages, formatting the packages better, and using color to show specifically what will happen with a package during install, removal, or an upgrade.

Installation (Sparky 6):

sudo apt update
sudo apt install nala-legacy

Installation (Sparky 7):

sudo apt update
sudo apt install nala

License: GNU GPL 3.0


05 May, 2022 12:57PM by pavroo

hackergotchi for Qubes


Automated OS testing on physical laptops

Our journey towards automating OS tests on physical laptops started a few years ago with the idea of using Intel AMT to drive tests on physical machines. To start, I got an initial implementation working. In particular, VNC for input/output and power control worked. I tried to get a virtual CD working, but it turned out to be quite unstable. Worse — and more importantly — it was really just a CD, not a CD/DVD, which meant that the protocol couldn’t handle images larger than 2 GB. Some time later I abandoned this approach, for two related reasons:

  1. Many machines that we want Qubes OS to support intentionally do not have Intel AMT.
  2. The single AMT-enabled machine that I had been using to develop this feature broke.

If anyone would like to resume this work, this page includes a lot of useful info about Intel AMT on Linux.

Recently, I came back to the project with a new approach: to capture video from HDMI output and use an emulated USB keyboard and mouse for input. Then, I added power control to the mix, combined everything on a Raspberry Pi, and got a working prototype of an openQA worker that runs the tests on a physical machine, instead of a virtual one.

The whole setup includes several devices:

  • One “central” Raspberry Pi that controls a power strip and serves boot files.
  • One Raspberry Pi per laptop that runs an openQA worker for that laptop. It emulates a USB device for that laptop and captures HDMI output from it.

All these elements are detailed below.

Base system

The goal was to run an openQA worker on a Raspberry Pi 4. Why a Raspberry Pi (RPi)?

  • Their USB controllers can play the role of a device, not just that of USB host.
  • They’re powerful enough to run the video processing required by openQA.
  • They’re (mostly) readily available and relatively cheap.

As a base system, I chose OpenSUSE, because that’s openQA’s native distribution. Getting OpenSUSE to work on an RPi was rather straightforward, but the choice did lead to a few issues discussed later in this article.

Power control

Power control was the first stage of this project. I thought it looked like the simplest part.

To reliably run unattended tests, I needed a way to interrupt a test when it went into some unrecoverable state (kernel panic, hard hang, etc.). With AMT, I had a built-in API for that, but now I needed something else. I chose a power strip that was remotely controlled via USB. Then, I removed the batteries from the laptops connected to the setup. This gave me a very reliable way to interrupt whatever was running on the machines by simply powering them down. But it turned out that powering them back on may not be that simple.

In the current setup, there are several laptops, each of them slightly different, and each (sic!) requiring a slightly different approach to power management. Here are some things I tried and that worked on some machines:

  1. Setting the BIOS to automatically power on the machine when a power supply was connected. This is the simplest method. Sadly, only one of the machines supported it.
  2. Sending a Wake-On-Lan packet. Here, reliability depends on the device. For some, it just works, while others require enabling it in the network card (with the ethtool -s eth0 wol g command), and some lose the setting either on system startup or on disconnecting the power…
  3. When all else fails, one can just press the physical power button. Of course it would be too much work to do it manually, so I attached a servo motor in the exact spot where the power button is. Then, I drove that servo motor from an RPi.
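The Wake-on-LAN packet mentioned in option 2 has a simple, well-documented layout: six 0xFF bytes followed by the target MAC address repeated 16 times, conventionally sent as a UDP broadcast to port 9. A minimal sketch (not the author's actual tooling):

```python
import socket

# Build and send a Wake-on-LAN "magic packet". The packet format is
# standard; the MAC address and broadcast target below are examples.

def magic_packet(mac: str) -> bytes:
    """6 x 0xFF followed by the 6-byte MAC repeated 16 times (102 bytes)."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Broadcast the magic packet on UDP port 9."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, 9))

pkt = magic_packet("00:11:22:33:44:55")  # 102 bytes in total
```

As noted above, whether the machine actually reacts depends on the network card and on whether the WoL setting survives power loss.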

Power button servo

System startup

After achieving control over system power, the next step was taking control of which operating system starts there. I considered two options:

  • A USB boot drive, emulated from an RPi
  • Network boot

The first option turned out to be problematic when combined with emulated USB input devices (see below), at least on some laptops. While a single USB device can have multiple interfaces (basically being sub-devices), many types of system firmware do not like to boot from such devices. When I exposed a USB device that has both a storage interface and a HID interface (keyboard/mouse), the system didn’t consider it a bootable device. One solution would be to use two separate devices, but that would require yet another RPi (or something similar), since most (all?) such boards support emulating only a single device. Another way around it could be emulating a USB hub and getting two virtual devices this way, but Linux does not support USB hub emulation. Since I had an alternative, I didn’t explore this option any further. On systems that are fine with a single multi-function USB device, I can use that. On others, I use network boot.

The second option turned out not to be that straightforward either. First of all, not all systems support booting from the network to begin with. To solve this problem, I got a USB stick and put iPXE on it. Then, I configured the system to boot from that USB stick. I couldn’t use Grub here to gain network boot, because Grub supports network devices only via the system firmware (BIOS/EFI), and this support is missing on systems not capable of network booting. iPXE, on the other hand, supports a wide range of network devices on its own and also allows simple scripting, like booting different systems depending on various settings. Unfortunately, it cannot boot Xen via the multiboot2 protocol (required to boot with full EFI support); it can only do multiboot1. So, I did need Grub. Luckily, iPXE registers its drivers as appropriate EFI services, so when I load Grub from iPXE, it can talk to the network.

I prepared a Grub configuration that can boot different systems on different laptops depending on a separate configuration file (loaded via the load_env Grub command) and a tool to conveniently switch between them. This got me a nice menu:

$ testbed-control 2 help
Selected target: 2

Available commands:
 - reset - hard reset the target
 - poweron - power on the target
 - poweroff - (hard) power off the target
 - wake - wake up the system (either wake-on-lan, or button press)
 - rescue - switch next boot to rescue system (doesn't load anything from the disk)
 - fallback - switch next boot to fallback system (loads /boot/efi/EFI/qubes/grub-fallback.cfg)
 - normal - switch next boot to normal system
 - custom - switch next boot to custom grub config (/srv/tftp/test2/grub.cfg)

The first four commands are about power control (see above), and the rest are about choosing what to boot. The normal command simply starts the system installed on the local disk, while rescue allows booting an initramfs-only system to diagnose why the normal system doesn’t work. The custom option allows, in practice, starting an arbitrary kernel (not necessarily from the disk). That option is especially useful for debugging Linux and Xen issues, including doing automatic bisection, although it requires a bit more in terms of glue scripts (but that’s a topic for another article).
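The env-file-driven selection described above could look roughly like the following grub.cfg fragment. All paths, variable names and device references here are illustrative guesses, not the author's actual configuration:

```
# Hypothetical sketch of env-driven boot selection. testbed-control
# would rewrite the boot_mode variable in this env file via grub-editenv.
load_env --file (tftp)/test2/bootenv

if [ "${boot_mode}" = "rescue" ]; then
    # initramfs-only rescue system; nothing is loaded from the disk
    linux (tftp)/test2/rescue/vmlinuz
    initrd (tftp)/test2/rescue/initrd.img
    boot
elif [ "${boot_mode}" = "custom" ]; then
    configfile (tftp)/test2/grub.cfg
else
    # normal: hand over to the config installed on the local disk
    configfile (hd0,gpt2)/boot/efi/EFI/qubes/grub.cfg
fi
```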

Surprisingly, I had one case where booting the local system turned out to be tricky. When the bootloader is loaded from the network, that particular UEFI does not register services to access the local disk. As it turns out, Grub does not support NVMe drives directly; it supports them only via UEFI services. I could have switched to another disk, or to booting via USB, but neither of those options felt appealing. I wanted to run tests on NVMe drives too, and while USB booting works, it is a bit fragile, because one needs to be careful not to overwrite that boot drive (especially when testing system installations). So, I developed a workaround: setting a BootNext EFI variable (selecting the alternative boot option for just the next startup) and rebooting. Unfortunately, Grub itself does not have a function to set EFI variables (it can only read them), but building Linux + minimal initrd with relevant tools is rather easy. By the way, if I were starting Linux anyway, I could simply kexec the target kernel from the NVMe disk using Linux’s drivers, but I wanted the actual startup to remain closer to the “normal” startup, including respecting the relevant Grub configuration.

There was one final problem to solve. When installing Qubes OS, it will set itself as the default boot target. This means that all of the above boot options will be overridden by the installer. To solve this issue, I passed a kickstart file to the installer that restores the original boot order at the very last step (%post script).
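A %post step of the kind described might look as follows. The entry numbers are hypothetical placeholders for whatever boot order the testbed saved before installation:

```
%post --erroronfail
# Restore the original boot order after the Qubes installer has made
# itself the default boot target. "0003,0001" is a hypothetical order
# putting the network/iPXE entry back in front.
efibootmgr --bootorder 0003,0001
%end
```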

To summarize, I now had:

  • A way to load Grub2 on each test system (either via PXE or via iPXE loaded from a USB stick)
  • A way to conveniently control which OS Grub2 will start
  • A way to load a local kernel even if Grub2 does not see the disk

Startup diagram

Video capture

I started experimenting with HDMI-over-IP extenders. Some turned out to use a rather standard video format for streaming. It worked fine… with one little inconvenience: handling the network stream put a significant load on the Raspberry Pi that handled it. I could use a different system for video processing than the RPi responsible for USB emulation, but that would make the whole setup even more complex. Anyway, that’s just a minor inconvenience that requires some more cooling on the RPi, not a deal breaker.

About the time I got all of this working, I came across PiKVM, which looked almost exactly like what I needed. It uses a TC358743 chip connected directly to an RPi (via camera interface) instead of a separate HDMI-to-IP encoder. Setting it up presented some challenges, but the PiKVM project (or, I should say, Maxim Davaev, the guy behind the project) had all of this figured out already.

The first issue I encountered was getting a TC358743 device initialized and detected at all. There were several parts to this:

  1. The default kernel from OpenSUSE does not include all the necessary drivers (in particular, bcm2835-unicam). They’re currently available only in a kernel from the Raspberry Pi Foundation. I chose to compile it myself with a config based on the one from the PiKVM project. There could be something I’m missing here, but this approach got me a working setup, and I didn’t want to spend too much time on debugging video drivers.
  2. Several modifications to config.txt were required:
    • dtoverlay=tc358743 — let the kernel know where the device is
    • start_x=1 — load GPU firmware with video input processing included
    • gpu_mem=128 — required by start_x=1

    The latter two must be in config.txt specifically, not in a file included from there, which is a bit problematic on OpenSUSE, because config.txt is forcefully overridden on each update and only the included extraconfig.txt is meant for user modification. I worked around the issue by mounting the bootloader partition under an alternative mountpoint to disarm the config.txt override. This issue is in OpenSUSE's bug tracker. I have yet to test the upstream fix for the issue.

After fixing the above, I had a /dev/video0 device. Then, it was just a matter of configuring it. Specifically:

  1. Loading an appropriate EDID: v4l2-ctl --set-edid=.... The EDID describes the capabilities of this “monitor”. There is a catch if you want to use Full HD resolution: the interface bandwidth is a bit too low for 1920x1080 with a 60Hz refresh rate, but it is enough for 50Hz (yes, unfortunately). This had to be described in the EDID. The author of the tutorial linked above provided some examples.
  2. Setting digital video timings: v4l2-ctl --set-dv-bt-timings query. This can be done only when the system connected to the HDMI port starts and chooses a resolution, and it needs to be repeated each time the resolution changes.

I’ve integrated both of the above into the openQA driver.
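Concretely, the two steps might look like the following in a setup script. This is a hedged sketch, not the actual driver code: the device node and the EDID file name are illustrative.

```shell
# Step 1: load an EDID advertising the desired mode(s); done once at setup.
v4l2-ctl -d /dev/video0 --set-edid=file=edid-1024x768.hex --fix-edid-checksums

# Step 2: query and apply the DV timings chosen by the connected system;
# this must be repeated every time the SUT changes resolution.
v4l2-ctl -d /dev/video0 --query-dv-timings
v4l2-ctl -d /dev/video0 --set-dv-bt-timings query
```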

For the openQA integration, using 1920x1080 resolution was not perfect. OpenQA operates on images at 1024x768. If it receives anything else, it scales it. The result of a 1920x1080 screen capture downscaled to 1024x768 was not nice, to put it mildly. It not only made some text unreadable, but the difference in aspect ratios heavily distorted the image. For example, this made it impossible to reuse reference images made in other tests. I am considering enhancing openQA to support other resolutions too, but for now I have set the resolution on the tested system to 1024x768 (and used an EDID that lists that resolution).

Scaled-down screenshot
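The distortion is easy to quantify with quick integer arithmetic (an illustrative check, not from the original post): the two axes have to shrink by different factors.

```shell
# 1920x1080 is a 16:9 frame while openQA's 1024x768 canvas is 4:3, so a
# plain downscale squeezes width and height by different amounts:
echo "width shrinks to $((1024 * 100 / 1920))% of the original"
echo "height shrinks to $((768 * 100 / 1080))% of the original"
```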

On the test system, something needs to actually enable HDMI output. For this purpose, I passed a kickstart file to the Qubes OS installer that includes commands to execute before installation (the %pre section). While at it, I could use the same kickstart file for other test-related customizations, like restoring the default boot order at the end or enabling SSH access for collecting logs.

HID input

Recording video output is not everything. To run tests, one also needs to send commands to the system under test (SUT). This can be done in several ways, including via serial console and SSH connection. In order to have the most realistic setup, I chose to emulate USB input devices. With this, we could interact with the system in the same way a user would. To emulate USB input device(s), I used the Linux USB Gadget subsystem. To emulate HID devices, I had to prepare a HID descriptor — a description for the driver, listing what kind of device it is and what events it can send.
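The gadget itself is assembled through configfs. The following is a hedged sketch of that mechanism based on the kernel's gadget configfs and HID function documentation; the gadget name, vendor/product IDs, strings, and descriptor file names (keyboard.desc, pointer.desc) are illustrative, not the author's actual script. It requires root and a board with a USB device controller (UDC).

```shell
cd /sys/kernel/config/usb_gadget
mkdir g1 && cd g1
echo 0x1d6b > idVendor             # Linux Foundation
echo 0x0104 > idProduct            # Multifunction Composite Gadget
mkdir -p strings/0x409 configs/c.1/strings/0x409
echo "openQA input emulator" > strings/0x409/product
mkdir functions/hid.keyboard functions/hid.pointer
echo 1 > functions/hid.keyboard/subclass      # boot interface subclass
echo 1 > functions/hid.keyboard/protocol      # keyboard
echo 8 > functions/hid.keyboard/report_length
echo 5 > functions/hid.pointer/report_length  # buttons + 16-bit X + 16-bit Y
# report_desc receives the binary HID report descriptor for each function
cat keyboard.desc > functions/hid.keyboard/report_desc
cat pointer.desc  > functions/hid.pointer/report_desc
ln -s functions/hid.keyboard configs/c.1/
ln -s functions/hid.pointer  configs/c.1/
# Binding to a UDC makes the gadget appear on the connected host
ls /sys/class/udc | head -n 1 > UDC
```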

I wanted the device(s) to meet the following requirements:

  • Have two interfaces (which in practice is two separate HID devices): keyboard and pointer (mouse/tablet)
  • Be properly categorized by udev (so the input proxy picks it up properly)
  • Be properly categorized by Xorg
  • Support both absolute pointer position events (like “move mouse to a specific point” instead of “move mouse a bit to the right”) and normal mouse buttons

I searched for a descriptor meeting the above requirements. The one for keyboards is rather standard, but the one for mouse/tablet devices is not. So, I took the Device Class Definition for HID 1.11 together with HID Usage Tables 1.22 and crafted one myself. This was a bit of a challenge, because both udev and Xorg have a set of heuristics to categorize devices, and they differ in subtle ways.
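As an illustration of what such a descriptor looks like, here is a minimal absolute-pointer report descriptor assembled from the public HID 1.11 and Usage Tables documents. It is a hypothetical example, not the author's actual descriptor: three buttons plus 16-bit absolute X/Y, the combination discussed above. Bytes are written as octal escapes for portability; the meaning of each group is in the comments.

```shell
# Each report from this device: 1 button byte (3 buttons + 5 padding bits)
# followed by 16-bit absolute X and Y coordinates.
printf '\005\001\011\002\241\001\011\001\241\000' > pointer.desc  # Usage Page (Desktop), Usage (Mouse), Collection (App), Usage (Pointer), Collection (Phys)
printf '\005\011\031\001\051\003\025\000\045\001' >> pointer.desc # Usage Page (Buttons), Usages 1-3, Logical 0..1
printf '\225\003\165\001\201\002\225\001\165\005\201\003' >> pointer.desc # 3 button bits (Data,Var,Abs) + 5 constant padding bits
printf '\005\001\011\060\011\061\025\000\046\377\177' >> pointer.desc # Usage Page (Desktop), Usage X, Usage Y, Logical 0..32767
printf '\165\020\225\002\201\002\300\300' >> pointer.desc # two 16-bit absolute axes (Data,Var,Abs); End Collection x2
wc -c < pointer.desc   # prints 51, the descriptor length in bytes
```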

Then, I wrote a script that sets this all up and controls the device(s) according to what openQA requests.

The last detail is about connecting an RPi to the target system. The RPi4 has a single USB-C port used both for powering the RPi itself and for USB device emulation. Generally, this would be fine, with the exception that the target system is going to be disconnected from power from time to time. If the RPi were powered this way, it would lose power too, and there would be nothing capable of turning it back on. This is yet another case where the PiKVM project provided inspiration: a Y-split cable that connects the VBUS pin to only one end and the data pins to the other.

Serial console

Several openQA functions require some kind of console access. This includes retrieving command outputs (and exit codes), waiting for various events, etc. Unfortunately, a real serial console is very rare in modern laptops. I could restructure the tests not to use those functions, but that would be rather disappointing in terms of test result quality. As a solution, I added a small qrexec service in dom0 that reads a pipe that pretends to be a serial console, then I used qvm-connect-tcp in sys-net to redirect the TCP port to that service. This isn’t as reliable as a real serial console (especially for things like restarting sys-net), but it does work in the majority of cases. In the future, I will restructure the tests not to rely on this functionality in order to account for the few rare cases where it doesn’t work.

Bonus: remote-controlled test laptops for developers

Remote power and boot control is useful not only for automatic tests, but also for ordinary developers. There are several cases where it is useful:

  • Additional machines to develop and test features on different versions of Qubes
  • Access to specific hardware

I’ve prepared the whole setup to be usable not only with openQA, but also to allow for the delegation of specific test machines to individual trusted developers. More importantly, this allows not only for manually testing software on those machines, but also for automating several tasks, such as the Git bisection mentioned earlier.

Final thoughts

Testing a whole operating system is a challenging task, because there are a lot of moving parts. OpenQA is a great tool for that, but its main target is running tests in a virtualized environment. This works fine for several components (like Qubes Manager and GUI virtualization) but not for hardware-related features (sys-net, sys-usb, system suspend, and several others). Before this work, we ran tests on actual laptops manually, but that was time-consuming and thus not all updates or configurations were tested. Automation allows our testing to be much more comprehensive, including ensuring ongoing compatibility with Qubes OS certified hardware.

05 May, 2022 12:00AM

May 04, 2022

hackergotchi for Purism PureOS

Purism PureOS

Why I Support Purism, A Tech Company that Respects Digital Rights

I started working with personal computers over 40 years ago, back when an IBM desktop computer with 64KB of RAM and two 360KB floppy disks cost almost CAD$10,000. I bought the first Macintosh computer sold in Canada from the Hudson’s Bay Company in Montreal. I bought the first iPhone in Buffalo on the day the […]

The post Why I Support Purism, A Tech Company that Respects Digital Rights appeared first on Purism.

04 May, 2022 05:54PM by Ben Trister

hackergotchi for VyOS


Which VPN protocol to use

Hello Community!

A question that often comes up in networking software discussions is "which VPN protocol should I use?" followed by "why should I use that one when there are so many others?" I'm going to try to tackle these questions in lay terms here.

04 May, 2022 09:45AM by Erkin Batu Altunbas

hackergotchi for Ubuntu developers

Ubuntu developers

Riccardo Padovani: Why you should contribute to GitLab

Contributing to any open-source project is a great way to spend a few hours each month. I started more than 10 years ago, and it has ultimately shaped my career in ways I couldn’t have imagined!

GitLab logo as cover

The new GitLab logo, just announced on the 27th April 2022.

Nowadays, my contributions focus mostly on GitLab, so you will see many references to it in this blog post, but the content is quite generalizable; I would like to share my experience to highlight why you should consider contributing to an open-source project.

Writing blog posts, tweeting, and helping foster the community are nice ways to contribute to a project ;-)

And contributing doesn’t mean only coding: there are countless ways to help an open-source project: translating it to different languages, reporting issues, designing new features, writing the documentation, offering support on forums and StackOverflow, and so on.

Before diving into this wall of text, be aware that this blog post has three main parts after this introduction: a context section, where I describe my personal experience with open source and what it means to me; then, a list of reasons to contribute to any project; and in closing, some tips on how to start contributing, both in general and specific to GitLab.


Ten years ago, I was fresh out of high school, with (almost) no knowledge of IT. However, I found I had a massive passion for it, so I enrolled in a Computer Engineering degree (boring!) and started contributing to Ubuntu (cool!). I began with the Italian Local Community and soon moved on to Ubuntu Touch.

I have often considered rewriting that old article; however, I have a strange attachment to it as it is, with all its English mistakes. It was one of the first blog posts I ever wrote, and it was really well received! We all know how it ended, but it was still a fantastic ride, with a lot of great moments: just take a look at the archive of this blog, and you can see the passion and enthusiasm I had. I was so enthusiastic that I wrote a blog post similar to this one! I think it highlights really well the considerable differences 10 years make.

Back then, I wasn't working, just studying, so I had a lot of spare time. My English was way worse. I was at the beginning of my journey in the computer world, and Ubuntu has ultimately shaped a big part of it. My knowledge was very limited, and I had never worked before. Contributing to Ubuntu gave me a glimpse of the real world: I met outstanding engineers who taught me a lot, and it boosted my CV, helping me land my first job.

Advocacy, as in this blog post, is a great way to contribute! You spread awareness, which helps find new contributors, and maybe inspires some young student to give it a try! Since then, I have completed a master's degree in C.S., worked at different companies in three different countries, and become a professional. Nowadays, my contributions to open source are more sporadic (adulthood, yay), but given how much it has meant to me, I am still a big fan, and I try to contribute when and how I can.

Why contributing


During my years contributing to open-source software, I've met countless incredible people, some of whom I've become friends with. In the old blog post I mentioned David: over the last 9 years we have stayed in touch and met on different occasions in different cities, most recently last summer. Back then, he was a manager in the Ubuntu Community Team at Canonical; he later became Director of Community Relations at GitLab. Small world!

The Ubuntu Touch Community Team in Malta, in 2014

The Ubuntu Touch Community Team in Malta, in 2014. It has been an incredible week, sponsored by Canonical!

One interesting thing is that people contribute to open-source projects from their homes, all around the world: when I travel, I usually know somebody living in my destination city, so I always have at least one night booked for a beer with somebody I've met only online. It's a pleasure to speak with people from different backgrounds and get a glimpse into their lives, all united by one common passion.


Having fun is important! You cannot spend your leisure time getting bored or annoyed: contributing to open source is fun 'cause you pick the problems you would like to work on, and you skip all the bureaucracy and meetings that are often needed in your daily job. You can be challenged, feel useful, and improve a product, without any manager on your shoulder, and at your own pace.

Being up-to-date on how things evolve

For example, the GitLab Handbook is a precious collection of resources, ideas, and methodologies on how to run a 1,000-person company in a transparent, fully remote way. It's a great read, with a lot of wisdom.

Contributing to a project typically gives you an idea of how the teams behind it work, and which technologies and methodologies they use. Many open-source projects use bleeding-edge technologies, or chart the path for others. Being in contact with new ideas is a great way to know where the industry is headed and what the latest news is: this is especially true if you hang out in the channels where the community meets, be it Discord, forums, or IRC (well, IRC is not really bleeding-edge, but it is fun).


When contributing in an area that doesn't match your expertise, you always learn something new: reviews are usually precise and on point, and projects of a remarkable size commonly have a coaching team that helps you start contributing and guides you toward landing your first patches.

In GitLab, if you need help merging your code, there are the Merge Request Coaches! And for any type of help, you can always join Gitter, ask on the forum, or write to the dedicated email address.

Feel also free to ping me directly if you want some general guidance!

Giving back

I work as a Platform Engineer. My job is built on an incredible number of open-source libraries and amazing FOSS services; I basically just have to glue the different pieces together. When I find some rough edge that could be improved, I try to improve it.

Nowadays, I find well-maintained documentation crucial, so after I have achieved something complex, I usually go back and try to improve the documentation where it is lacking. It is my tiny way of saying thanks and giving back to a world that has really shaped my career.

This is also what most of my blog posts are about: after completing something I struggled with, I find it nice to be able to share that information. Every so often, I find myself years later following my own guide, and I also really appreciate it when other people find the content useful.


Who doesn't like swag? :-) Numerous projects have delightful swag, starting with stickers, that they like to share with the whole community. Of course, it shouldn't be your main driver, 'cause you will soon notice that it is ultimately not worth the amount of time you spend contributing, but it is charming to have GitLab socks!

A GitLab branded mechanical keyboard

A GitLab branded mechanical keyboard, courtesy of the GitLab's security team! This very article has been typed with it!


I hope I inspired you to contribute to some open-source project (maybe GitLab!). Now, let’s talk about some small tricks on how to begin easily.

Find something you are passionate about

You must find a project you are passionate about, and that you use frequently. Looking forward to a release, knowing that your contributions will be included, is wonderfully satisfying and can really push you to do more.

Moreover, if you already know the project you want to contribute to, you probably already know the biggest pain points and where the project needs contributions.

Start small and easy

You don't need to make gigantic contributions to begin. Find something tiny, so you can get familiar with the project's workflows and how contributions are received.

Launchpad and Bazaar instead of GitLab and Git — a trip down memory lane! My journey with Ubuntu started with correcting a typo in a README, and here I am, years later, having contributed to dozens of projects and built a career in the C.S. field. Back then, I really had no idea what my future would hold.

For GitLab, you can take a look at the issues marked as “good for new contributors”. They are designed to be addressed quickly, and onboard new people in the community. In this way, you don’t have to focus on the difficulties of the task at hand, but you can easily explore how the community works.

Writing issues is a good start

Writing high-quality issues is a great way to start contributing: maintainers of a project are not always aware of how the software is used, and cannot be aware of all the issues. If you know that something could be improved, write it down: spend some time explaining what happens, what you expect, and how to reproduce the problem, and maybe suggest some solutions as well! Perhaps the first issue you write down could be the very first issue you resolve.

Not much time required!

Contributing to a project doesn't necessarily require a lot of time. When I was younger, I definitely dedicated way more time to open-source projects, implementing gigantic features. Nowadays, I don't do that anymore (life is much more than computers), but I like to think that my contributions are still useful. Still, I don't spend more than a couple of hours a month, depending on my schedule and how much it rains (yep, in winter I definitely contribute more than in summer).

GitLab is super easy

Do you use GitLab? Then you should undoubtedly try to contribute to it. It is easy, it is fun, and there are many ways. Take a look at this guide, hang out on Gitter, and see you around. ;-)

Next week (9th-13th May 2022) there is also a GitLab Hackathon! It is a really fun and easy way to start contributing: many people are available to help you, there are video sessions about contributing, and for even a small contribution you will receive a nice prize.

And if I was able to do it with my few contributions, you can as well! And in time, if you are consistent in your contributions, you can become a GitLab Hero! How cool is that?

I really hope this wall of text made you consider contributing to an open-source project. If you have any question, or feedback, or if you would like some help, please leave a comment below, tweet me @rpadovani93 or write me an email at


04 May, 2022 12:00AM

May 03, 2022

hackergotchi for Tails


Tails 5.0 is out

We are especially proud to present Tails 5.0, the first version of Tails based on Debian 11 (Bullseye). It brings new versions of much of the software included in Tails, as well as new OpenPGP tools.

New features


We added Kleopatra to replace the OpenPGP Applet and the Password and Keys utility, also known as Seahorse.

The OpenPGP Applet was no longer actively developed and was complicated for us to keep in Tails. The Password and Keys utility was also poorly maintained, and Tails users have suffered from too many of its issues until now, like #17183.

Kleopatra provides equivalent features in a single tool and is more actively developed.

Changes and updates

  • The Additional Software feature of the Persistent Storage is enabled by default to make it faster and more robust to configure your first additional software package.

  • You can now use the Activities overview to access your windows and applications. To access the Activities overview, you can either:

    • Click on the Activities button.
    • Throw your mouse pointer to the top-left hot corner.
    • Press the Super () key on your keyboard.

    You can see your windows and applications in the overview. You can also start typing to search your applications, files, and folders.

Included software

Most included software has been upgraded in Debian 11, for example:

  • Update Tor Browser to 11.0.11.

  • Update GNOME from 3.30 to 3.38, with lots of small improvements to the desktop, the core GNOME utilities, and the locking screen.

  • Update MAT from 0.8 to 0.12, which adds support to clean metadata from SVG, WAV, EPUB, PPM, and Microsoft Office files.

  • Update Audacity from 2.2.2 to 2.4.2.

  • Update Disk Utility from 3.30 to 3.38.

  • Update GIMP from 2.10.8 to 2.10.22.

  • Update Inkscape from 0.92 to 1.0.

  • Update LibreOffice from 6.1 to 7.0.

Hardware support

  • The new support for driverless printing and scanning in Linux makes it easier to make recent printers and scanners work in Tails.

Fixed problems

  • Fix unlocking VeraCrypt volumes that have very long passphrases. (#17474)

For more details, read our changelog.

Known issues

  • Additional Software sometimes doesn't work when restarting for the first time right after creating a Persistent Storage. (#18839)

    To solve this, install the same additional software package again after restarting with the Persistent Storage for the first time.

  • Thunderbird displays a popup to choose an application when opening links. (#18913)

  • Tails Installer sometimes fails to clone. (#18844)

See the list of long-standing issues.

Get Tails 5.0

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are not available to 5.0.

    All users have to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.0 directly:

What's coming up?

Tails 5.1 is scheduled for May 31.

Have a look at our roadmap to see where we are heading to.

03 May, 2022 12:34PM

May 02, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Serge Hallyn: Openconnect (anyconnect) on Ubuntu Jammy

Sorry, I should have posted this weeks ago to save others some time.

If you are running openconnect-sso to connect to a Cisco anyconnect VPN, then when you upgrade to Ubuntu Jammy, openssl 3.0 may stop openconnect from working. The easiest way to work around this is to use a custom configuration file as follows:

cat > $HOME/ssl.cnf << EOF
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
EOF

Then use this configuration file (only) when running openconnect:

OPENSSL_CONF=~/ssl.cnf openconnect-sso

02 May, 2022 02:39PM

Sebastian Dröge: Instantaneous RTP synchronization & retrieval of absolute sender clock times with GStreamer

Over the last few weeks, GStreamer's RTP stack got a couple of new and quite useful features. As this is difficult to configure, mostly because there are so many different possible configurations, I decided to write about it a bit with some example code.

The features are RFC 6051-style rapid synchronization of RTP streams, which can be used for inter-stream (e.g. audio/video) synchronization as well as inter-device (i.e. network) synchronization, and the ability to easily retrieve absolute sender clock times per packet on the receiver side.

Note that each of these was already possible with GStreamer via different mechanisms with different trade-offs. Obviously, not having working audio/video synchronization would simply be unacceptable, and I have talked about how to do inter-device synchronization with GStreamer before, for example at the GStreamer Conference 2015 in Düsseldorf.

The example code below makes use of the GStreamer RTSP Server library but can be applied to any kind of RTP workflow, including WebRTC. It is written in Rust, but the same can also be achieved in any other language. The full code can be found in this repository.

And for reference, the merge requests to enable all this are [1], [2] and [3]. You probably don’t want to backport those to an older version of GStreamer though as there are dependencies on various other changes elsewhere. All of the following needs at least GStreamer from the git main branch as of today, or the upcoming 1.22 release.

Baseline Sender / Receiver Code

The starting point of the example code can be found here in the baseline branch. All the important steps are commented so it should be relatively self-explanatory.


The sender is starting an RTSP server on the local machine on port 8554 and provides a media with H264 video and Opus audio on the mount point /test. It can be started with

$ cargo run -p rtp-rapid-sync-example-send

After starting the server it can be accessed via GStreamer with e.g. gst-play-1.0 rtsp:// or similarly via VLC or any other software that supports RTSP.

This does not do anything special yet but lays the foundation for the following steps. It creates an RTSP server instance with a custom RTSP media factory, which in turn creates custom RTSP media instances. All this is not needed at this point yet but will allow for the necessary customization later.

One important aspect here is that the base time of the media's pipeline is set to zero.


This allows the timeoverlay element that is placed in the video part of the pipeline to render the clock time over the video frames. We’re going to use this later to confirm on the receiver that the clock time on the sender and the one retrieved on the receiver are the same.

let video_overlay = gst::ElementFactory::make("timeoverlay", None)
    .context("Creating timeoverlay")?;
video_overlay.set_property_from_str("time-mode", "running-time");

It actually only supports rendering the running time of each buffer, but in a live pipeline with the base time set to zero the running time and pipeline clock time are the same. See the documentation for some more details about the time concepts in GStreamer.

Overall this creates the following RTSP stream producer bin, which will be used also in all the following steps:


The receiver is a simple playbin pipeline that plays an RTSP URI given via command-line parameters and runs until the stream is finished or an error has happened.

It can be run with the following once the sender is started

$ cargo run -p rtp-rapid-sync-example-send -- "rtsp://"

Please don’t forget to replace the IP with the IP of the machine that is actually running the server.

All the code should be familiar to anyone who ever wrote a GStreamer application in Rust, except for one part that might need a bit more explanation

playbin.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
    }),
);

playbin is going to create an rtspsrc, and at that point it will emit the source-setup signal so that the application can do any additional configuration of the source element. Here we’re connecting a signal handler to that signal to do exactly that.

By default rtspsrc introduces 2 seconds of latency, which is a lot more than what is usually needed. For live, non-VOD RTSP streams this value should be around the network jitter, so here we're configuring it to 40 milliseconds.

Retrieval of absolute sender clock times

Now as the first step we’re going to retrieve the absolute sender clock times for each video frame on the receiver. They will be rendered by the receiver at the bottom of each video frame and will also be printed to stdout. The changes between the previous version of the code and this version can be seen here and the final code here in the sender-clock-time-retrieval branch.

When running the sender and receiver as before, the video from the receiver should look similar to the following

The upper time that is rendered on the video frames is rendered by the sender, the bottom time is rendered by the receiver and both should always be the same unless something is broken here. Both times are the pipeline clock time when the sender created/captured the video frame.

In this configuration the absolute clock times of the sender are provided to the receiver via the NTP / RTP timestamp mapping provided by the RTCP Sender Reports. That’s also the reason why it takes about 5s for the receiver to know the sender’s clock time as RTCP packets are not scheduled very often and only after about 5s by default. The RTCP interval can be configured on rtpbin together with many other things.


On the sender-side the configuration changes are rather small and not even absolutely necessary.

rtpbin.set_property_from_str("ntp-time-source", "clock-time");

By default the RTP NTP time used in the RTCP packets is based on the local machine’s walltime clock converted to the NTP epoch. While this works fine, this is not the clock that is used for synchronizing the media and as such there will be drift between the RTP timestamps of the media and the NTP time from the RTCP packets, which will be reset every time the receiver receives a new RTCP Sender Report from the sender.

Instead, we configure rtpbin here to use the pipeline clock as the source for the NTP timestamps used in the RTCP Sender Reports. This doesn’t give us (by default at least, see later) an actual NTP timestamp but it doesn’t have the drift problem mentioned before. Without further configuration, in this pipeline the used clock is the monotonic system clock.

rtpbin.set_property("rtcp-sync-send-time", false);

rtpbin normally uses the time when a packet is sent out for the NTP / RTP timestamp mapping in the RTCP Sender Reports. This is changed with this property to instead use the time when the video frame / audio sample was captured, i.e. it does not include all the latency introduced by encoding and other processing in the sender pipeline.

This doesn’t make any big difference in this scenario but usually one would be interested in the capture clock times and not the send clock times.


On the receiver-side there are a few more changes. First of all we have to opt-in to rtpjitterbuffer putting a reference timestamp metadata on every received packet with the sender’s absolute clock time.

playbin.connect_closure(
    "source-setup",
    false,
    glib::closure!(|_playbin: &gst::Pipeline, source: &gst::Element| {
        source.set_property("latency", 40u32);
        source.set_property("add-reference-timestamp-meta", true);
    }),
);

rtpjitterbuffer will start putting the metadata on packets once it knows the NTP / RTP timestamp mapping, i.e. after the first RTCP Sender Report is received in this case. Between the Sender Reports it is going to interpolate the clock times. The normal timestamps (PTS) on each packet are not affected by this and are still based on whatever clock is used locally by the receiver for synchronization.

To actually make use of the reference timestamp metadata we add a timeoverlay element as video-filter on the receiver:

let timeoverlay =
    gst::ElementFactory::make("timeoverlay", None).context("Creating timeoverlay")?;

timeoverlay.set_property_from_str("time-mode", "reference-timestamp");
timeoverlay.set_property_from_str("valignment", "bottom");

pipeline.set_property("video-filter", &timeoverlay);

This will then render the sender’s absolute clock times at the bottom of each video frame, as seen in the screenshot above.

And last we also add a pad probe on the sink pad of the timeoverlay element to retrieve the reference timestamp metadata of each video frame and then printing the sender’s clock time to stdout:

let sinkpad = timeoverlay
    .static_pad("sink")
    .expect("Failed to get timeoverlay sinkpad");
sinkpad
    .add_probe(gst::PadProbeType::BUFFER, |_pad, info| {
        if let Some(gst::PadProbeData::Buffer(ref buffer)) = info.data {
            if let Some(meta) = buffer.meta::<gst::ReferenceTimestampMeta>() {
                println!("Have sender clock time {}", meta.timestamp());
            } else {
                println!("Have no sender clock time");
            }
        }

        gst::PadProbeReturn::Ok
    })
    .expect("Failed to add pad probe");

Rapid synchronization via RTP header extensions

The main problem with the previous code is that the sender’s clock times are only known once the first RTCP Sender Report is received by the receiver. There are many ways to configure rtpbin to make this happen faster (e.g. by reducing the RTCP interval or by switching to the AVPF RTP profile) but in any case the information would be transmitted outside the actual media data flow and it can’t be guaranteed that it is actually known on the receiver from the very first received packet onwards. This is of course not a problem in every use-case, but for the cases where it is there is a solution for this problem.

RFC 6051 defines an RTP header extension that allows transmitting the NTP timestamp that corresponds to an RTP packet directly together with that very packet. And that’s what the next changes to the code make use of.

The changes between the previous version of the code and this version can be seen here and the final code here in the rapid-synchronization branch.


To add the header extension on the sender-side it is only necessary to add an instance of the corresponding header extension implementation to the payloaders.

let hdr_ext = gst_rtp::RTPHeaderExtension::create_from_uri(
    "urn:ietf:params:rtp-hdrext:ntp-64",
)
.context("Creating NTP 64-bit RTP header extension")?;
hdr_ext.set_id(1);
video_pay.emit_by_name::<()>("add-extension", &[&hdr_ext]);

This first instantiates the header extension based on the uniquely defined URI for it, then sets its ID to 1 (see RFC 5285) and then adds it to the video payloader. The same is then done for the audio payloader.

By default this will add the header extension to every RTP packet that has a different RTP timestamp than the previous one. In other words: on the first packet that corresponds to an audio or video frame. Via properties on the header extension this can be configured but generally the default should be sufficient.


On the receiver-side no changes would actually be necessary. The use of the header extension is signaled via the SDP (see RFC 5285) and it will be automatically made use of inside rtpbin as another source of NTP / RTP timestamp mappings in addition to the RTCP Sender Reports.

However, we configure one additional property on rtpbin, via the new-manager signal of rtspsrc:

source.connect_closure(
    "new-manager",
    false,
    glib::closure!(|_rtspsrc: &gst::Element, rtpbin: &gst::Element| {
        rtpbin.set_property("min-ts-offset", gst::ClockTime::from_mseconds(1));
    }),
);

Inter-stream audio/video synchronization

The reason for configuring the min-ts-offset property on the rtpbin is that the NTP / RTP timestamp mapping is not only used for providing the reference timestamp metadata but it is also used for inter-stream synchronization by default. That is, for getting correct audio / video synchronization.

With RTP alone there is no mechanism to synchronize multiple streams against each other, as the RTP timestamps of different streams have no correlation to each other. This is not too much of a problem, as usually the packets for audio and video are received approximately at the same time, but there’s still some inaccuracy in there.

One approach to fix this is to use the NTP / RTP timestamp mapping for each stream, either from the RTCP Sender Reports or from the RTP header extension, and that’s what is made use of here. And because the mapping is provided very often via the RTP header extension but the RTP timestamps are only accurate up to the clock rate (1/90000 s for video and 1/48000 s for audio in this case), we configure a threshold of 1 ms for adjusting the inter-stream synchronization. Without this it would be adjusted almost continuously by a very small amount back and forth.

Other approaches for inter-stream synchronization are provided by RTSP itself before streaming starts (via the RTP-Info header), but due to a bug this is currently not made use of by GStreamer.

Yet another approach would be via the clock information provided by RFC 7273, about which I already wrote previously and which is also supported by GStreamer. This also allows inter-device, network synchronization and is used for that purpose as part of e.g. AES67, Ravenna, SMPTE 2022 / 2110 and many other protocols.

Inter-device network synchronization

Now for the last part, we’re going to add actual inter-device synchronization to this example. The changes between the previous version of the code and this version can be seen here and the final code here in the network-sync branch. This does not use the clock information provided via RFC 7273 (which would be another option) but uses the same NTP / RTP timestamp mapping that was discussed above.

When starting the receiver multiple times on different (or the same) machines, each of them should play back the media synchronized to each other and exactly 2 seconds after the corresponding audio / video frames are produced on the sender.

For this, both the sender and all receivers use an NTP clock (a public NTP server in this case) instead of the local monotonic system clock for media synchronization (i.e. as the pipeline clock). Instead of an NTP clock it would also be possible to use any other mechanism for network clock synchronization, e.g. PTP or the GStreamer netclock.

let clock = gst_net::NtpClock::new(None, "", 123, gst::ClockTime::ZERO);

println!("Syncing to NTP clock");
clock
    .wait_for_sync(gst::ClockTime::from_seconds(5))
    .context("Syncing NTP clock")?;
println!("Synced to NTP clock");

This code instantiates a GStreamer NTP clock and then synchronously waits up to 5 seconds for it to synchronize. If that fails then the application simply exits with an error.


On the sender side all that is needed is to configure the RTSP media factory, and as such the pipeline used inside it, to use the NTP clock:

factory.set_clock(Some(&clock));

This causes all media inside the sender’s pipeline to be synchronized according to this NTP clock and to also use it for the NTP timestamps in the RTCP Sender Reports and the RTP header extension.


On the receiver side the same has to happen:

pipeline.use_clock(Some(&clock));

In addition a couple more settings have to be configured on the receiver though. First of all we configure a static latency of 2 seconds on the receiver’s pipeline:

pipeline.set_latency(gst::ClockTime::from_seconds(2));

This is necessary as GStreamer can’t know the latency of every receiver (e.g. different decoders might be used), and also because the sender latency can’t be automatically known. Each audio / video frame will be timestamped on the receiver with the NTP timestamp when it was captured / created, but since then the latency of the sender, the network and the receiver pipeline has passed, and this must be compensated for.

Which value to use here depends a lot on the overall setup, but 2 seconds is a (very) safe guess in this case. The value only has to be larger than the sum of sender, network and receiver latency and in the end has the effect that the receiver is showing the media exactly that much later than the sender has produced it.
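The constraint above can be restated compactly (this is just the text’s condition in symbols, not a formula from the original post): with configured pipeline latency $L$ and sender, network and receiver latencies $L_s$, $L_n$ and $L_r$,

```latex
L > L_s + L_n + L_r, \qquad t_{\mathrm{play}} = t_{\mathrm{capture}} + L
```

With $L = 2\,\mathrm{s}$, every receiver therefore renders each frame exactly two seconds after it was captured on the sender.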

And last we also have to tell rtpbin that

  1. sender and receiver clock are synchronized to each other, i.e. in this case both are using exactly the same NTP clock, and that no translation to the pipeline’s clock is necessary, and
  2. that the outgoing timestamps on the receiver should be exactly the sender timestamps and that this conversion should happen based on the NTP / RTP timestamp mapping

source.set_property_from_str("buffer-mode", "synced");
source.set_property("ntp-sync", true);

And that’s it.

A careful reader will also have noticed that all of the above would also work without the RTP header extension, but then the receivers would only be synchronized once the first RTCP Sender Report is received. That’s what the test-netclock.c / test-netclock-client.c example from the GStreamer RTSP server is doing.

As usual with RTP, the above is by far not the only way of doing this and GStreamer also supports various other synchronization mechanisms. Which one is the correct one for a specific use-case depends on a lot of factors.

02 May, 2022 01:00PM

hackergotchi for SparkyLinux


Sparky news 2022/04

The 4th monthly Sparky project and donate report of 2022:

– Linux kernel updated up to 5.17.5 & 5.18-rc3
– Added to repos: BadWolf web browser, Crow Translate
– Sparky 2022.04 & 2022.04 Special Editions of the rolling line released
– Added an option to support Sparky by sending donations via Bitcoin (BTC)

Many thanks to all of you for supporting our open-source projects. Your donations help keep them alive.

Don’t forget to send a small tip in May too, please.

Antoine B.
€ 15
Terry C.
€ 9.06
Krzysztof M.
PLN 50
Andrzej T.
PLN 100
Krzysztof S.
PLN 80
Olaf T.
€ 10
Mitchel V.
$ 150
Tom C.
$ 15
Keith K.
$ 10
Tomasz W.
PLN 110
Andrzej P.
PLN 10
Frank M.
€ 20
Karl A.
€ 1.66
Rafał Z.
PLN 25
Marek B.
PLN 10
Rudolf L.
€ 10
Alexander F.
€ 10
Stanisław G.
PLN 40
Aymeric L.
€ 10
Jorg S.
€ 5
Mateusz G.
PLN 20
Dariusz M.
€ 10
Ryan S.
€ 25
Costa Rica
Michael S.
€ 10
Sebastian K.
€ 10
Maciej P.
PLN 22
Mariusz S.
PLN 123
United Kingdom
Aggeusz K.
€ 10
Ralf A.
€ 15
64 %
€ 170.72
PLN 590
$ 175

* Keep in mind that some amounts coming to us will be reduced by commissions for online payment services. Only direct sending donations to our bank account will be credited in full.


02 May, 2022 09:33AM by pavroo

hackergotchi for Purism PureOS

Purism PureOS

Improving the Stability and Reliability with a Modular Modem in the Librem 5

Usually we can fully rely on our phones to be reachable at any time—given cellular reception of course—and we take that for granted. You surely know situations in your life where that becomes especially critical. Be it when you’re expecting an important call or when you need to be able to receive “emergency” calls in […]

The post Improving the Stability and Reliability with a Modular Modem in the Librem 5 appeared first on Purism.

02 May, 2022 03:32AM by Martin Kepplinger

April 29, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Magazine #180 – 15th Anniversary issue!

This month:
Command & Conquer
* How-To : Python, Blender and Latex
* Graphics : Inkscape
* Everyday Ubuntu : KDE Science
Micro This Micro That
* Review : CutefishOS
* Review : FreeOffice 2021
* My Opinion : First Look At Ubuntu 22.04
Ubports Touch
* Ubuntu Games : Growbot
plus: News, My Story, The Daily Waddle, Q&A, and more.


Get it while it’s hot!

29 April, 2022 07:35PM

hackergotchi for Purism PureOS

Purism PureOS

Privacy Washing: Do As I Say, Not As I Do

People care about their privacy. Some have doubted this in the past, pointing to the amount of personal information people willingly shared, often in exchange for free software or services. Yet I’ve long thought that many people simply were not aware of the privacy implications of sharing their data and how it could be misused […]

The post Privacy Washing: Do As I Say, Not As I Do appeared first on Purism.

29 April, 2022 03:16PM by Kyle Rankin

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Purging inactive accounts with no posts

We'll shortly delete all forum accounts that match the following criteria:

We'll delete all accounts which both have zero posts and have not been logged in to after 2021-04-01.

So users who have posted here at least once will not be affected in any way. Deleting the accounts means getting rid of any and all data relating to the registration --- it'll be as if the account had never been registered, and there will be absolutely no way to restore it.

The purpose of this exercise is to get rid of personally identifiable information associated with these dormant accounts such as email addresses and nicknames: Passive readers can always view the boards anonymously without signing up, and the best data protection for user data is simply not having any data on file. Also, if the user is not using the forums (as indicated by the 1+ year absence), we shouldn't continue storing the data as there doesn't seem to be a reason for that anymore (discontinued usage). In the future, purges like this are going to happen regularly, maybe every 3 or every 6 months.

29 April, 2022 12:00AM

April 26, 2022

hackergotchi for Purism PureOS

Purism PureOS

How to Power Your CS Labs with PureOS

With PureOS and Librem hardware, you can build a premium CS lab without premium licensing fees. Using community-driven freedom-respecting software, schools can take learning beyond the classroom, into students’ homes, and ultimately into the industry. Let’s learn how Free and Open Software like PureOS is a perfect choice for educational institutions. Avoid Recurring Costs for Licensing Many schools are […]

The post How to Power Your CS Labs with PureOS appeared first on Purism.

26 April, 2022 05:47PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu MATE: Ubuntu MATE 22.04 LTS Release Notes

Ubuntu MATE 22.04 LTS is the culmination of 2 years of continual improvement 😅 to Ubuntu and MATE Desktop. As is tradition, the LTS development cycle has a keen focus on eliminating paper 🧻 cuts 🔪 but we’ve jammed in some new features and a fresh coat of paint too 🖌 The following is a summary of what’s new since Ubuntu MATE 21.10 and some reminders of how we got here from 20.04. Read on to learn more 🧑‍🎓

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this LTS release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! Thank you all for getting out there and making a difference! 💚

Ubuntu MATE 22.04 LTS Ubuntu MATE 22.04 LTS (Jammy Jellyfish) - Mutiny layout with Yark-MATE-dark

What’s changed?

Here are the highlights of what’s changed recently.

MATE Desktop 1.26.1 🧉

Ubuntu MATE 22.04 features MATE Desktop 1.26.1. MATE Desktop 1.26.0 was introduced in 21.10 and benefited from a significant effort 😅 to fix bugs 🐛 in MATE Desktop, optimise performance ⚡ and plug memory leaks. MATE Desktop 1.26.1 addresses the bugs we discovered following the initial 1.26.0 release. Our community also fixed some bugs in Plank and Brisk Menu 👍 and fixed the screen reader during installs for visually impaired users 🥰 In all, over 500 bugs have been addressed in this release 🩹

Yaru 🎨

Ubuntu MATE 21.04 was the first release to ship with a MATE variant of the Yaru theme. A year later and we’ve been working hard with members of the Yaru and Ubuntu Desktop teams to bring full MATE compatibility to upstream Yaru, including all the accent colour varieties. All reported bugs 🐞 in the Yaru implementation for MATE have also been fixed 🛠

Yaru Themes Yaru Themes in Ubuntu MATE 22.04 LTS

Ubuntu MATE 22.04 LTS ships with all the Yaru themes, including our own “chelsea cucumber” version 🥒 The legacy Ambiant/Radiant themes are no longer installed by default and neither are the stock MATE Desktop themes. We’ve added an automatic settings migration to transition users who upgrade to an appropriate Yaru MATE theme.

Cherries on top 🍒

In collaboration with Paul Kepinski 🇫🇷 (Yaru team) and Marco Trevisan 🇮🇹 (Ubuntu Desktop team) we’ve added dark/light panels and panel icons to Yaru for MATE Desktop and Unity. I’ve added a collection of new dark/light panel icons to Yaru for popular apps with indicators such as Steam, Dropbox, uLauncher, RedShift, Transmission, Variety, etc.

Light Panel Dark Panel Light and Dark panels

I’ve added patches 🩹 to the Appearance Control Center that apply theme changes to Plank (the dock) and Pluma (text editor), and correctly toggle the colour scheme preference for GNOME 42 apps. When you choose a dark theme, everything will go dark in unison 🥷 and vice versa.

So, Ubuntu MATE 22.04 LTS is now using everything Yaru/Suru has to offer. 🎉

AI Generated wallpapers

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London managing the Apocrita HPC cluster service. He’s been creating AI 🤖 generated art using bleeding edge CLIP guided diffusion models 🖌 The results are pretty incredible and we’ve included the 3 top voted “Jammy Jellyfish” in our wallpaper selection as their vivid and vibrant styles complement the Yaru accent colour theme options very nicely indeed 😎

If you want the complete set, here’s a tarball of all 8 wallpapers at 3840x2160:

Ubuntu MATE stuff 🧉

Ubuntu MATE has a few distinctive apps and integrations of its own; here’s a run down of what’s new and shiny ✨

MATE Tweak

Switching layouts with MATE Tweak is its most celebrated feature. We’ve improved the reliability of desktop layout switching and restoring custom layouts is now 100% accurate 💯

Ubuntu MATE Desktop Layouts Having your desktop your way in Ubuntu MATE

We’ve removed mate-netbook from the default installation of Ubuntu MATE and as a result the Netbook layout is no longer available. We did this because mate-maximus, a component of mate-netbook, is the cause of some compatibility issues with client side decorated (CSD) windows. There are still several panel layouts that offer efficient resolution use 📐 for those who need it.

MATE Tweak has refreshed its support for 3rd party compositors. Support for Compton has been dropped, as it is no longer actively maintained, and comprehensive support for picom has been added. picom has three compositor options: Xrender, GLX and Hybrid. All three can be selected via MATE Tweak, as the performance and compatibility of each varies depending on your hardware. Some people choose picom because they get better gaming performance or reduced screen tearing. Some just like the subtle animation effects picom adds 💖


Recent versions of rofi, the tool used by MATE HUD to visualise menu searches, have a new theme system. MATE HUD has been updated to support this new theme engine and comes with two MATE-specific themes (mate-hud and mate-hud-rounded) that automatically adapt to match the currently selected GTK theme.

You can add your own rofi themes to ~/.local/share/rofi/themes. Should you want to, you can use any rofi theme in MATE HUD. Use Alt + F2 to run rofi-theme-selector to try out the different themes, and if there is one you prefer you can set it as default by running the following in a terminal:

gsettings set org.mate.hud rofi-theme <theme name>

MATE HUD MATE HUD uses the new rofi theme engine

Windows & Shadows

I’ve updated the Metacity/Marco (the MATE Window Manager) themes in Yaru to make sure they match GNOME/CSD/Handy windows for a consistent look and feel across all window types 🪟 and 3rd party compositors like picom. I even patched how Marco and picom render shadows so windows look cohesive regardless of the toolkit or compositor being used.

Ubuntu MATE Welcome & Boutique

The Software Boutique has been restocked with software for 22.04 and Firefox 🔥🦊 ESR (.deb) has been added to the Browser Ballot in Ubuntu MATE Welcome.

Ubuntu MATE Welcome Browser Ballot Comprehensive browser options just a click away

41% less fat 🍩

Ubuntu MATE, like its lead developer, was starting to get a bit large around the mid-section 😊 During the development of 22.04, the image 📀 got to 4.1GB 😮

So, we put Ubuntu MATE on a strict diet 🥗 We’ve removed the proprietary NVIDIA drivers from the local apt pool on the install media, migrated fully to Yaru (which now features excellent de-duplication of icons) and removed our legacy themes/icons. And now that the Yaru-MATE themes/icons are completely in upstream Yaru, we were able to remove 3 snaps from the default install. The image is now a much more reasonable 2.7GB; 41% smaller. 🗜

This is important to us, because the majority of our users are in countries where Internet bandwidth is not always plentiful. Those of you with NVIDIA GPUs, don’t worry. If you tick the 3rd party software and drivers during the install the appropriate driver for your GPU will be downloaded and installed 👍

Install 3rd party drivers NVIDIA GPU owners should tick Install 3rd party software and drivers during install

While investigating 🕵 a bug in Xorg Server that caused Marco (the MATE window manager) to crash, we discovered that Marco has lower frame time latency ⏱ when using Xrender with the NVIDIA proprietary drivers. We’ve published a PPA where NVIDIA GPU users can install a version of Marco that uses Xpresent for optimal performance:

sudo apt-add-repository ppa:ubuntu-mate-dev/marco
sudo apt upgrade

Should you want to revert this change, install ppa-purge and run the following from a terminal: sudo ppa-purge -o ubuntu-mate-dev -p marco.

But wait! There’s more! 😲

These reductions in size come even after we added three new applications to the default install of Ubuntu MATE: GNOME Clocks, Maps and Weather. My family and I 👨‍👩‍👧 have found these applications particularly useful and use them regularly on our laptops without having to reach for a phone or tablet.

GNOME Clocks, Maps & Weather New additions to the default desktop application in Ubuntu MATE 22.04 LTS

For those of you who like a minimal base platform, then the minimal install option is still available which delivers just the essential Ubuntu MATE Desktop and Firefox browser. You can then build up from there 👷

Packages, packages, packages 📦

It doesn’t matter how you like to consume your Linux 🐧 packages, Ubuntu MATE has got you covered with PPA, Snap, AppImage and FlatPak support baked in by default. You’ll find flatpak, snapd and xdg-desktop-portal-gtk to support Snap and FlatPak, and the (ageing) libfuse2 to support AppImage, all pre-installed.

Although flatpak is installed, FlatHub is not enabled by default. To enable FlatHub run the following in a terminal:

flatpak remote-add --if-not-exists flathub

We’ve also included snapd-desktop-integration which provides a bridge between the user’s session and snapd to integrate theme preferences 🎨 with snapped apps and can also automatically install snapped themes 👔 All the Yaru themes shipped in Ubuntu MATE are fully snap aware.

Ayatana Indicators

Ubuntu MATE 20.10 transitioned to Ayatana Indicators 🚥 As a quick refresher, Ayatana Indicators are a fork of Ubuntu Indicators that aim to be cross-distro compatible and re-usable for any desktop environment 👌

Ubuntu MATE 22.04 LTS comes with Ayatana Indicators 22.2.0 and sees the return of Messages Indicator 📬 to the default install. Ayatana Indicators now provide improved backwards compatibility with Ubuntu Indicators and no longer require the installation of two sets of libraries, saving RAM, CPU cycles and improving battery endurance 🔋

Ayatana Indicator Settings Ayatana Indicators Settings

To complement the BlueZ 5.64 protocol stack in Ubuntu, Ubuntu MATE ships Blueman 2.2.4, which offers comprehensive management of Bluetooth devices and much improved pairing compatibility 💙🦷

I also patched mate-power-manager, ayatana-indicator-power and Yaru to add support for battery powered gaming input devices, such as controllers 🎮 and joysticks 🕹

Active Directory

And in case you missed it, the Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. Ubuntu MATE has supported the same capability since it was first made available in the 20.10 release.

Raspberry Pi image 🥧

  • Should be available very shortly after the release of 22.04.

Major Applications

Accompanying MATE Desktop 1.26.1 and Linux 5.15 are Firefox 99.0, Celluloid 0.20, Evolution 3.44 & LibreOffice

See the Ubuntu 22.04 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 22.04 LTS

This new release will be first available for PC/Mac users.


Upgrading from Ubuntu MATE 20.04 LTS and 21.10

You can upgrade to Ubuntu MATE 22.04 LTS from either Ubuntu MATE 20.04 LTS or 21.10. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For long-term support versions” if you are using 20.04 LTS; set it to “For any new version” if you are using 21.10.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.

Known Issues

Here are the known issues.

Component Problem Workarounds Upstream Links
Ubuntu Ubiquity slide shows are missing for OEM installs of Ubuntu MATE


Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

26 April, 2022 04:47PM

April 25, 2022

hackergotchi for Purism PureOS

Purism PureOS

Animating Pepper & Carrot with a respectful laptop

I made a 2D traditional animation as part of a project I am working on for Purism, with a goal to demonstrate the power of the Librem 14 as a creative platform. Therefore, as a follow up to my previous post about making hand drawn animations with Librem computers, and as an addition to the […]

The post Animating Pepper & Carrot with a respectful laptop appeared first on Purism.

25 April, 2022 03:16PM by François Téchené

hackergotchi for GreenboneOS


Supply Chains in Open-Source Software

Open source is steadily on the rise among the vast majority of companies, software manufacturers and providers. However, this triumphant advance also increases the importance of monitoring the supply chain of the software in use, which third parties have developed in accordance with open-source traditions. But not everyone using open-source software follows all the tried-and-true rules. Greenbone can help track down such mistakes. This blog post explains the problem and how to avoid it.



Vulnerabilities in Log4j, Docker or NPM

At the end of 2021, the German Federal Office for Information Security (BSI) officially sounded the alarm about a remotely exploitable vulnerability in the logging library Log4j. At the time, critics of open-source software promptly spoke out: open-source software like Log4j was implicitly insecure and a practically incalculable risk in the supply chain of other programs.

Although the open-source developers themselves fixed the problem within a few hours, countless commercial products still contain outdated versions of Log4j – with no chance of ever being replaced. This is not an isolated case: recently, the story of a developer for NPM (Node Package Manager, a software package format for the web server Node.js) caused a stir; with his actually well-meant protest against the war in Ukraine, he massively shook trust in the open-source supply chain and the development community.

Open Source in Criticism

It was not the first time that NPM became a target. The package manager was already affected by attacks at the end of 2021. At that time, developer Oleg Drapeza published a bug report on GitHub after finding malicious code to harvest passwords in the UAParser.js library. Piece by piece, the original author of the software, Faisal Salman, was able to reconstruct that someone had hacked into his account in NPM’s package system and placed malicious code there. The problem: UAParser.js is a module for Node.js and is used in millions of setups worldwide. Accordingly, the circle of affected users was enormous.

Again, the open-source critics said that open-source software like UAParser.js is implicitly insecure and a practically incalculable risk in the supply chain of other programs. Even more: open-source developers, according to the explicit accusation, incorporate external components such as libraries or container images far too carelessly and hardly give a thought to the associated security implications. For this reason, their work is inherently vulnerable to security attacks, especially in the open-source supply chain. Alyssa Shames discusses the problem using the example of containers and describes the dangers in detail.

The Dark Side of the Bazaar

DevOps and Cloud Native have indeed had a major impact on the way we work in development in recent years. Integrating components that exist in the community into one’s own application instead of programming comparable functionality from scratch is part of the self-image of the entire open-source scene. This community and its offer can be compared with a bazaar, with all advantages and disadvantages. Many developers place their programs under an open license, precisely because they value the contributions of the other “bazaar visitors”. In this way, others who have similar problems can benefit – under the same conditions – and do not have to reinvent the wheel. In the past, this applied more or less only to individual components of software, but cloud and containers have now led to developers no longer just adopting individual components, but entire images. These are software packages, possibly even including the operating system, which in the worst case can start untested on the developer’s own infrastructure.

A Growing Risk?

In fact, the potential attack surface is significantly larger than before and is being actively exploited. For example, according to a study by vendor Sonatype reported by Dev-Insider, the number of attacks on open-source components of software supply chains increased by 430 percent in the past year. This is confirmed by Synopsys’ risk-analysis report, which also notes that commercial code today is mostly open-source software. As early as 2020, cloud-native expert Aquasec reported attacks on the Docker API, which cyber criminals used to turn Docker images into cryptominers.

However, developers who rely on open-source components or come from the open-source community are not nearly as inattentive as such reports suggest. Unlike in proprietary products, for example, where only a company’s employees can keep an eye on the code, many people look at the managed source code in open-source projects. It is obvious that security vulnerabilities regularly come to light, as in the case of Log4j, Docker or NPM. Here, the open-source scene proves that it works well, not that its software is fundamentally (more) insecure.

Not Left Unprotected

A major problem, on the other hand – regardless of whether open-source or proprietary software is used – is the lack of foresight in the update and patch strategy of some providers. This is the only reason why many devices are found with outdated, often vulnerable software versions, which can serve as an open door for attackers. The Greenbone Enterprise Appliance, Greenbone’s professional product line, helps to find such gaps and close them.

In addition, complex security leaks like the ones described above in Log4j or UAParser.js are the exception rather than the rule. Most attacks are carried out using much simpler methods: malware is regularly found in the ready-made images for Docker containers on Docker Hub, for example images that turn a database into the Bitcoin miner described above. Developers who integrate open-source components are by no means unprotected against these activities. Standards have long been in place to prevent attacks of the kind described, for example obtaining ready-made container images only directly from the manufacturer of a solution or, better still, building them yourself with your own CI/CD pipeline. On top of that, a healthy dose of mistrust is always a good thing for developers, for example when software comes from a source that is clearly not that of the manufacturer.

Supply-Chain Monitoring at Greenbone

With its own products, the Greenbone Enterprise Appliances, Greenbone demonstrates that open-source software is not an incalculable risk. The company has a set of guidelines that integrate the supply-chain issue in software development into the entire development cycle. In addition to extensive functional tests, Greenbone subjects its products to automated tests with common security tools. Anyone who buys from Greenbone is rightly relying on the strong combination of open-source transparency and the manufacturer’s meticulous quality assurance – an effort that not all open-source projects can generally afford.

25 April, 2022 03:04PM by Markus Feilner

April 24, 2022

SparkyLinux


Crow Translate

There is a new application available for Sparkers: Crow Translate

What is Crow Translate?

Crow Translate is a simple and lightweight translator written in C++/Qt that lets you translate and speak text using the Google, Yandex, Bing, LibreTranslate and Lingva translation APIs.

– Translate and speak text from screen or selection
– Support for 125 different languages
– Low memory consumption (~20MB)
– Highly customizable shortcuts
– Command-line interface with rich options
– D-Bus API
– Available for Linux and Windows

Installation (Sparky 6 & 7 amd64/armhf/arm64):

sudo apt update
sudo apt install crow-translate

License: GNU GPL 3.0


24 April, 2022 06:14PM by pavroo

April 23, 2022


Ubuntu developers

Balint Reczey: Firefox on Ubuntu 22.04 from .deb (not from snap)

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA serving firefox (and thunderbird) debs at maintained by the Mozilla Team. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
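The pin can also be scripted. This sketch writes the same file to a temporary directory so it is safe to try anywhere (on a real system the target is `/etc/apt/preferences.d/firefox-no-snap`) and confirms the result:

```shell
# Write the pin to a scratch directory; copy to /etc/apt/preferences.d/
# (as root) on the real system.
dir=$(mktemp -d)
tee "$dir/firefox-no-snap" >/dev/null <<'EOF'
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1
EOF

# Sanity check: the pin is in place before running apt.
grep -q 'Pin-Priority: -1' "$dir/firefox-no-snap" && echo "pin written"
```

Once the file is in `/etc/apt/preferences.d/`, `apt-cache policy firefox` should show the Ubuntu archive's snap-transition package pinned at -1 while the PPA's .deb remains installable.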

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox

Happy browsing!

Update: I have found a few other, similar guides at and and I’ve updated the pinning configuration based on them.

23 April, 2022 02:38PM

April 22, 2022

Volumio


Stream Music via TIDAL Connect and Volumio

It has been exactly one year since we released the TIDAL Connect feature on Volumio, one of the most anticipated features since TIDAL launched it at the end of 2020. Many of you have known Spotify Connect for a long time; TIDAL Connect is basically the same, but for hi-res audio. And if you are hearing about it for the first time: TIDAL Connect streams music from the TIDAL app to a TIDAL Connect-enabled device, in this case your Volumio device.

Your feedback over the past year has made clear what is wonderful about this feature: the convenience, the ease of use, and the ability to control your music not only from the Volumio UI but also from your TIDAL app. Getting started with TIDAL Connect is pretty simple. You need the TIDAL app on your mobile or tablet, and your Volumio device with our Premium subscription.

Set up TIDAL Connect on Volumio

First, make sure you are logged in to MyVolumio Virtuoso or Premium on your device. Then, on your mobile, open the TIDAL app, choose the track you want to play and head to the “Now Playing” page. At the top right you will see a device icon; tap it to see all the devices available to connect.

Since your Volumio device has TIDAL Connect enabled, you will see it listed with “TIDAL Connect” written underneath. Tap the device and TIDAL Connect will be connected. You will see the changes on the Now Playing page, as shown in the image below. TIDAL Connect is now active on your Volumio device and you are ready to listen to your favorite music!




You can control everything directly from the TIDAL app. Plus, if you have the Volumio UI open, you will see all the information about the track you are playing, as TIDAL's metadata is all sent to Volumio, and you can play, skip and pause from Volumio as well.

TIDAL Connect with Volumio Multiroom Sync Playback


Multiroom Option


Tip: with the Multiroom Sync Playback feature on Volumio, you can stream TIDAL Connect to all your Volumio devices available in the same network. Once you enable TIDAL Connect on one device, go on the Volumio playback page and click on the multiroom icon. Group your current device with all the other Volumio devices you want* (*up to 6).


And the final step: play your music at full blast in all your rooms and enjoy!

The post Stream Music via TIDAL Connect and Volumio appeared first on Volumio.

22 April, 2022 01:54PM by Monica Ferreira

SparkyLinux


Sparky 2022.04 Special Editions

There are new iso images of Sparky 2022.04 Special Editions: GameOver, Multimedia & Rescue ready to go.

Sparky “GameOver” Edition features a lightweight desktop, a very large number of preinstalled games, useful tools and scripts. Built for gamers.

Sparky “Multimedia” Edition uses a lightweight desktop environment and features a large set of tools for creating and editing graphics, audio, video and HTML pages.

Sparky “Rescue” Edition is an operating system which works in a live DVD/USB mode only (no installation on a hard drive). The Live system contains a large set of tools for scanning and fixing files, partitions and operating systems installed on hard drives.

All packages upgraded from Debian and Sparky testing repos as of April 21, 2022.
Firefox has been replaced by Firefox Mozilla Build (‘firefox-sparky’ package)
Linux kernel updated to 5.16.18; 5.17.4 & 5.18-rc3 available in Sparky unstable repos.
The Calamares installer updated to 3.2.55.

No reinstallation is required if you have Sparky rolling installed, simply keep it up to date.

New iso images of Sparky semi-rolling can be downloaded from the download/rolling page

Release information in Polish:

22 April, 2022 12:35PM by pavroo

April 21, 2022


Ubuntu developers

Xubuntu: Xubuntu 22.04 released!

The Xubuntu team is happy to announce the immediate release of Xubuntu 22.04.

Xubuntu 22.04, codenamed Jammy Jellyfish, is a long-term support (LTS) release and will be supported for 3 years, until 2025.

The Xubuntu and Xfce development teams have made great strides in usability, expanded features, and additional applications in the last two years. Users coming from 20.04 will be delighted with improvements found in Xfce 4.16 and our expanded application set. 21.10 users will appreciate the added stability that comes from the numerous maintenance releases that landed this cycle.

The final release images are available as torrents and direct downloads from

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal ISO edition, is available to download from [torrent]. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues


  • Mousepad 0.5.8, our text editor, broadens its feature set with the addition of session backup and restore, plugin support, and a new gspell plugin.
  • Ristretto 0.12.2, the versatile image viewer, improves thumbnail support and features numerous performance improvements.
  • Whisker Menu Plugin 2.7.1 expands customization options with several new preferences and CSS classes for theme developers.
  • Firefox is now included as a Snap package.
  • Refreshed user documentation, available on the ISO and online.
  • Six new wallpapers from the 22.04 Community Wallpaper Contest.

Known Issues

  • The shutdown prompt may not be displayed at the end of the installation. Instead you might just see a Xubuntu logo, a black screen with an underscore in the upper left hand corner, or just a black screen. Press Enter and the system will reboot into the installed environment. (LP: #1944519)
  • The Firefox Snap is not currently able to open the locally-installed Xubuntu Docs. (LP: #1967109)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry and more generic issues.


For support with the release, navigate to Help & Support for a complete list of methods to get help.

21 April, 2022 10:44PM

Xubuntu: Xubuntu 22.04 Community Wallpaper Contest Winners

The Xubuntu team is happy to announce the results of the 22.04 community wallpaper contest!

As always, we’d like to send out a huge thanks to every contestant. The Xubuntu Community Wallpaper Contest gives us a unique chance to interact with the community and get contributions from members who may otherwise not have had the opportunity to join in before. With around 130 submissions, the contest garnered less interest this time around, but we still had a lot of great work to pick from. All of the submissions are browsable on the 22.04 contest page at

Without further ado, here are the winners:

From left to right, top to bottom. Click on the links for full-size image versions.

Congratulations, and thanks for your wonderful contributions!

21 April, 2022 10:21PM

Podcast Ubuntu Portugal: E191 Podcast Wacom Portugal

Dali, read: Diogo, went shopping, for a change, with the goal of equipping himself with creative tools. Carrondo, the square one, saw a more technical angle in that act. All in a week in which Vodafone comes up again, and so do migrations, this time from WordPress to Hugo, or plain HTML…
You know the drill: listen, subscribe and share!


### Support
You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.
If you are interested in other bundles not listed in the notes, use the link and you will also be supporting us.

### Attribution and licenses
This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by [Senhor Podcast](
The website is produced by Tiago Carrondo and the [open-source code]( is licensed under the [MIT License](
The theme music is “Won't see it comin' (Feat Aequality & N'sorte d'autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](
This episode and the image used are licensed under the [Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)]( license, [the full text of which can be read here]( We are open to licensing other types of use; [contact us]( for validation and authorization.

21 April, 2022 10:03PM

Lubuntu Blog: Lubuntu 22.04 LTS is Released!

Thanks to all the hard work from our contributors, Lubuntu 22.04 LTS has been released. With the codename Jammy Jellyfish, Lubuntu 22.04 is the 22nd release of Lubuntu, the eighth release of Lubuntu with LXQt as the default desktop environment. Support lifespan Lubuntu 22.04 LTS will be supported for 3 years until April 2025. Our […]

21 April, 2022 08:09PM