September 30, 2022


Ubuntu MATE: Ubuntu MATE 22.10 Release Notes

We are preparing Ubuntu MATE 22.10 (Kinetic Kudu) for distribution on October 20th, 2022. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

Ubuntu MATE 22.10 is a modest update by recent standards, focused on “quality of life” improvements. There is a good reason why this release of Ubuntu MATE doesn’t feature the usual bucket 🪣 list of changes you’d typically expect: I’ve been helping bring the full Ubuntu MATE experience to Debian MATE 🧉

This may raise some questions for Ubuntu MATE users, so let’s try and address them:

  • I’m not stepping away from Ubuntu or Ubuntu MATE. I will continue to use and develop Ubuntu MATE 👍
  • I’ve closely collaborated with the MATE packaging team for Debian for over 8 years 👴
  • Making the MATE experience in Debian and Ubuntu consistent makes maintenance easier for all involved 🛠
  • Ubuntu MATE offers some modernisation of MATE via home-grown apps such as MATE Tweak and Ayatana Indicators. We want Debian users to benefit from those improvements too 💖
  • We’re hopeful the MATE spin in Debian 12 will offer the same (or extremely similar) experience Ubuntu MATE users have enjoyed for some time 🎁

Thank you! 🙇

I’d like to extend my sincere thanks to everyone who has played an active role in improving Ubuntu MATE for this release 👏 From reporting bugs, submitting translations, providing patches, contributing to our crowd funding, developing new features, creating artwork, offering community support, actively testing and providing QA feedback to writing documentation or creating this fabulous website. Thank you! Thank you all for getting out there and making a difference! 💚

Ubuntu MATE 22.10 using the Pantheon layout and new centred panel applets and HUD

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers.

What changed since Ubuntu MATE 22.04?

Here are the highlights of what’s changed since the release of Ubuntu MATE 22.04.

MATE Desktop

The usual point release updates to MATE Desktop and Ayatana Indicators have been included, fixing 🩹 an assortment of minor bugs 🐛 The main change in MATE Desktop is to MATE Panel, where we’ve included an early snapshot release of mate-panel 1.27.0 along with a patch set that adds centre alignment of panel applets.

This much-requested feature comes from Ubuntu MATE community contributor Gordon N. Squash 🇺🇸 and allows panel applets to be centre aligned, as well as the usual left and right alignment. I’m sure you’ll all join me in thanking 🙇 Gordon for working on this feature.

Centre aligning of applet icons will ship with MATE Desktop 1.28, but we’re including it early 🐓 for Ubuntu MATE users. We’ve updated MATE Tweak to correctly save/restore custom layouts that use centre-aligned applets, and all the panel layouts shipped with Ubuntu MATE 22.10 have been updated so they’re compatible with centre alignment of applets ✅

AI Generated wallpapers (again!)

My friend Simon Butcher 🇬🇧 is Head of Research Platforms at Queen Mary University of London, managing the Apocrita HPC cluster service. Once again, Simon has created some stunning AI-generated 🤖🧠 wallpapers for Ubuntu MATE using bleeding edge diffusion models 🖌 The samples below are 1920x1080, but the versions included in Ubuntu MATE 22.10 are 3840x2160.

Here’s what Simon has to say about some of the challenges he faced creating these new wallpapers for Kinetic Kudu:

AI image generation is continuing to improve at a mind-boggling rate. Yet, until recently, coherent human faces, hands and anatomically correct animals have proved rather tricky. Fortunately human faces are getting particular attention in the open source community after the release of Stable Diffusion. However, while an anthropomorphic portrait of a Kudu wearing a rather dapper suit will be stylishly rendered, getting consistent results for kudu in their natural habitat proved particularly tricky, exacerbated by their elegant horn structure. Often you will get rather wild interpretations of the horns, 5 legged creatures, or nightmarish output akin to the Pushmi-Pullyu from the Dr Doolittle stories.

Jellyfish, on the other hand, are a mass of tentacles and perhaps benefit aesthetically from the randomness induced by AI-generated images, in the same way that forests, mountains and hobbit villages generated by AI can be produced en masse to a very satisfying extent. So while 1000 stunning unique images of jellyfish can be produced in a few minutes with a powerful GPU, the kudu was quite a challenge, and I had to experiment a lot with various prompts and styles, and do a lot of cherry-picking – throwing away about 99% of the results that weren’t quite right. I’m hoping we’ll see further AI innovation in time for the next release, or… maybe the next code name will be a lionfish?

PipeWire

PulseAudio has been replaced with PipeWire and Bluetooth audio codec support has been expanded with the addition of AAC, LDAC, aptX and aptX HD.

As a podcaster and streamer, I’m delighted to have PipeWire installed by default in Ubuntu MATE 22.10. The migration to PipeWire has resolved some longstanding minor annoyances I’ve had with audio in the past, and all the tools 🧰 I use for audio and video production continue to function correctly.

PipeWire on Ubuntu MATE 22.04

If you like to ride the LTS train 🚆 but want to use PipeWire in Ubuntu MATE 22.04 (as I have been doing for some months) then this is how to make the change:

# Install PipeWire, its PulseAudio-compatible client libraries and the WirePlumber session manager
sudo apt-get install gstreamer1.0-pipewire pipewire-audio-client-libraries wireplumber
# Remove PulseAudio's Bluetooth module so PipeWire can take over Bluetooth audio
sudo apt-get remove pulseaudio-module-bluetooth
# Add the expanded Bluetooth codec support (AAC, LDAC, aptX, aptX HD) and JACK support
sudo apt-get install libfdk-aac2 libldacbt-abr2 libldacbt-enc2 libopenaptx0 libspa-0.2-bluetooth libspa-0.2-jack

Once the installs/removals are complete, restart your computer.
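After rebooting, you can check that PipeWire is now answering PulseAudio clients; on a migrated system, `pactl info` reports a server name along the lines of “PulseAudio (on PipeWire 0.3.x)”:

```shell
# Show which sound server is answering PulseAudio clients.
# On a successful PipeWire migration the server name reads
# "PulseAudio (on PipeWire ...)"; a fallback message prints if the
# pactl client tool is not installed on this system.
pactl info 2>/dev/null | grep "Server Name" \
  || echo "pactl not available"
```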

Ubuntu MATE Stuff

The “MATE HUD” has seen some significant work from community contributor twa022 🌎. The HUD now supports MATE, XFCE and Budgie, has improved accuracy for HUD placement (taking into account various panel offsets/struts), is highly configurable, and includes a new HUD settings app.

HUD Settings

MATE User Manager

A new utility, User Manager, has been added to complement the suite of MATE tools. User Manager replaces the aging gnome-system-tools, which was removed from Ubuntu MATE in the 22.04 release, and allows you to add/modify/remove user accounts. It also includes the ability to define which users are Administrators, enable/disable auto-login, set profile images and manage group memberships.

MATE User Manager

Yaru

And last but not least, the Ubuntu MATE Artwork package has been updated to include all the refinements and improvements in the suite of Yaru themes 🎨

Major Applications

Accompanying MATE Desktop 1.26.1 🧉 and Linux 5.19 🐧 are Firefox 105 🔥🦊, Celluloid 0.20 🎥, Evolution 3.46 📧 and LibreOffice 7.4 📚

See the Ubuntu 22.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 22.10

This new release will first be available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 22.04

You can upgrade to Ubuntu MATE 22.10 from Ubuntu MATE 22.04. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open “Software & Updates” from the Control Center.
  • Select the third tab, “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop-down menu to “For any new version”.
  • Press Alt+F2 and type update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘22.10’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
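For those who prefer a terminal, the same preparation and upgrade check can be run as follows (standard Ubuntu commands; `update-manager -c -d` is the same invocation as the Alt+F2 step above):

```shell
# Make sure the current 22.04 installation is fully up to date first
sudo apt update && sudo apt full-upgrade
# Then check for the new distribution release
update-manager -c -d
```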

Known Issues

Here are the known issues.

  • Ubuntu – Ubiquity slide shows are missing for OEM installs of Ubuntu MATE
  • Ubuntu – Snaps are not preseeded in Ubuntu (and flavours) 22.10 beta
  • Ubuntu MATE – A default wallpaper is not set after installing Ubuntu MATE 22.10 beta

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

30 September, 2022 03:56PM

Kubuntu General News: Kubuntu Kinetic Kudu (22.10) Beta Released

KDE Plasma desktop 5.25 on Kubuntu 22.10 Beta

The beta of Kubuntu Kinetic Kudu (to become 22.10 in October) has now been released, and is available for download.

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of Kubuntu Kinetic Kudu are not recommended for:

  • Anyone needing a stable system
  • Regular users who are not aware of pre-release issues
  • Anyone in a production environment with data or workflows that need to be reliable

They are, however, recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers
  • Other Ubuntu flavour developers

The Beta includes some software updates that are ready for broader testing. However, it is an early set of images, so you should expect some bugs.

We STRONGLY advise testers to read the Kubuntu 22.10 Beta release notes before installing, and in particular the section on ‘Known issues’.

Kubuntu is taking part in ‘Ubuntu Testing Week’ from September 29th to October 6th. Details for all flavours are available on the Ubuntu Discourse announcement.

You can also find more information about the entire 22.10 release (base, kernel, graphics etc) in the main Ubuntu Beta release notes and announcement.

30 September, 2022 03:42PM

The Fridge: Ubuntu 22.10 (Kinetic Kudu) Final Beta released

The Ubuntu team is pleased to announce the Beta release of the Ubuntu 22.10 Desktop, Server, and Cloud products.

Ubuntu 22.10, codenamed “Kinetic Kudu”, continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This Beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu MATE, Ubuntu Studio, Ubuntu Unity, and Xubuntu flavours.

The Beta images are known to be reasonably free of showstopper image build or installer bugs, while representing a very recent snapshot of 22.10 that should be representative of the features intended to ship with the final release expected on October 20, 2022.

Ubuntu, Ubuntu Server, Cloud Images:

Kinetic Beta includes updated versions of most of our core set of packages, including a current 5.19 kernel, and much more.

To upgrade to Ubuntu 22.10 Beta from Ubuntu 22.04, follow these instructions:

https://help.ubuntu.com/community/KineticUpgrades

The Ubuntu 22.10 Beta images can be downloaded at:

https://releases.ubuntu.com/22.10/ (Ubuntu and Ubuntu Server on x86)

This Ubuntu Server image features the next generation Subiquity server installer, bringing the comfortable live session and speedy install of the Ubuntu Desktop to server users.

Additional images can be found at the following links:

https://cloud-images.ubuntu.com/daily/server/kinetic/current/ (Cloud Images)
https://cdimage.ubuntu.com/releases/22.10/beta/ (Non-x86)

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20220929 or higher) should be considered a Beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 22.10 Beta can be found at:

https://discourse.ubuntu.com/t/kinetic-kudu-release-notes

Kubuntu:

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/kubuntu/releases/22.10/beta/

Lubuntu:

Lubuntu is a flavor of Ubuntu which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu base.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/lubuntu/releases/22.10/beta/

Ubuntu Budgie:

Ubuntu Budgie is a community-developed desktop, integrating the Budgie Desktop Environment with Ubuntu at its core.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/ubuntu-budgie/releases/22.10/beta/

Ubuntu MATE:

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/ubuntu-mate/releases/22.10/beta/

Ubuntu Studio:

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/ubuntustudio/releases/22.10/beta/

Ubuntu Unity:

Ubuntu Unity is a flavor of Ubuntu featuring the Unity7 desktop environment.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/ubuntu-unity/releases/22.10/beta/

Xubuntu:

Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.

The Beta images can be downloaded at:
https://cdimage.ubuntu.com/xubuntu/releases/22.10/beta/

Regular daily images for Ubuntu, and all flavours, can be found at:
https://cdimage.ubuntu.com

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit https://ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:
https://ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at:
https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this Beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Sep 30 00:08:26 UTC 2022 by Brian Murray, on behalf of the Ubuntu Release Team

30 September, 2022 01:44PM

Ubuntu Blog: Meet Canonical at IoT Tech Expo


Santa Clara, USA, October 5-6, 2022

IoT Tech Expo is almost here! With 250+ speakers, 5,000+ attendees and dozens of sessions dedicated to IoT in the enterprise and transformational IoT and 5G, it will be an impactful gathering. Join Canonical there to discuss our IoT solutions with our experts on-site.

Learn about Linux security


Linux has found wide adoption in IoT, but traditional Linux distributions weren’t made for this use case. Nathan Hart, Product Manager at Canonical, will give a talk about the challenges of using a traditional Linux in IoT and how Canonical keeps IoT devices secure. In this session, attendees will learn about:

  • The vulnerabilities of traditional Linux systems in IoT
  • How traditional security approaches are not designed with IoT in mind
  • How low connectivity and low physical access create additional challenges

Title: Is Linux sufficiently secure for IoT?

Speaker: Nathan Hart, Product Manager at Canonical 

Session time: 15:20 – 15:30 PDT

Track: Transformational IoT & 5G

Experience innovation

At booth 204, we’ll be showcasing some of our IoT technologies. Stop by to talk to us and to see Ubuntu Core, Canonical’s embedded Linux, in action! You’ll hear about benefits and use cases like:

Real-time data analysis in an edge AI device: How do businesses read and manage real-time data produced by their remote devices in the field? We’ll be showing a demonstration of Ubuntu Core as a platform for edge AI/ML using live sensor data.

Reliable operation for long life: Ubuntu Core’s built-in security is not limited to the OS layer via bug fixes and CVE patches. Every application on top of Ubuntu Core sits in a secure, sandboxed environment. Furthermore, Canonical maintains Ubuntu Core throughout devices’ lifetime. Industrial manufacturers receive a decade of ultra-reliable software updates on their low-powered, inaccessible, and often remotely administered embedded devices in the field. 

By using an OS designed for utmost reliability and optimised for security, world-leading suppliers and manufacturers are free to concentrate their efforts and redirect resources towards their value-add activities.

Book a meeting at IoT Tech Expo

If you’re attending IoT Tech Expo and interested in learning how to best secure your Linux for IoT and edge devices, book a meeting with Sierra Fredenrich, Account Development Representative. 

30 September, 2022 11:59AM

Ubuntu Blog: Migrating to an open-source private cloud platform: key considerations


Private clouds combine the many benefits of cloud computing, like elasticity, scalability and agility, with the security, access control and resource customisation of on-prem infrastructure. Private clouds allow financial institutions to have greater control over hardware and software choices. They make it easier to enforce compliance with regulatory standards. Private clouds also enable financial institutions to move from a traditional IT engagement model to a DevOps model and transform their IT groups from an infrastructure provider to a service provider (via a SaaS model). But they can also entail high costs.

One strategy that financial institutions can adopt to reduce infrastructure costs for private clouds is to move away from expensive proprietary technologies, like VMware, to open-source platforms like OpenStack.

OpenStack

OpenStack provides a complete ecosystem for building private clouds. Built from multiple sub-projects as a modular system, OpenStack allows financial institutions to build out a scalable private (or hybrid) cloud architecture that is based on open standards.

OpenStack is one of the most featureful and versatile open-source cloud platforms that enables application portability among private and public clouds, allowing financial institutions to choose the best cloud for their applications and workflows without vendor lock-in. 

Charmed OpenStack is Canonical’s enterprise-grade OpenStack distribution. 

Charmed OpenStack ensures private cloud price-performance, providing full automation around OpenStack deployments and operations. Together with Ubuntu, the Linux operating system preferred by developers, it meets the highest security, stability and quality standards in the industry.

Migrating to OpenStack

Organisations need to consider several trade-offs when they choose to migrate to open-source private cloud platforms like OpenStack:

  • Economics – choosing the right OpenStack distribution that enables CapEx and OpEx reduction.
  • Technology support – ensuring that the various components of the technology stack are supported.
  • Velocity of development and innovation – how quickly upstream features and projects are adopted.
  • Day-N operations – how to efficiently operate OpenStack post-deployment.
  • Upgrades – how predictable the release cadence and upgrade path are.

In the following sections, we discuss these considerations further and explore how Charmed OpenStack helps enterprises build and operate cost-effective private clouds.

Consideration 1: Economics – reducing CapEx and OpEx

The success of private cloud infrastructure hinges primarily on economics. Well-architected private clouds can be a cost-effective extension to public cloud infrastructure, ensuring maximum multi-cloud cost optimisation.

Canonical’s enterprise distribution, Charmed OpenStack, is more economical than VMware vRealize and other OpenStack distributions for the following key reasons:

  • While VMware uses a CPU based support and subscription model, Charmed OpenStack uses a per-node support and subscription model, bringing the operational costs down and adding pricing predictability.
  • Charmed OpenStack uses OpenStack Charms for its deployment and operations. A charmed operator (charm) is a software component that contains all of the instructions necessary for deploying and configuring an application. Charms make it easy to reliably and repeatedly deploy applications across many clouds, allowing the user to scale the application with minimal effort. The use of OpenStack charms reduces the entire adoption process to just a few weeks. Also, the operational overhead over the product lifetime is reduced, which in turn reduces OpEx and accelerates return on investment.

Charmed OpenStack can run compute, network and storage services on the same shared hardware. Using a hyper-converged architecture, organisations can use the same hardware across the entire data centre and benefit from a unified, distributed approach to infrastructure provisioning and service orchestration.
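To make the charm-based model concrete, here is a deliberately minimal, hypothetical Juju bundle sketch (the application names are real charm names, but channels, configuration options, machine placement and the relations between applications are omitted for brevity; a production bundle is considerably larger):

```yaml
# Hypothetical, heavily simplified bundle sketch - not a deployable cloud.
series: jammy
applications:
  keystone:               # identity service
    charm: keystone
    num_units: 1
  nova-compute:           # hypervisor nodes
    charm: nova-compute
    num_units: 3
  mysql-innodb-cluster:   # database backing the control plane
    charm: mysql-innodb-cluster
    num_units: 3
```

Deploying such a file is a single `juju deploy ./bundle.yaml`, and scaling an application out later is `juju add-unit nova-compute` – the kind of repeatable, day-to-day automation the charms provide.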

Consideration 2: Technology support

No two businesses are the same. Technological choice enables financial institutions to choose the solutions that best suit their needs. OpenStack enjoys the support of major IT vendors, virtualisation hypervisors, and public and hosted private cloud providers, and plug-ins from all major networking and storage vendors.  

With so many companies devoting resources to OpenStack, together with Canonical’s own development efforts, Charmed OpenStack benefits from a wider set of technology options than VMware and other OpenStack distributions.

Consideration 3: Velocity of development and innovation

Proprietary virtualisation platforms like VMware can keep organisations from having the resources to invest in hybrid multi-cloud strategy, containers and automation. By removing vendor lock-in, enterprises can gain freedom, flexibility and resources to build a platform for innovation based on open-source technologies. 

OpenStack accelerates time-to-market by providing business units a self-service portal to access necessary resources on-demand, and an API-driven platform for developing cloud-aware applications. Enterprises dramatically reduce provisioning times from weeks or months to minutes with OpenStack, giving them a significant competitive advantage.

As OpenStack continues to evolve rapidly, one challenge that organisations face is upgrading OpenStack, year after year. Canonical solves this problem with total automation that decouples architectural choices from the operations codebase. Charmed OpenStack uses MAAS for infrastructure provisioning and Juju for application modelling. MAAS is an open source bare-metal server provisioning tool that turns bare metal into an elastic, cloud-like resource. With Juju and OpenStack charms, daily operations, such as cluster scale out or database backups, are fully automated.

Consideration 4: Enhancing Day N operations (operational efficiencies post-deployment)

From bare metal to the cloud control plane, Canonical’s Charmed OpenStack uses automation everywhere, leveraging model-driven operations. By using a model-driven approach, teams can simply define their deployment requirements and let Juju take care of the rest. 

OpenStack charms can significantly simplify application deployments and accelerate daily operational tasks, such as scaling out the OpenStack cluster, resulting in lower maintenance, lower staffing requirements and, therefore, reduced OpEx.

Consideration 5: Predictable release cadence and upgrade path

With a proprietary solution, financial institutions are tied to a specific vendor’s release cadence and upgrade model, which can constrain business advancement and innovation. An unpredictable upgrade path also makes it difficult for financial institutions to plan their IT roadmap. Compared to other OpenStack distributions, Charmed OpenStack users benefit from a highly predictable, transparent release cadence.

Each upstream OpenStack version comes with new features that may bring measurable benefits to your business. Canonical provides full support for every version of OpenStack within two weeks of the upstream release. Every two years, Canonical releases a Long Term Support (LTS) version of Charmed OpenStack that is supported for five years.

OpenStack upgrades are known to be painful due to the complexity of the process. By leveraging the model-driven architecture and using OpenStack Charms for automation purposes, Charmed OpenStack can be easily upgraded between its consecutive versions. This allows organisations to stay up to date with the upstream features, while not putting additional pressure on their operations team.

Conclusion

The diverse and rich OpenStack ecosystem allows for numerous ways to get started on building an open source based private cloud, depending on use case, desired control, and organisational capabilities. Migrating from VMware to OpenStack can have significant economic benefits for any financial institution and improve infrastructure flexibility. It is often a wise choice to gain business agility and drive innovation while lowering costs. But to move to an OpenStack distribution, organisations must ensure they select a distribution that is easily deployable, maintainable, upgradable and cost-effective. Charmed OpenStack, an enterprise grade OpenStack distribution from Canonical, reduces overall TCO compared to VMware.

Canonical provides up to ten years of security updates for Charmed OpenStack under the Ubuntu Advantage for Infrastructure subscription for customers who value stability above all else. Moreover, the support package includes various EU and US regulatory compliance options. Additional hardening tools and benchmarks ensure the highest level of security.

Explore how Canonical helps financial institutions drive business agility and innovation at lower costs

Want to learn more about migrating your private cloud infrastructure to a more cost effective solution? Download our guide!

Contact Us

Finserv IT infrastructure: Migrating from VMware to OpenStack

CIOs’ guide to migrating your private cloud infrastructure to a more cost effective solution

Read the whitepaper


30 September, 2022 09:30AM

Ubuntu Studio: Ubuntu Studio 22.10 Beta Released

The Ubuntu Studio team is pleased to announce the beta release of Ubuntu Studio 22.10, codenamed “Kinetic Kudu”.

While this beta is reasonably free of any showstopper installer bugs, you may find some bugs within. This image is, however, mostly representative of what you will find when Ubuntu Studio 22.10 is released on October 20, 2022.

Special notes:

The Ubuntu Studio 22.10 disk image (ISO) exceeds 4 GB and cannot be downloaded to some file systems such as FAT32, and may not be readable when burned to a DVD. For this reason, we recommend downloading to a compatible file system. When creating a boot medium, we recommend creating a bootable USB stick with the ISO image, or burning to a Dual-Layer DVD.

Images can be obtained from this link: https://cdimage.ubuntu.com/ubuntustudio/releases/22.10/beta/

Full updated information, including Upgrade Instructions, are available in the Release Notes.

Regarding PipeWire

One of our goals this release was to create some kind of switch between our traditional PulseAudio/JACK setup and PipeWire, but this did not come to fruition as we had quite a few other bugs to squash as a result of the transition to ffmpeg 5. Additionally, we had a lot of clean-up after the transition to Python 3.10 in 22.04 LTS, among other bugs. Sadly, that’s where our attention went, and PipeWire support had to be deprioritized for this release.

New Features This Release

  • Ubuntu Studio Installer now includes Ubuntu Studio Feature Uninstaller to remove features of Ubuntu Studio that you don’t need. This is a long-requested feature that will be detailed in the official release announcement when Ubuntu Studio 22.10 releases on October 20th.
  • Q Light Controller Plus version 4.12.5
  • Freeshow version 0.5.6
  • openLP version 2.9.5

Major Package Upgrades

  • Darktable version 4.0.0
  • OBS Studio version 28.0.1
  • Audacity version 3.1.3
  • digiKam version 8.0.0 development snapshot (pre-release, see notes below)
  • Kdenlive version 22.08.1
  • Krita version 5.1.1

There are many other improvements, too numerous to list here. We encourage you to take a look around the freely-downloadable ISO image.

Known Issues

  • digiKam is a development snapshot of 8.0.0. As such, it likely has undocumented bugs. We hope these bugs get ironed out by the time 8.0.0 beta comes out, but we are not sure when that will be as the digiKam developers have not released a timeline or release date. When the 8.0.0 beta or stable release of digiKam becomes available, we hope to provide these to you as Stable Release Updates. This came from the transition to ffmpeg 5 as prior versions of digiKam do not support ffmpeg 5. If you would like a stable version of digiKam now, a snap of 7.8.0 is available.

Official Ubuntu Studio release notes can be found at https://ubuntustudio.org/ubuntu-studio-22-10-release-notes/

Further known issues, mostly pertaining to the desktop environment, can be found at https://wiki.ubuntu.com/KineticKudu/ReleaseNotes/Kubuntu

Additionally, the main Ubuntu release notes contain more generic issues: https://discourse.ubuntu.com/t/kinetic-kudu-release-notes/27976

Frequently Asked Questions

Q: Does KDE Plasma use more resources than your former desktop environment (Xfce)?
A: In our testing, the increase in resource usage is negligible, and our optimizations were never tied to the desktop environment.

Q: Does Ubuntu Studio contain snaps?
A: Yes. Mozilla’s distribution agreement with Canonical changed, and Ubuntu was forced to stop distributing Firefox as a native .deb package. We have found that, after numerous improvements, the Firefox snap now performs just as well as the native .deb package did.

Additionally, Audacity 2.4.2 had to be removed from the official Ubuntu repositories this cycle due to incompatibilities with ffmpeg 5. For that reason, we worked hard with the snap packager to include Audacity in Ubuntu Studio 22.10; therefore, Audacity 3.1.3 is included as a snap. Watch this bug to track Audacity’s reintroduction into the Ubuntu repositories; right now, it is on pace to happen before the release of Ubuntu 22.10. When this happens, we fully intend to drop the snap and re-include the .deb package in Ubuntu Studio. Watch Ubuntu Studio News for updates.

Finally, Freeshow is an Electron-based application. Electron-based applications cannot be packaged in the Ubuntu repositories because they cannot be built from a traditional Debian source package. While such apps do have a build system that can create a .deb binary package, it circumvents the source package build system in Launchpad, which is required when packaging for Ubuntu. However, Electron apps also have a facility for creating snaps, which can be uploaded and included. Therefore, for Freeshow to be included in Ubuntu Studio, it had to be packaged as a snap.

Q: If I install this Beta release, will I have to reinstall when the final release comes out?
A: No. If you keep it updated, your installation will automatically become the final release. However, if Audacity returns to the Ubuntu repositories before the final release, you might end up with a double installation of Audacity. Removal instructions for one or the other will be made available in a future post.

Q: Will you make an ISO with {my favorite desktop environment}?
A: To do so would require creating an entirely new flavor of Ubuntu, which would require going through the Official Ubuntu Flavor application process. Since we’re completely volunteer-run, we don’t have the time or resources to do this. Instead, we recommend you download the official flavor for the desktop environment of your choice and use Ubuntu Studio Installer to get Ubuntu Studio.

Q: What if I don’t want all of these packages installed on my machine?
A: Simply use the Ubuntu Studio Feature Uninstaller to remove the features of Ubuntu Studio you don’t want or need!

Please Test!

30 September, 2022 01:18AM

Ubuntu Blog: That’s a wrap! Canonical attends the very first OpenSearchCon in Seattle

Photo from OpenSearch.org

A few of us at Canonical travelled from Europe to Seattle, Washington, in the beautiful Pacific Northwest, to attend the very first OpenSearchCon. Huge congratulations to the OpenSearch team for a well-organised first ever in-person conference, held on 21 September 2022 at Fremont Studios.

Best in class open-source search and analytics suite

OpenSearch brings to the forefront the next wave of search and analytics technology. OpenSearch makes it easy to ingest, search, visualise, and analyse data. Developers build with OpenSearch for use cases such as application search, log analytics, data observability, data ingestion, and more.

A common use case is log analytics. You take the logs from an application, feed them into OpenSearch, and use the rich search and visualisation functionality to identify issues. For example, a malfunctioning web server might throw a 500 error 0.5% of the time, which can be hard to notice unless you have a real-time graph of all HTTP status codes the server has thrown in the past four hours. You can use OpenSearch Dashboards to build these sorts of visualisations from data in OpenSearch.
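As a minimal sketch of that workflow (assuming a local demo OpenSearch cluster listening on https://localhost:9200 with the default `admin:admin` credentials — adjust for your deployment), ingesting a log event and counting recent 500 errors looks like:

```shell
# Index a single web-server log event into a "web-logs" index
curl -sk -u admin:admin -X POST "https://localhost:9200/web-logs/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"timestamp": "2022-09-30T01:00:00Z", "status": 500, "path": "/checkout"}'

# Count HTTP 500 responses seen in the last four hours
curl -sk -u admin:admin -X GET "https://localhost:9200/web-logs/_count" \
  -H 'Content-Type: application/json' \
  -d '{"query": {"bool": {"filter": [
        {"term": {"status": 500}},
        {"range": {"timestamp": {"gte": "now-4h"}}}
      ]}}}'
```

OpenSearch Dashboards can then chart the same query as a real-time visualisation instead of a one-off count.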

OpenSearch is Apache 2.0 licensed software, which means it’s open source and maintained by the community. OpenSearch includes a network of partners and is open to contribution. OpenSearch also has principles for development, as the organisation believes that great open-source software is built with a diverse community of contributors. Canonical, the publisher of Ubuntu, is proud to be a member of this expanding community.

OpenSearchCon – fully packed with talks and community engagements

OpenSearchCon is an event for the community. This first edition successfully gathered users, developers, and technologists from across the open-source world to learn, collaborate and innovate. The event consisted of multiple talks about the past, present and future of the OpenSearch project.

OpenSearchCon keynote speaker and Product Manager Eli Fisher highlighted the importance of OpenSearch and the project’s successes since it started in 2021. OpenSearch has seen rapid and sustainable growth: there are currently 19 open-source community projects and 5.8K stars on GitHub, and OpenSearch sits among the top 5 search engines in the DB-Engines ranking.

Presentations at the event covered a variety of topics, from OpenSearch technology and architecture to use cases, community empowerment, operations and security. The OpenSearch roadmap was also discussed.

Use cases presented included anomaly detection and observability. Speakers also shared how OpenSearch addresses data and analytics needs in both small and large-scale businesses. 

OpenSearch community leaders also joined the event. In line with this, the OpenSearch core team recognised multiple community contributors and maintainers of the open source project. In addition, community-driven talks highlighted some topics on how to contribute and how organisations can benefit by joining the project.

OpenSearch operators, powered by Juju


Mehdi Bendriss, Senior Engineer at Canonical, gave a presentation about deploying OpenSearch solutions in hybrid multi-cloud environments. The talk covered Canonical’s plans to collaborate with the community on the creation of an OpenSearch operator. By using operators, app administrators and analysts who run workloads across various infrastructures can automate repetitive operational work. Software operators codify the knowledge of an organisation’s operations team to manage, operate and secure applications in production.

Canonical has developed multiple application operators, known as charms, which are published in Charmhub.io. The charms use Juju, the Charmed Operator Framework. Canonical is working on OpenSearch operators and will publish them in Charmhub soon to make them available to the community. 

In addition, Canonical plans to publish the OpenSearch snap package in the Snapcraft Store. Snaps are a simple packaging format, distributed as a single file (squashfs), similar to a DMG on macOS. This makes installation straightforward on any snap-enabled Linux system. Snaps are also published to multiple channels that map to the different stages of the development workflow, providing a quick way to test and track the latest changes in the product.
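To illustrate how channels work (the `opensearch` snap name is an assumption here, since the package had not yet been published at the time of writing), the usual snap pattern is:

```shell
# Channels combine a track and a risk level: <track>/<risk>
# Risk levels, from most to least stable: stable, candidate, beta, edge
sudo snap install opensearch --channel=latest/edge   # hypothetical snap name

# Later, the same installation can follow a more stable channel
sudo snap refresh opensearch --channel=latest/stable
```

This is what lets teams track bleeding-edge builds on test machines while production machines stay on the stable channel.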

The combination of OpenSearch search and analytics technology and Canonical’s security, packaging and automation expertise will deliver a human-friendly, highly secure and robust OpenSearch on any cloud – be it public cloud, private cloud or bare metal.

Be part of the ‘search’ and open source innovation

At Canonical we support innovations such as OpenSearch, which carry the values of open-source technology and community.

Would you like to contribute to OpenSearch and other open-source projects? Here are a few things you should check out:

Kudos to the OpenSearch team, and see you again at OpenSearchCon 2023!

30 September, 2022 01:00AM

September 29, 2022

Ubuntu Blog: Ubuntu Arrives on Amazon WorkSpaces: The First Fully Managed Ubuntu VDI on a Public Cloud


29th September 2022 – Canonical is proud to announce the availability of Ubuntu WorkSpaces on AWS, a fully managed virtual desktop infrastructure (VDI) on the public cloud and the first third-party Linux OS available on the platform. Ubuntu Desktop’s availability on Amazon WorkSpaces was announced today at the AWS End User Computing Innovation Day in Seattle, WA.

Amazon WorkSpaces offers a fully managed and highly secure cloud desktop solution across a broad hardware spectrum, without the upfront costs of deployment and configuration. They enable remote developers to access high-performance desktops from any location, preserving security.

Until now, Amazon WorkSpaces offered the option of either Windows or Amazon Linux machines. With the addition of Ubuntu WorkSpaces, developers can use their preferred Linux operating system, with access to a wealth of open source tools and libraries in cutting-edge fields like data science, artificial intelligence / machine learning (AI/ML), cloud and internet-of-things (IoT).

“We’ve brought Ubuntu Desktop to Amazon WorkSpaces so developers can streamline the design, coding, pipelines, and deployment of Ubuntu-based workloads, whether instances or containers, all within the AWS environment,” said Alex Gallagher, VP Cloud for Canonical. “Also, Ubuntu virtual desktops on WorkSpaces enable IT organisations to quickly and easily provision high-performance Ubuntu Desktop instances, delivered as a fully managed AWS service. In the face of constant and increasing pressure to support the security and productivity needs of hybrid workers, that’s a win for IT organizations and their end users.”

Secure, performant Linux on demand

The flexibility provided by Amazon WorkSpaces means developers can spin up and tear down high-end development machines for resource-intensive workloads as and when they are required, without the overhead of purchasing and maintaining additional hardware. This increased efficiency can represent a significant cost saving for academic institutions and enterprises of all sizes.

DevOps engineers can switch between complex OS configurations to test multiple environments without needing to reconfigure their local system or provision multiple physical devices.

Remote workers can be provided with the tools, environment and permissions their role requires without increasing security risks. Ubuntu workstations benefit from all AWS security controls, with data encrypted at rest and connections secured by AWS pixel-streaming technology.

Company administrators gain comprehensive control over their developers’ desktop environments, from security policies to installed applications, as well as user access control through Active Directory. All Ubuntu WorkSpaces benefit from Ubuntu Pro, which includes expanded security patching for 10 years.

By leveraging Ubuntu WorkSpaces on AWS, admins can also avoid the overhead of mixed OS device management among developers.

Get started with Ubuntu WorkSpaces

Amazon WorkSpaces offers a wealth of best-practice guides and tutorials for Ubuntu WorkSpaces, including how to assign Amazon WorkSpaces to users from a directory service.

More information is available in the AWS guide to getting started on Amazon WorkSpaces.

Pricing for Ubuntu WorkSpaces depends on users’ hardware requirements. More information and detailed pricing can be found on the Amazon WorkSpaces pricing page.

Read more

If you’re interested in how Ubuntu Desktop can empower developers in your organisation, please get in touch.

About Canonical 

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

29 September, 2022 04:50PM

Ubuntu Blog: Build Ubuntu Pro Golden Image on Google Cloud

What is a Golden Image?

A Golden Image is a base image used as a template for your organization’s virtual machines, whether on-prem or in the public cloud. It streamlines software development processes, since mission-critical applications depend on a certified environment. Using Golden Images saves countless hours and resources by creating consistent environments for your development and operations teams. Golden Images not only help prevent human error but also standardize VM configurations.

Why we use Ubuntu Pro to create Golden Images

Among many other benefits, Ubuntu Pro adds security coverage for the most important open-source applications, such as Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL. This security assurance aligns well with the purpose of building Golden Images.


Create Ubuntu Pro Golden Image on Google Cloud

We will use Cloud Shell to create a Golden Image. Of course, you can use other tools, such as Packer; we may discuss those in another article. We will use Ubuntu Pro 22.04 as the base image. You can use any of the Ubuntu Pro images you find in your Google Cloud Console.

Once logged in to the Google Cloud Console, we run the following in Cloud Shell:

gcloud compute images list --project=ubuntu-os-pro-cloud | grep ubuntu-pro
NAME: ubuntu-pro-1604-xenial-v20220810
FAMILY: ubuntu-pro-1604-lts
NAME: ubuntu-pro-1804-bionic-v20220902
FAMILY: ubuntu-pro-1804-lts
NAME: ubuntu-pro-2004-focal-v20220905
FAMILY: ubuntu-pro-2004-lts
NAME: ubuntu-pro-2204-jammy-v20220923
FAMILY: ubuntu-pro-2204-lts
NAME: ubuntu-pro-fips-1804-bionic-v20220829
FAMILY: ubuntu-pro-fips-1804-lts
NAME: ubuntu-pro-fips-2004-focal-v20220829
FAMILY: ubuntu-pro-fips-2004-lts

We find 6 different versions of Ubuntu Pro images. We will pick Ubuntu Pro 22.04 for this demo. Let’s create a Golden Image from this official Ubuntu Pro 22.04 image:

gcloud compute images create golden-image --source-image-family=ubuntu-pro-2204-lts --source-image-project=ubuntu-os-pro-cloud
Created [https://www.googleapis.com/compute/v1/projects/[YOUR_PROJECT]/global/images/golden-image].
NAME: golden-image
PROJECT: [YOUR_PROJECT]
FAMILY:
DEPRECATED:
STATUS: READY

Done. We have created a Golden Image. You will find it in your image gallery.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/zxDs-1ycOLXjvZ82oIY0g62qK8WjQ5ZFdIb-DHnjiRjStojCJUdMyPwlKLGD_AiHkVYjAJcWaL3j1uBUjufxtno0Il57j6_nHXpxfxXjBJ5S6qTZMUjR-kjHJvVeExg5fo3XqsfTr8Kb0Zr4r_KKRsARJzWG3SjCLcD0EtzEpvcvldL2albVIPrlJA" width="720" /> </noscript>

Let’s check if this Golden Image contains the Ubuntu Pro license:

gcloud compute images describe golden-image
architecture: X86_64
archiveSizeBytes: '1000068480'
creationTimestamp: '2022-09-28T15:24:56.705-07:00'
diskSizeGb: '10'
guestOsFeatures:
- type: VIRTIO_SCSI_MULTIQUEUE
- type: SEV_CAPABLE
- type: UEFI_COMPATIBLE
- type: GVNIC
id: '550225037951072087'
kind: compute#image
labelFingerprint: 42WmSpB8rSM=
licenseCodes:
- '2592866803419978320'
licenses:
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2204-lts
name: golden-image
selfLink: https://www.googleapis.com/compute/v1/projects/confident-sweep-285415/global/images/golden-image
shieldedInstanceInitialState:
[...]

The license block “licenses: – https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2204-lts” shows that this image contains the Ubuntu Pro license.
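For scripted checks, the same verification can be done without reading the full YAML (a minimal sketch using the image name from above; `--format` is a standard gcloud flag):

```shell
# Print only the licenses attached to the image
gcloud compute images describe golden-image --format="value(licenses)"

# A result containing ".../licenses/ubuntu-pro-2204-lts"
# confirms the Ubuntu Pro entitlement is carried by the image
```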

Let’s use this Golden Image to create an instance:

gcloud compute instances create instance-from-golden-image --image=golden-image
Created [https://www.googleapis.com/compute/v1/projects/[YOUR_PROJECT]/zones/us-east1-b/instances/instance-from-golden-image].
NAME: instance-from-golden-image
ZONE: us-east1-b
MACHINE_TYPE: n1-standard-1
PREEMPTIBLE:
INTERNAL_IP: 10.142.0.45
EXTERNAL_IP: 34.139.200.39
STATUS: RUNNING

Then SSH into this machine to check its license:

gcloud compute ssh instance-from-golden-image
ua status

This machine is entitled to all the Ubuntu Pro features, such as ESM and Livepatch.

We have successfully created an Ubuntu Pro Golden Image. It’s time for the whole organization to use it.

Share Golden Image

In order for other users in my organization to use this Golden Image, I need to grant them the Compute Image User role (roles/compute.imageUser), which gives them permission to list, read, and use images. This follows the principle of least privilege: image users have no permissions to modify the Golden Image.

We select the Golden Image in the Image Gallery, and click ADD PRINCIPAL in the INFO PANEL:


Then enter the email address of the identity you want to share the image with (I entered my own email address for this demo) and select Image User in the Role list.


We may also grant users the Viewer IAM role (roles/viewer) for the image project to ensure that the shared image appears in the image selection list.
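The same sharing can be scripted instead of clicking through the console (a sketch; `alice@example.com` and `YOUR_PROJECT` are placeholders for your own user and project):

```shell
# Grant a single user the Compute Image User role on this image only
gcloud compute images add-iam-policy-binding golden-image \
  --member="user:alice@example.com" \
  --role="roles/compute.imageUser"

# Optionally grant project-level Viewer so the image appears in selection lists
gcloud projects add-iam-policy-binding YOUR_PROJECT \
  --member="user:alice@example.com" \
  --role="roles/viewer"
```

Binding the role on the image rather than the project keeps the grant as narrow as the least-privilege practice described above.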

That’s it. We created a Golden Image on Google Cloud and shared it with the users who need it. Next time we may discuss how to use Packer to create Golden Images, how to create a Golden Image with preinstalled applications, and how to create one from a running virtual machine. Stay tuned!

29 September, 2022 12:07AM

hackergotchi for Qubes

Qubes

The Qubes OS Project is now accepting donations on Ethereum!

We are pleased to announce that the Qubes OS Project is now accepting donations on Ethereum (Mainnet) at the following address:

0xDaa04647e8ecb616801F9bE89712771F6D291a0C
Warning: This Gnosis Safe Ethereum address supports ether (ETH) and all assets that fully comply with the ERC-20 standard (e.g., USDT, USDC, and DAI), but only on Ethereum Mainnet. Please do not send assets on any other network to this address, or else your donation may be lost. For example, please do not send assets on any Ethereum Layer 2 solution (e.g., Arbitrum, Optimism) or any sidechain (e.g., Polygon, xDai) to this address.

We have recently observed an increase in demand for an Ethereum donation option, both for ETH itself and for stablecoins like USDT, USDC, and DAI. As the largest smart-contract blockchain, largest proof-of-stake blockchain, and second-largest cryptocurrency by market capitalization, the Ethereum network and its native currency ETH are natural additions to our growing list of donation methods. Moreover, this new option allows users to donate any token they choose (including non-stablecoins!) so long as (1) the token fully complies with the ERC-20 standard and (2) the transaction is done on Ethereum Mainnet (as opposed to a Layer 2 solution or a sidechain). Please double-check that both of these conditions hold before sending anything to our Ethereum address, or else your donation may be lost!

As with our bitcoin (BTC) and monero (XMR) donation addresses, you can verify the authenticity of our Ethereum donation address via the Qubes Security Pack in the fund directory. We also provide detailed instructions for verifying the digital signatures.

As with all other donations, your donations on Ethereum will directly fund the Qubes OS Project. Since Qubes is free and open-source software, we do not earn any revenue by selling it. Instead, we rely on your financial support. If you rely on Qubes for secure computing in your work or personal life or see the value in our efforts, we would greatly appreciate your donation. Thank you!

29 September, 2022 12:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E214 Rentrée, Mais Uma!

We’re back, full of saudade and, like all kids returning from the holidays, with a bag full of stories. Miguel is eager to move house, and about the renovations… it’s going to be so good. Constantino has been recording podcasts in the most inhospitable places while fine-tuning plans for what will become a Linux centre in Lisbon. Carrondo, meanwhile, did plenty of travelling and discovered wakeboarding!!! You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links: when you make a purchase through those links, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

29 September, 2022 12:00AM

September 28, 2022

hackergotchi for Purism PureOS

Purism PureOS

Reclaiming Digital Privacy

The advancements of the digital age have gifted us with the ability to search and access the world’s information within a few seconds. However, Big Tech companies still make it hard for users to control their own privacy when accessing digital products and services. The problem is deeper for our kids or elderly parents, who […]

The post Reclaiming Digital Privacy appeared first on Purism.

28 September, 2022 05:32PM by Yavnika Khanna

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical Ubuntu’s enterprise solutions in 2022

Ubuntu continues to be one of the most popular operating systems both in the public cloud, with over 100,000 Ubuntu virtual machines created every day, and in the on-prem cloud, where Canonical accounted for roughly 40% of new OpenStack deployments over the last year. Canonical is also a protagonist of the cloud-native world: over 800,000 Ubuntu images are downloaded from Docker Hub every day.

With the release of Ubuntu 22.10 Kinetic Kudu imminent and the Ubuntu Developer Summit just a few months away, we have taken a look back at the solutions and features Canonical launched in 2022.

Beyond this detour through the last year, we invite you to join us for the webinar on 12 October to learn what Canonical has in store for 2023.

Ubuntu 22.04 Jammy Jellyfish

Ubuntu 22.04 Jammy Jellyfish

In April Canonical released Ubuntu 22.04 Jammy Jellyfish, the latest version of the popular Linux operating system for desktop, server and IoT. 22.04 is a Long Term Support (LTS) release, meaning that the base packages and supported applications will receive security updates and kernel Livepatch for up to 10 years thanks to Extended Security Maintenance (ESM).

On the server side, the headline features are improved performance and support for ARM servers, allowing companies to achieve even higher performance using AWS Graviton or Oracle Ampere virtual machines. Ubuntu is also the preferred platform for privacy-preserving applications, thanks to Ubuntu Pro’s support for confidential computing services from the major cloud providers.

On the enterprise front, Ubuntu Desktop 22.04 introduced a new Active Directory client that supports Group Policy Objects, privilege management and remote script execution, along with initial support for authentication via Azure Active Directory.

As for IoT, security was the priority in the new Ubuntu Core release, with the addition of features such as full-disk encryption via TPM and Secure Boot. Ubuntu Core 22 also introduces remodelling, a new feature that allows any element of the model assertion to be changed. Brand, model, IoT App Store ID and version are some of the aspects that can be modified, allowing resellers to rebrand devices.

OpenStack and Kubernetes

Canonical Kubernetes and MicroK8s

Canonical continues to be one of the leading providers of OpenStack and Kubernetes clusters, and our solution is currently used by many medium and large organisations in the public and private sectors, such as the GARR Consortium, Fastweb and many others.

At the end of March Canonical released OpenStack Yoga for Ubuntu 22.04 and 20.04. This new version of OpenStack lays the foundations for highly performant next-generation infrastructure, using SmartNICs and integrating them with the Neutron Open Virtual Network (OVN) driver. With OpenStack networking components running on SmartNICs, companies that need a high-performance computing cluster will benefit from lower latency, higher throughput and better Quality of Service (QoS).

Ubuntu has long been the most popular host operating system for Kubernetes distributions on the major public clouds. Moreover, 2022 was a year full of announcements both for MicroK8s (our low-touch, non-customisable distribution for edge and IoT) and for Charmed Kubernetes (our customisable, operator-based distribution). The team focused on improving security controls and expanding the partner ecosystem: MicroK8s gained strict confinement and an add-ons system now open to everyone, while Charmed Kubernetes offers compliance with the CIS security control standards and support for NVIDIA DGX hardware for customisable AI-as-a-Service solutions.

Containers, AI and Machine Learning

.NET containers on Ubuntu

Ubuntu is now the platform of choice for data scientists and machine learning developers, thanks to the large number of available libraries and our support for NVIDIA GPUs, currently the industry leader. This year Canonical focused on providing an open-source solution for harnessing Big Data with Charmed Kubeflow, the leading solution for machine learning operations (MLOps), automated thanks to Juju.

Canonical also continues to be at the forefront of the cloud-native world, publishing a wide range of OCI images in the Docker container image repository, which benefit from timely security updates, fixes for critical CVEs within 24 hours and optional LTS support for up to 10 years.

2022 also saw the announcement, together with Microsoft, of the availability of ASP.NET and the .NET SDK and runtimes on Ubuntu 22.04 through native packages, as well as ultra-optimised OCI images. These “chiselled” images (so named because everything not needed to provide a minimal Ubuntu OCI image has been chiselled away) respond to developer feedback about reducing attack surface and image size, without sacrificing Ubuntu’s stability and familiarity.

Want to know more?

To learn more about Canonical’s solutions from the last year and what is planned for 2023, join us for the online webinar with our Italian team on 12 October.



28 September, 2022 05:01PM

Ubuntu Blog: The benefits of running Microsoft SQL Server on Ubuntu Pro

Since November 2021, Canonical and Microsoft have been offering a jointly supported Microsoft SQL Server on Ubuntu Pro solution. With this offering, you can set up an optimised configuration of SQL Server on Ubuntu in a few steps.
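As a sketch of those few steps (the repository URL and package names follow Microsoft’s published pattern for SQL Server 2019 on Ubuntu 20.04 — check the current Microsoft documentation for your Ubuntu and SQL Server versions):

```shell
# Import the Microsoft package-signing key
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# Register the SQL Server repository for this Ubuntu release
sudo add-apt-repository \
  "$(wget -qO- https://packages.microsoft.com/config/ubuntu/20.04/mssql-server-2019.list)"

# Install the engine and run the interactive setup (choose edition, set SA password)
sudo apt-get update
sudo apt-get install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup

# Verify the service is running
systemctl status mssql-server --no-pager
```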

As database professionals, we must hold our databases to the highest possible standards of security and availability. In this blog, we will detail how the combination of SQL Server and Ubuntu Pro can help you achieve those goals.

Improved security and easier certification path

According to IBM’s 2022 report, a single breach costs around 4.35 million dollars. Equally worrying, the same report outlines an average of 277 days to discover and contain a data breach. It’s therefore becoming increasingly urgent for organisations to secure their most valuable asset: their data.

Microsoft SQL Server is consistently among the least vulnerable databases on the market. The following graph shows the number of vulnerabilities found in different database engines over a 9-year period. Microsoft SQL Server clearly holds the crown of the least vulnerable database:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/Pi5J1YWFERW7Jv8pemqSUJHHIkSiI8gO73V6Yigeklmn6zI2OzTu8uySGglsRD_UzwNUieLfAlOeqf5nmeiYiRTUrdLCb50PuKa7uNrJ1Si6m6yTJK0af_Dl0vNVbn-7XLGLZnvYwQGjqFsxm0LeFn78odfChfWAQDkw7BSTCu21e28Pwr2qxEE" width="720" /> </noscript>

However, having a secure database is not enough to ensure the security of the whole deployment. Once a malicious user gains root access to the database-hosting operating system, it’s only a question of time before they gain high privileges to the database itself.

According to the CVE Details database, the most common vulnerability type is code execution, accounting for around 25% of all reported vulnerabilities. This type allows an attacker to execute arbitrary code on the target machine or process.

Ubuntu Pro helps you improve the security of your whole deployment by enhancing your security posture on different fronts.

First, Ubuntu Pro widens your patch coverage to more than 25,000 packages (up from 2,300 packages in Ubuntu LTS). Second, it provides you with Expanded Security Maintenance for an additional 5 years (so 10 in total). Ubuntu Pro therefore helps organisations reduce their attack surface and gives them the freedom to choose when to upgrade.

Ubuntu Pro offers tools to harden your OS following the most stringent compliance regimes and security standards like ISO 27k, PCI, CIS, DISA-STIG and FedRAMP. If your company is running database workloads in regulated environments, then using Ubuntu Pro will help you pass the relevant audits and acquire the needed certifications.

With zero-day exploits nearly doubling in 2021, and with 80% of public exploits published before their CVEs, it’s becoming critical to patch vulnerabilities as soon as they are known. Through kernel Livepatch, Ubuntu Pro ensures a timely roll-out of critical patches without the reboots that might impact your database availability.
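As an illustrative sketch (the token value below is a placeholder; the commands come from the Ubuntu Pro client that ships with recent Ubuntu releases), attaching a machine and enabling Livepatch looks roughly like this:

```shell
# Attach this machine to your Ubuntu Pro subscription.
# YOUR_TOKEN is a placeholder for the token from your Ubuntu Pro account.
sudo pro attach YOUR_TOKEN

# Enable kernel Livepatch so critical kernel fixes apply without a reboot.
sudo pro enable livepatch

# Check which Pro services (Livepatch, ESM, etc.) are now enabled.
pro status
```

Once Livepatch is enabled, critical kernel fixes are applied to the running kernel, so a SQL Server host does not need a maintenance window for every kernel CVE.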

Besides improving security and compliance, Microsoft SQL Server deployments on Ubuntu Pro offer support and availability enhancements.

Improved support and availability

Together, Canonical and Microsoft provide supported configurations to run a highly available Microsoft SQL Server on Ubuntu Pro. Both companies commit to providing 24/7 support for those configurations.

You can use the same flow when opening support tickets in Azure to get help. Behind the scenes, Microsoft and Canonical coordinate to promptly provide you with the needed support.

When running Microsoft SQL Server on Ubuntu LTS, you will need to seek support from the community without SLAs on resolution time.

Conclusion 

In summary, if you are running a production workload using Microsoft SQL Server on Ubuntu, you should definitely consider Ubuntu Pro. If you are running a regulated workload, Ubuntu Pro is an even better fit.

The good news is that you can start using SQL Server on Ubuntu Pro with just a few clicks.

28 September, 2022 09:48AM

hackergotchi for Pardus

Pardus

Diyanet İşleri Başkanlığı: A Pardus and Open Source Migration Success Story

The Diyanet İşleri Başkanlığı (Turkey’s Presidency of Religious Affairs) uses the Pardus operating system and LibreOffice on 10,468 computers, covering almost all of its central and provincial organisation. Having carried out its Pardus migration in partnership with Profelis, the institution now deploys Diyanet Pardus, a version customised to its own needs and usage habits.

İbrahim KÖSE, Head of the IT Department at the Diyanet İşleri Başkanlığı, on the Pardus and open source migration success story

1. How did you come to the decision to use open source software?

In 2019, our Human Resources department requested that a dedicated e-mail account be opened for every member of staff. We began creating e-mail accounts for all personnel on the foreign commercial e-mail application we had been using. At that point, the vendor reminded us that we would have to bear a certain cost for each account. This prompted us to investigate open source e-mail applications.

Our first serious open source decision was the e-mail application. We have been running our e-mail services on free and open source solutions since July 2019. Experiencing that an application could be run on open source alternatives in this way led us to look for open source alternatives for other applications, and for the operating system, as well.

In particular, the decision to use e-mail intensively, with accounts opened for all staff, put the cost on the agenda, and the cost of many other applications and software first pushed us to look for different solutions. As an institution, we reached a consensus that using open source applications was beneficial from strategic, security and cost perspectives.

2. What does the system topology at the Diyanet İşleri Başkanlığı look like? Where do Pardus servers and clients fit in this topology?

80% of our servers are Linux-based (Ubuntu, Debian). For more than three years we have been using Zimbra as our e-mail server and Samba Box, a product developed by a local company, for directory management.

3. In which internal processes (application servers, terminals, office software, firewalls, etc.) did you carry out the migration to open source software?

We have already been using Linux-based solutions on 80% of our servers for more than three years. In addition, for more than two years the Pardus operating system and LibreOffice have been in use on 10,468 computers, covering almost all of our central and provincial organisation.

Beyond these, many open source applications such as Nextcloud, Jitsi Meet, Git, Redmine, Zabbix, Zimbra, LimeSurvey and Moodle have also been in active use at our institution for about three years.

4. At what stage of the Pardus and open source migration is the Diyanet İşleri Başkanlığı? Do you have new plans for the coming years?

Installation of the Pardus operating system on 10,468 computers across our central and provincial organisation has been completed, and the systems are in use. The roll-out is complete.

In the coming months we plan to move to a Kubernetes container architecture, and the MS SQL databases used by older applications will be migrated to PostgreSQL.

5. Did you work with Pardus business or migration partner companies during this transformation?

Yes, we worked with Profelis, a company in the TÜBİTAK ecosystem that was recommended to us by TÜBİTAK officials, and we continue to work with them. Together with Profelis, we adapted Pardus, which is developed by TÜBİTAK, to fully meet our users’ habits and our corporate needs, and released Diyanet Pardus. Support for 197 peripheral devices (scanners, printers, etc.) suited to the institution’s needs was embedded into Diyanet Pardus.

6. What benefits did the migration to open source bring? How much did you save in Total Cost of Ownership (TCO)?

Excluding the other open source applications, not having to purchase Office and Windows licences alone saves 62 million TL per year.

7. Did you face difficulties in migrating to open source? Was there resistance from employees? How did you overcome it?

Yes, some users worried about losing their old habits and initially saw moving to a new interface and new software as a loss of comfort. However, once we provided training and they saw that in practice only the interface and the position, names and shapes of the buttons were different, and that the software was just as easy to use, the resistance faded. When these trainings also emphasised how important the migration to Pardus was in terms of security, strategy and cost, the rate of buy-in increased.

The fact that unit and institution managers were firm about the migration decision and made it clear there would be no going back also greatly reduced resistance.

8. Is Pardus your only open source software, or do you use other open source solutions as well?

Many open source applications such as Nextcloud, Jitsi Meet, Git, Redmine, Zabbix, Samba Box, Zimbra, LimeSurvey and Moodle have been in active use for about three years.

9. Looking specifically at Pardus, what are the advantages of using software that is both local and open source?

Being able to develop, update and customise it as required; lower cost; and being strategically sounder and more secure.

For more detailed information about our migration, including documents and videos, you can visit https://pardus.diyanet.gov.tr.

28 September, 2022 08:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Launch Ubuntu 22.04 Desktop on Google Cloud

Ubuntu 22.04 Jammy Jellyfish was published in April this year. Ubuntu users are excited about its advanced desktop features, such as support for Wayland and GNOME 42 (I will use SLiM as the display manager this time, since I won’t be playing any games in this demo). But recently, some Ubuntu users have said that they can’t launch the Jammy Jellyfish desktop in Google Cloud by following my previous blog: Launch Ubuntu Desktop on Google Cloud. That’s too bad, because Google Cloud gives Ubuntu users such a seamless experience through Chrome. Don’t worry: you can continue using Chrome to access your Ubuntu desktop on Google Cloud. Just follow this article.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/RsKMsgPupSewjRFBatU3I7vdxNaEzN5XvMHeTrXNcKuDxlaj_wEfoffmtjdVcpdKOW7j-wyuWveXkskxltZoAzlHGTMA8jlW4DscErI6h87TOhBFrjYTt8GYEIAL8T1927KUI4JCMJSygi6wOzWVTDvFVo0sjj2ESx_sxzsKd2NgB6zPrUAzGA2G9w" width="720" /> </noscript>

We need four steps to set up an Ubuntu virtual machine with a graphical interface, just like the desktop environment on your own computer:

  1. Create an Ubuntu VM instance on Google Cloud.
  2. Install and configure the Chrome Remote Desktop service on the VM instance.
  3. Set up an Ubuntu desktop environment in the VM instance.
  4. Connect from your Chrome web browser to the desktop environment on the VM instance.

Before you begin:

  • Make sure that you selected a Google Cloud project for running this VM instance.
  • If you don’t have the Google Chrome browser installed, you can install it from the Google Chrome homepage.

Create an Ubuntu VM instance

In this step, we will launch a VM instance in Google Cloud. The default e2-medium (2 vCPU, 4 GB memory) machine type works fine for tutorial purposes. If you want a more performant machine, there are a variety of choices in Google Cloud.

1. In the Google Cloud Console, go to the VM Instances page:

2. Click CREATE INSTANCE.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/nvl8i4SpHPYXIcmo42Ni3C_33ia_B1FrYDyylJFdXU8ziXbv85WGfKa6r9VdP294UqVF39i5sqBl9mxGP39Yrcq7icKEEt3KuLBVTG_tn9TkGaHvX8YE1uVv5lANBJ2AnioGuNpmNtXAhNjRzbKscw_7wPgy3sP5riLIc_BYBEZ_ps-QAbijKKLGrQ" width="720" /> </noscript>

3. Set the instance name to a unique name of your choice. If you don’t have any instances in your project, Google Cloud will suggest the default name “instance-1”, which I am fine with this time.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/rfbWZabWGfczoDLeBNgqc-euGhTX8LcbR9ZkyN-eTJ2LxFM5lWi4EE3y9jW2lJeWvtLHaWTPcWj0bqmrIfikfl8bZy4lNtJLM9s2mbc8rBe8iUzpyEeqOMMQAcfr95ri-GMo2VAOWCPSGYR1TGwjy-dGqrZaGB0IrfOfbZIvxE-aK2LHW2mVVQvfgA" width="720" /> </noscript>

4. Select a region and zone you want to run your instance.

5. Scroll down to the Boot disk options and click Change.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/qqZXGqKXm9p9kaLm1k6BwcDPnA1Xm6URaHUv_KNkKIXeHOos9PMsFN8hWzC66DfsVAlA_mFE91hXoSH1hd_h4LEtj9MKfz5IiCUf-iZ6O-vb0ECmOUwU9_4o-eF_UiybcBvpx_2tFmOmGd4iCjFI3F0_imK43-w1wfGnflkLmATY6kEfCRTddJJfbw" width="720" /> </noscript>

6. In the Boot disk pop-up window, under Operating System, select Ubuntu Pro from the drop-down; under Version, select Ubuntu 22.04 LTS Pro Server; keep the remaining options at their default values and click SELECT. Ubuntu Pro ensures you receive the latest security updates, which will be useful when we install production applications.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/mCpU3qANn10IAu63bRmh7uwKAD7XxYP8MsQ4doUwussEdta1xs-j5shQgq-PUAzzaKaSwI2C_F9MB6lGd55lJ0hcj33WiRH8RgmdaRFJBQZu5eF2hfL5i-IT1ib-bRoDgbPmwJancS_EfPXxAaCS4-pkXGjD7vHEk4YDl3KzD723fSRFRxMDeO-RBA" width="720" /> </noscript>

7. Click CREATE to create the instance.

8. In less than one minute, you will be able to see your Ubuntu instance in RUNNING status. You can click the SSH button in the instance list to connect to your new instance.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/nPRJwYxeRgtYVIM-mZAHoLOcWusPeSPkJvjo4IYGjQSr2Kp2Ge-zx-BwWn97ls5hJyhTZqHKaVTyuwsM1BOAev7fq5RBpHheilxXp6V_8rCZdy89qxvGqgDofikZImpFnZvzSnW-nZGcTN9WQmj33IzDAb94RIXT48PBF54dCsCOVy0rNy3HbCw6mw" width="720" /> </noscript>

If you prefer to start a VM through Google Cloud Shell, you can use this command to achieve the same result:

gcloud compute instances create instance-1 --zone=us-central1-a --machine-type=e2-medium --image=projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-2204-jammy-v20220923

Install Chrome Remote Desktop on the VM instance

The next step is to install Chrome Remote Desktop on the VM instance. Download and install the Debian Linux Chrome Remote Desktop installation package:

wget https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb
sudo apt-get install --assume-yes ./chrome-remote-desktop_current_amd64.deb

Set up a Ubuntu desktop environment in the VM instance

Now we need to install a desktop environment and a window manager so that Chrome Remote Desktop can communicate with the VM instance.

1. In the SSH window connected to your VM instance, refresh the repository and package lists, and perform the necessary upgrades with the following command:

sudo apt update && sudo apt upgrade

2. Install and set up a display manager. Here I use SLiM because it is lightweight.

sudo apt install slim

3. Install the Ubuntu desktop environment (the installation may take around 20 minutes):

sudo apt install ubuntu-desktop

4. Once the installation finishes, reboot the machine:

sudo reboot

5. You will lose the connection when the machine reboots. SSH into the virtual machine again and start SLiM:

sudo service slim start
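Optionally, if you want SLiM to start automatically on future reboots rather than starting it by hand each time, you can enable its service (a sketch assuming the systemd setup that Ubuntu 22.04 uses):

```shell
# Start SLiM now and register it to start at every boot
sudo systemctl enable --now slim
```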

Configure the Chrome Remote Desktop service and Connect to your Ubuntu Desktop

To start the remote desktop connection, you need to have an authorization key for your Google account.

1. On your local computer, using the Chrome browser, go to the Chrome Remote Desktop command line setup page:
https://remotedesktop.google.com/headless

2. On the Set up another computer page, click Begin.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/79AvTd7Psz1WZRoZamyqfAZrmlkCxvUuKFqImdhmM3hCNTGw7_fv_tF74k8kKHVOLj72Ak6oV6qKqdK0SLN23c6pgq7gpx_Nan-T3ycnOE1J9-waB1PO718w6fNq2C8NTy8lrZCKtHQXLGovZSCAUBZxeM-yYPPpMSInS6FBRU1HMFQ57aUqioMx1Q" width="720" /> </noscript>

3. Click Next. You already installed Chrome Remote Desktop on the remote computer in step 2.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/giK9o7LtkFv1CF8PmnudNohqLKgltm0kWg5onofGUkaFtq0_EYeCNVKfrI2K5lT8EjxKduXDFHYvznMSyrgRF9mFZdm5gjLt_K0BDlz08PudExggQyu__NeUQfXaqiapiraZderDQlOGL5rMdjLe-Le94-6JWdTVREyVQQmgTIDMbJ2KjsVDhHp2bA" width="720" /> </noscript>

4. Click Authorize.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/_y0WksSEASG5TK8npT3TyKkszNIFP7Mu3NMSzo28BYZrNvAc9SOzzPRLs0PznHmeaLP0D5VswZ3w_tXaRMbdkmphg3OuXafFR1uBBivm-4s8SnsBRufxYhsXQDMSE7mzQKgfpTcYNtlJRKtVGTdT9QvXxgaft-fVUb4PRh-tANw3h49TRBGC6nC8zw" width="720" /> </noscript>

5. Now you get the command to set up and start the Chrome Remote Desktop service on your VM instance. Copy the command for Debian Linux.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/yhlf11afI1SfASSwmbE9JDBcwkGiWnfhdRONHYdf5Dz8_15zmvMtKyqwW9idAYt1K6ksEHbXNHKGes60nFlkmXSo9ynA_3V5CNo8hquwIK_ZbiW3IHGzT1rwGtycI8y-1L59hHnrfvGCIIid0NnkHiIJ4o1Gk6_BYypJEZ5DyTrxQn47ovMbvdIH5w" width="720" /> </noscript>

6. Paste the command to the SSH window that connects to your VM instance. Run the command.

7. Enter a 6-digit PIN when prompted. This PIN will be used when you log into the VM instance from your Chrome.

8. On your local computer, go to the Chrome Remote Desktop website. You will find your Ubuntu desktop listed in the portal.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/xHMZfc8egButCfgyO98mz6e1x-YHk4_-Jd6xGQ-yjxxnGfGCpW-Pz84tA99QCeefmFPA-hGTnc4SFQZ5qz3oNIrx5SzuSLB0NFK6dCpvhj6XBOEtdCuj5TLQdF4CvfUmS249AInpom4ujbPy4zEIiFXfn73giYvW4VZ3GjdHy8x8hJqzuRPEvg46Lg" width="720" /> </noscript>

9. Click the remote desktop instance. In this case, it is “instance-1”. And you will be prompted to input the 6-digit PIN you set in step 7.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/qv190b-HUpqV0XM-5iKZN34pM3bDCpOaBkpS7-VLJUX7GKmKhO47mTbC6zU48bEnHtXLwYc8fp9SgWuvxWNu5f9UP6UwRInO7tYPcjDqQu4o-v2YDiBctNuExvqaqTYTWN4bO5Pi05iDQbDmaMN4YHvx6XAU3B0gp5r3V4vtmuo1lJjcJzr77sEv6A" width="720" /> </noscript>

10. You will see this page; just cancel it.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/UApLzW1DTkVDMxprz0c4GprCMIVhqmcwueXeNNC1-YFJgE-jDZsvrxxizgHNfRgIuWyfLBefBoAMZybbFGSFkdBwVB41f8jCwJKSyB1hGUfloeZwelma-K6nyJLKppScWt0x6xygByPyu01o72fELCJtOpChRr0f_0f9JcdBrbNoATJLGh6eUcXleA" width="720" /> </noscript>

Now, Ubuntu desktop is fully functional in your Chrome web browser:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/rUXYOApbN8gPCWnZFN3QI524yfE49JFDquR5uxIF6tlnIRz0LgrB5CRMpZibeCrNcKcIyYIcJxNn8RktaWJpmBCalcKUrxckK58MH35CMJJulZcOH2ycWN7zTFftU70RbS4iyNgkxd_QaDb8Urq8twk0XOrh1jLyRKoN3Txy7N_WbvQZU_m4VqiTFQ" width="720" /> </noscript>

Install an app on your Ubuntu Desktop

Let’s install the Chromium web browser, the open-source version of Chrome, from the Snap Store. In your terminal, run:

sudo snap install chromium
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/0r2GzrJPimvj1syuojZ7d7L8AM_Qs4I0sdC8Pkb-bM4LWEbzYg1IzZ_V6kiVaGLuDriUBo6U6HS57bq7fqBqCmmf2JDW3At36gkOfv3QSVPcRg_A8mxOxjrt2e6tkIccP1DLUKiuWRK_ZaY0scS1mHwEQ5X3fxx0UUabrdwwjBnR1WNKV5z6hve0Xg" width="720" /> </noscript>

You can use Chromium “inside” Chrome in less than a minute:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/P8nywYkP9wPEeEIllLNYQbcd2sJQN4ll1QF9GU3Q4-wUbjV0gGEyDr_qI-QAZZN-xp2ExyV8U65ZENyw6Ee9bYXpOOgNszLGpYgRnor33mF6Q4UnWIvI_1-pbx5q6vsqm-bozUKVNO1W-hqQZttjWO4SLacT8YcG4JHTbGnU1oxkEa7urp483oenvA" width="720" /> </noscript>

Clean up

To avoid incurring charges to your Google Cloud account for the resources used by this tutorial, you may delete the VM instance.

In the Cloud Console, DELETE this VM instance:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/sAN0e0LUpyYXkPU1ykFudWqwTIkD24jHtRLrgdS3Md24oNRpDycRS0yovsGzwRgXixJOiOl3xLUBoJd1vEbh1Swidc6mwOGNtz4_yjxrnkTz7MjvYha7TFcybOqPupTE83scUR1XxflL_fiVY9wLBAXxz_kJMte-bhVlSljdp4Pxu6eIlPdBf6VLLA" width="720" /> </noscript>
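If you prefer the command line, the same clean-up can be done from Cloud Shell (assuming the instance name and zone used earlier in this tutorial):

```shell
# Delete the tutorial VM; --quiet skips the confirmation prompt
gcloud compute instances delete instance-1 --zone=us-central1-a --quiet
```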

Then on your Google Chrome Remote Desktop, DELETE the “Remote device” (“instance-1” in my case) from your Remote Devices list.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/O7h8KnAntHxJS0jg0__8u6s94L-vTyAOxN9XHr2Z0gkuK9WC0KM52QVoxZhUI6w-5FXL2Mi8ZABHUy6WZD6hzqUO3_Bgksp40v0I_nisE3bmvdzTog2eQU4toaEZ7Gvud0Cc6wr-m8kAaXGR5AT9dNebdnQ1ooUfHqdmrNElKTjAlzoV3sy9OneUbg" width="720" /> </noscript>

That’s it. Enjoy your Ubuntu 22.04 Jammy Jellyfish desktop anywhere, anytime!

28 September, 2022 02:10AM

September 27, 2022

hackergotchi for Purism PureOS

Purism PureOS

Introducing PureBoot Basic

PureBoot is our high-security, tamper-detecting boot firmware. With the release of version 22 we have added a new feature called “PureBoot Basic” that lets you optionally disable the tamper detection, leaving you with a clean, simple, and still powerful boot firmware with more recovery options than a traditional coreboot BIOS and GRUB. Normally when you boot PureBoot, the Librem Key […]

The post Introducing PureBoot Basic appeared first on Purism.

27 September, 2022 07:57PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical Announces Ubuntu 22.04 LTS Support for FlexRAN Reference Software

Real-time kernel support delivers performance for 5G network architectures

Canonical is thrilled to announce today the availability of Ubuntu 22.04 LTS with real-time kernel support and optimizations for Intel’s latest FlexRAN Reference Software. Designed to meet telecom network transformation needs for 5G, the Ubuntu 22.04 LTS real-time kernel delivers performance, guaranteed ultra-low latency and security for critical infrastructure. With this release, Canonical strengthens its commitment to helping telecommunications service providers benefit from the technological and economic advantages of open-source software in a secure and supported manner.

Building the 5G networks of the future

Telecom operators need to rapidly evolve their network architectures to process high-bandwidth, real-time data. Intel’s FlexRAN software is at the heart of the access network transformation and is used to deploy 4G and 5G cloud-native virtualised radio access networks (vRANs).

RAN virtualisation provides operators with increased scalability and flexibility for their radio access networks. With Ubuntu, companies serving the OpenRAN ecosystem can get the latest kernel versions (5.x) to achieve more speed compared to other supported Linux distributions. OpenRAN solutions require low latency and high throughput and are built based on custom Layer 1 or Layer 2 (L1/L2) implementations or using specialised SDKs. 

“Ubuntu 22.04 LTS, with Real-Time Kernel, was engineered from the ground up to enable unrivalled performance and efficiency gains for Communications Service Providers. I’d like to thank our OpenRAN ecosystem partners, especially Intel, for their continued support in driving Cloud and Edge infrastructure innovation on our most recent journey to enabling Intel FlexRAN Reference Software on Ubuntu,” said Arno Van Huyssteen, CTO – Communication Service Provider, Canonical.

“The optimisations we made to increase OpenRAN performance are a major milestone for our team. Running FlexRAN on Ubuntu allows mobile operators to use familiar upstream Kubernetes, automation frameworks, and open-source tools, and see performance gains from the latest real-time kernel package tuned for telco. Now they can get a full stack from bare metal to RU, CU, DU and RIC which is tested, validated and supported,” said Maciej Mazur, Product Strategy Manager for Telco at Canonical.

A full stack for telco

Besides offering real-time kernel capabilities in Ubuntu suited to today’s 5G networks, Canonical offers a full stack of supported open-source technology building blocks, providing a unified approach to telco transformation.

Canonical conducted detailed performance testing, with OpenRAN on bare metal as well as on Canonical’s MicroK8s and Charmed Kubernetes distributions. The tests showed that Ubuntu delivers better results than any other operating system thanks to the company’s tight collaboration with upstream kernel development and extremely fast patching and release cycles. Telecom providers can deploy containers without compromising on performance or security. 

Canonical also offers support for various accelerators, such as FPGAs, SmartNICs and GPUs. Telecom operators can manage thousands of sites in a fully automated fashion with server provisioning tools like Canonical MAAS (Metal as a Service). MAAS supports API-driven rebuilding for sites with various configurations. The solution provides tenant separation for MORAN and MOCN use cases.

Get a demo at MWC Las Vegas

Canonical and Intel will provide a joint demo of the Intel FlexRAN SDK at the upcoming Mobile World Congress in Las Vegas. Request a meeting at the booth. 

Additional resources

27 September, 2022 01:25PM

Ubuntu Blog: IoT project lifecycle: App-centric software development [Part II]

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/vqq_iSYXlFuirpXYNtUHUU3MwvrTIHIX76AcYmyk7tIZJDdeYVgsMjck5Wq51gIf9PoRNKriwmyivnOAaBpKaOfQKr_48IzD6xaSfUxzFTI7VeQ2rXBYZBH-LabNk5AYmF7RXOeujLc_olh7BZOpaEqg2jAIUr0WzL7DTLhT57henYXnwHEwKbnceA" width="720" /> </noscript>

The traditional embedded Linux development model ties applications to the OS. Such a constraint means apps have to target a specific release, which lowers development velocity. Furthermore, broken upgrades in one part of the device may affect refreshes in the rest of the OS.

On the other hand, embedded developers are increasingly looking at open-source software to enable rapid app-centric software deployment and global collaboration. 

Does Ubuntu offer a production-grade platform suited for the modern app-centric world of IoT devices? Let’s find out.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/7Hp3nt5tZOWPr_qyl2dt18E46Hk2YCcSo9Rou5ffTkXQmRjQs6Cxf1YiQmhyfSHAmJr7UO9v8yIKXeTdqlbaDtFkoSUfUlrEDkhcOLwE9WVLP4hTRmB7bwO5Rk9fselguOZOSgpJCjjb6X3WnWCCnS6fMYrrAK4pYr-dEU1fTB7PptZbcD16NC2rqw" width="720" /> </noscript>
Ubuntu is the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots.

Software in the age of app stores

Software was scarce and expensive in the early 90s. Today, the nature of software keeps changing: it’s vastly more complex, but also more common, and it moves faster than ever. Whereas six months used to be a fast cadence for software releases, it’s grossly inadequate by today’s app-centric standards.

Developing robotics and intelligent devices entails using apt-get or yum to get curated software from a distro. But this is only a minor part of what developers do today in IoT, as they often build the latest version from upstream or GitHub. As the pace and complexity of software have levelled up in the app store era, software moves too fast for distros; it now happens at the speed of Git.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/Ej5_CZlBaqGZXuW-Trxh2iRdQKBR1abvGgP8QKSLym5KWuZnyIOlEZxYgFJviYqeuF4gDNqLJi0hWmMbcIf5w6o0qdqH1DM_rzQDGkAm1ZLBP6KjioA6svXgtX-7UJjyHQy6gT0k1cskAcC6RvCC4NQN2ons2jYaqRJk4xw8ObrFD3zzjgafW6Z38g" width="720" /> </noscript>

The advent of app stores further changed personal computing: we now consume software from many more parties and have strict expectations around trust. For instance, when downloading a game, we don’t want it to read our address book or access our microphone. Similarly, software development in the personal computing world is application-focused, as engineers need not worry about the underlying processors when developing a mobile app.

For too long, we didn’t have a Linux equivalent in the IoT world. Developing an industrial or embedded app the same way you create one for mobile required a leap of imagination.

Standard Ubuntu can curate only a portion of what those innovators need at the speed of a distribution. This informed our vision to accelerate app-centric software development by reducing the work required to package and publish software in the IoT world. The challenge we set out to solve is how to support this new wave of software via Ubuntu while preserving integration, trust and maintenance.

The snaps packaging system was born out of the vision of delivering software at the speed of GitHub, with the convenience of apt-get, while preserving a strong security posture. With snaps, developers can deliver software in a cloud-native way to IoT devices running Ubuntu.

Security in the app-centric age

In the traditional Linux, Windows, and Mac OS-type environments, we treat every piece of software the same. When we apt-get install something, any package can write to any file. 

In line with the modern era of app stores, however, your machine should only trust a piece of software with the data you feed it. With snaps, each piece of software sits in a box and can only see and write to places inside its confined environment, unless explicitly instructed otherwise.

Rather than limiting the benefits of containerisation to the app level, we brought snaps to the OS layer. Ubuntu Core is a snap-only flavour of Ubuntu, containerised and shipped using the new packaging format. The system is segregated into software blobs using kernel primitives for isolation and confinement. Ubuntu Core containerises the Linux kernel and run-time environments, cleanly decoupling the base system and OS from the installed applications.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/4lLnDbFZUufo3noTs7mPHPHxoemuIkisxjDAN6MmwDNR_FqVYWgLO5AFYCc8atsgCnmHL-PJsDf3Hjx3Good4u6Jajk-fdQJkAFrXz2vZIggbWU2tJ2_2ljyf3Ypc9Jq8_MrLrlbY6Tt3t4nLLPQDKBSiN904CAjknttaWWrriYajM9trApFRWBbqw" width="720" /> </noscript>

Apps running on top of Ubuntu Core go into boxes: each becomes a single compressed file containing everything it depends on, with the ability to write only to strictly assigned space. Container primitives lock down and isolate the different features, with applications running in a security sandbox by default, secured by kernel primitives like cgroups and AppArmor.
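As an illustrative sketch (a hypothetical snapcraft.yaml, not taken from a real project), a snap declares its confinement level and the interfaces it is allowed to plug; anything not listed stays out of reach of the sandbox:

```yaml
name: sensor-reader          # hypothetical example snap
base: core22                 # built on the Ubuntu 22.04 runtime
version: '1.0'
summary: Reads an attached temperature sensor
description: A minimal example of a strictly confined snap.
confinement: strict          # full sandboxing; no access unless plugged

apps:
  sensor-reader:
    command: bin/sensor-reader
    plugs:
      - network              # may open network connections
      - serial-port          # may talk to the sensor; nothing else is visible

parts:
  sensor-reader:
    plugin: go               # assumes a Go implementation
    source: .
```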

An app-centric operating system

As the intelligence of a device is ultimately a function of the software it runs, Ubuntu Core makes every device effectively app-enabled. The device’s primary function is an app, and developers can ship other apps next to that primary function. Ubuntu Core is an app-centric, instead of distribution or archive-centric, operating system. 

According to that vision, Ubuntu Core decouples apps from the OS, acting as an underlying platform that runs on virtually any hardware and on top of which developers can ship apps. Snapping applications further decouples the hardware from the software, enabling software reusability and composability.


The app-centric nature of Ubuntu Core allows publishers to update applications independently of the OS. Software publishers can decide which updates are signed, certified and delivered to devices. Furthermore, every embedded device running Ubuntu Core has guaranteed platform security and an app store, underpinning the new wave of app-centric software development.

Deploy secure IoT devices at scale with your own app store

Enterprises can set up App Stores representing their specific brand or devices with complete control over their store content, review process and authorisation. Hosted on Canonical’s cloud infrastructure, the App Stores are private application stores tailored to software distribution across fleets of devices. 

Every embedded device running Ubuntu Core has guaranteed platform security and an App Store, underpinning the new wave of connected device business models

These custom enterprise stores enable developers to cherry-pick the optimal combination of applications for their devices, including software published in the global Snap Store and custom software developed internally for a specific use case. Device manufacturers can leverage more than 6,500 snaps freely available in the global Snap Store to accelerate their time to market.

Further reading

Why is Linux the OS of choice for IoT devices? Find out with the official guide to Linux for embedded applications.

Working on a new IoT project, but unsure which OS to pick? Learn about the trade-offs between Yocto and Ubuntu Core.

Did you hear the news? Real-time Ubuntu 22.04 LTS is now available. Check out the latest webinar on real-time Linux to find out more.

Read our whitepaper on IoT lifecycle management for more insights. 

Join the conversation on IoT Discourse to discuss everything related to IoT and tightly connected, embedded devices. 



27 September, 2022 09:31AM

September 26, 2022

The Fridge: Ubuntu Weekly Newsletter Issue 754

Welcome to the Ubuntu Weekly Newsletter, Issue 754 for the week of September 18 – 24, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

26 September, 2022 11:10PM

Ubuntu Blog: Open-source cloud for beginners with OpenStack

In the beginning, there was Amazon Web Services (AWS). And AWS set a standard for cloud computing. AWS was fast, flexible, convenient to use and geo-redundant. Definitely much better than legacy IT infrastructure or VMware. A lot of enterprises all over the world started migrating their business applications to AWS.

Over the next few years, Microsoft and Google joined the party with their Azure and Google Cloud Platform (GCP) solutions, and today 99% of cloud workloads run in the three leading public clouds, right?

Well, not exactly …

High costs, security and compliance concerns, and vendor lock-in have deterred several large and mid-size enterprises from fully going public cloud. Some of them have even ended up repatriating their workloads back to an on-prem infrastructure. But does it really mean that they got back to their mainframes, blade servers and all other kinds of legacy “pets”?

Of course not. They went open source and built their own clouds instead!

The use of an open-source cloud

The modern cloud computing landscape is much wider than AWS, Azure and GCP. Sure, these three giants are leading the way, but you must have heard about Oracle Cloud, Alibaba Cloud or OVHcloud too. In fact, hundreds of smaller public cloud providers exist all over the world, delivering cloud services to local markets. A number of them used open source to build those clouds.

On the other side of the house, there is an enterprise sector with many companies running their own data centres. While Amazon, Microsoft and Google all provide public cloud extension capabilities, enabling businesses to build proprietary private clouds on their premises, this approach leads to the same challenges described above: high costs and vendor dependency. For enterprises willing to avoid such issues, open-source cloud platforms have proved to be a reasonable alternative.


All of the presented solutions meet on a common ground called hybrid multi-cloud. With the vast majority of organisations using more than one cloud platform at the same time these days, the hybrid multi-cloud architecture simply reflects their daily reality. Open-source cloud platforms fit very well in this broader cloud computing spectrum, providing a cost-effective extension to the hyperscaler infrastructure and helping organisations optimise their infrastructure costs.

Open-source cloud with OpenStack

All right! That all sounds reasonable, but where do I start?

You probably know this feeling very well. You want an app for tracking your fitness activities, you search for one in the App Store or on Google Play, and suddenly you realise that there are hundreds of fitness tracking apps out there. And you quickly get lost …

This is not much different in the open-source cloud computing space. Over the years, developers worldwide have created several open-source cloud platforms. Each of them has had its ups and downs. Some of those projects are still alive; some are not. Some are available with optional enterprise support; others aren't. None of that really matters if you just want to learn. But if your open-source cloud is going to power a production environment, you'd better choose the winning one, right?

OpenStack is the world’s leading open-source cloud platform. It is used by hundreds of local public cloud providers, telcos and thousands of enterprises, with over 25 million cores running in production, according to the OpenStack User Survey 2021. OpenStack has undoubtedly dominated the market and become the de facto standard for open-source cloud infrastructure implementations. Its adoption continues to grow, and the OpenStack market is expected to reach $8 billion in 2023.

How does OpenStack work?

OpenStack was originally launched as an open-source implementation of the AWS Elastic Compute Cloud (EC2) service, and it still closely resembles its behaviour. A lot of typical cloud concepts, such as the image catalogue, ephemeral storage or security groups, are present in OpenStack too. As a result, anyone with AWS, Azure or GCP experience can get up to speed with OpenStack relatively quickly.

OpenStack has a modular architecture and consists of several interconnected services. Each service handles some principal cloud functions, such as image catalogue management, instance provisioning or storage snapshotting. This approach makes OpenStack’s code base much more scalable as each module is developed independently by a dedicated team of developers. But this is, again, the nature of open-source software.

OpenStack architecture

OpenStack provides both a web dashboard and a command-line interface (CLI). Moreover, each service exposes application programming interface (API) endpoints, which the services use to communicate with each other. OpenStack APIs can also be used by any third-party software plugged into the OpenStack ecosystem, such as cloud management platforms (CMPs) or proprietary backup solutions.
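To give a flavour of the CLI, here is a sketch of a typical session with the openstack client. The image and flavor names are illustrative, the commands assume a deployed cloud with credentials already sourced, and the dry-run wrapper just prints them rather than calling a live cloud:

```shell
#!/bin/sh
# Dry-run sketch of an OpenStack CLI session. On a real cloud you would
# first source your credentials (e.g. ". openrc") and drop the wrapper.
run() { printf '+ %s\n' "$*"; }

run openstack image list    # browse the image catalogue (Glance)
run openstack flavor list   # list available instance sizes (Nova)
run openstack server create --image jammy --flavor m1.small demo-vm
run openstack server list   # watch the new instance come up
```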

Read the whitepaper – “An introduction to OpenStack”

Getting started with your open-source cloud

Even though OpenStack is a pretty complex ecosystem, there are tools that tame its complexity, enabling straightforward installation and post-deployment operations.

Refer to the official installation instructions on Ubuntu for the most up-to-date guidance on how to get started with OpenStack today. The website covers several use cases, from single-node installations to large-scale cluster deployments. The simplest scenario shouldn’t take longer than 20 minutes!
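For instance, a single-node test deployment with the MicroStack snap looked roughly like this at the time of writing. Command names and channels may differ between MicroStack releases, so defer to the official instructions; the dry-run wrapper below just prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of a single-node MicroStack install; check the official
# docs for the current channel and command names before running for real.
run() { printf '+ %s\n' "$*"; }

run sudo snap install microstack --beta      # install the all-in-one snap
run sudo microstack init --auto --control    # configure a single-node cloud
run microstack launch cirros --name test-vm  # boot a tiny test instance
```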

Another good way to get familiar with OpenStack is to learn it through a series of tutorials. Since getting OpenStack up and running is just the beginning of the journey, these tutorials will walk you through some basic steps, such as how to interact with OpenStack services, how to launch your first instance, etc.

Conclusions

Whether you’re working for a service provider that wants to build its own public cloud or for an enterprise looking for cost optimisation in hybrid multi-cloud environments, OpenStack is the way to go. It’s the best option on the open-source cloud computing market these days. At 12 years old, OpenStack is stable and mature enough to power large-scale production environments in all market sectors worldwide.

Getting OpenStack up and running on Ubuntu takes less than half an hour. Is there any reason why you couldn’t try it during your lunch break today?

Additional resources

Watch the webinar – “Intro to OpenStack: How open-source private clouds are changing the game”

Visit our website – OpenStack on Ubuntu

Get in touch with Canonical

26 September, 2022 07:00AM

September 24, 2022

Full Circle Magazine: Full Circle Weekly News #280


Continuation of GNOME Shell development for mobile devices:
https://blogs.gnome.org/shell-dev/2022/09/09/gnome-shell-on-mobile-an-update/

Performance and Retbleed:
https://lkml.org/lkml/2022/9/9/617

Release of GNU Emacs 28.2:
https://lists.gnu.org/archive/html/emacs-devel/2022-09/msg00730.html

Cross-platform Ladybird web-browser:
https://awesomekling.github.io/Ladybird-a-new-cross-platform-browser-project/

WD is developing an NVMe driver in Rust:
https://twitter.com/josh_triplett/status/1569363148985233414

Fedora Linux 37 has moved to beta testing:
https://fedoramagazine.org/announcing-fedora-37-beta/

SME Server 10.1 is available:
https://forums.koozali.org/index.php/topic,54884.0.html

Ubuntu has implemented the ability to dynamically obtain debugging information:
https://www.mail-archive.com/ubuntu-devel-announce@lists.ubuntu.com/msg01081.html

Release of EndeavourOS 22.9:
https://endeavouros.com/news/artemis-nova-is-here/

Vulnerability in the Enlightenment user environment:
https://www.enlightenment.org/news/2022-09-15-enlightenment-0.25.4

KDE Plasma 5.26 desktop testing for TV use:
https://kde.org/announcements/plasma/5/5.25.90/

Ubuntu 22.10 intends to provide support for the RISC-V Sipeed Lichee RV:
https://bugs.launchpad.net/ubuntu/+bug/1989595

Release of WebKitGTK 2.38.0 and Epiphany 43:
https://webkitgtk.org/2022/09/16/webkitgtk2.38.0-released.html

Floorp web browser 10.5.0:
https://blog.ablaze.one/2425/2022-09-17/



Credits:
Full Circle Magazine
@fullcirclemag
Host: bardmoss@pm.me, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

24 September, 2022 02:54PM

September 23, 2022

hackergotchi for VyOS

VyOS

VyOS Sundays #2: join us on September the 25th, 16:00 UTC

Hello Community!

In our live stream, we'll discuss the following topics:

  • OpenCollective and its role in our project.
  • The distinction between "free as in price" and "free as in freedom" and the importance of being free as in freedom and open-source.
  • The future VyOS GUI — with a sneak peek at screenshots!

We will also determine the next winner in our hardware support survey competition! Join us on Twitch and ask your questions if you want. If you can't attend, that's fine — we'll keep a recording.

See you on Sunday, and have a great weekend!

23 September, 2022 03:26PM by Yuriy Andamasov (yuriy@sentrium.io)

hackergotchi for Univention Corporate Server

Univention Corporate Server

UCS@school for a Modern Realignment of Jena’s School IT

In the independent city of Jena in Thuringia there are 27 schools run by the municipality. Our IT department at the Jena Media Center supports around 12,500 students and 1,700 teachers in elementary schools, community schools, high schools, three vocational schools and a special-needs center. Ten employees now work at the media center, including three school supervisors and individual supervisors responsible for infrastructure, Linux administration, mobile device management (MDM) and software distribution. They take care of the information technology supply and equipment of all schools run by the municipality.

Digital Pact as Starting Point

Although the demand for such support has increased noticeably in recent years, there was not enough money to fund it for a long time. That is why the Digital Pact arrived at exactly the right time for us. It was the long-awaited starting signal for our project, a reorganization of school IT in Jena, and the cooperation with Univention.

In this blog post, I would like to describe in more detail how we proceeded in Jena, what requirements we had, what the corresponding solution looked like, and why we ultimately chose the open source solution UCS@school. In doing so, I would also like to address minor and major challenges that occurred during the course of the project and outline how we met them.

Initial Situation and Concept Development

Before deciding to use UCS@school from Univention, we were struggling with a decentralized IT infrastructure with separate school servers and a uniform IP network structure at all locations without central software management. Individual configurations with additional accounts for each user were necessary for central offerings such as groupware, Radius and the Moodle learning management system (LMS). Temporarily, the two of us were responsible for thousands of devices at the 27 school locations – a truly colossal task that could only be managed through uniform systems with a school server at each location as well as high workloads.

Despite the good equipment in Jena, the time and staff expenditure for separate server administration were too high in the long run. We had been aware for some time that we needed to switch to a different system if we wanted to have a modern school IT system. Until then, this change always failed due to the lack of investment funds, which were then available to us thanks to the Digital Pact. Compared to other subsidies, this not only included investments for the expansion of the IT infrastructure and the WLAN but also state and federal funds to purchase technical support.

To apply for funds from the Digital Pact (for the individual schools and on a global scale for Jena), it was necessary to develop a meaningful concept, which we presented to the city council. Part of the concept, besides refurbishing and re-equipping the school sites, was the establishment of a larger support team that would reliably take over the maintenance of IT at all sites. In addition to the basic concept, we developed sound planning including required capacities and concrete implementation steps, which, together with the financial resources, now enables us to start afresh in Jena’s schools.

Requirements for the New IT Solution

In the run-up to the project, we asked ourselves what the new network structure should look like, which IP addresses we would need for which systems, how students and teachers would use the WLAN (via vouchers or authentication by RADIUS) and how the user names should be structured. This is something one should consider before launching a project, and it would be best to discuss it with the teachers as well. We decided to use IP addresses structured in sections of 10 and 15, a “surname.first name” structure for the user names, and to identify teachers responsible for media at all school sites. The latter work together with us to complete the respective application for the Digital Pact, are available as contact persons for simple IT questions and, ideally, also provide first-level support on site.

It was then essential to decide in favor of continuing to operate school servers and against setting up a central server. The key factor here was to enable a certain level of fail-safety to withstand the traffic of the approximately 2,000 additional mobile end devices (iPads, notebooks) with a 1-gigabit connection to the Internet.

From these preliminary considerations and other specifications, several requirements for the new IT solution emerged:

  • Open source software (OSS) instead of a proprietary offering
  • Uniformity of the system at all locations
  • Low training requirements for teachers and students
  • ID management and LDAP functionality for all devices
  • Stability and possible extensions
  • Covering needs for school server functionality, cloud, groupware, RADIUS, Office 365, Apple ID, etc.

The Decision for Univention and UCS@school


UCS@school Portal Jena

UCS@school met all these requirements and also convinced us with a large number of open source extension options as well as an appropriate price-performance ratio. Since we now had the funds for the project implementation thanks to the Digital Pact, we did not hesitate any longer and eventually embarked on the path towards a modern IT for Jena’s schools with Univention in 2019.

To be more specific, we chose a school server domain for the school board, groupware with Open-Xchange (OX) as well as Nextcloud with a connection to the school network drives for students and teachers, and the use of the ID management of the domain for the LMS Moodle. We also implemented a helpdesk portal for tickets, PC and device logins, LDAP usage for managing printers, clients as well as active components including central software deployment at all school locations. Additionally, we provide students and teachers with a FAQ with answers to frequently asked questions together with user rules and privacy policy. Our Ukrainian students will find links to help them get started in a German school.

General Benefits of UCS@school for Schools and Teachers:

  • Centralized management of accounts, schools, classes, networks and permissions, as well as integration of additional third-party solutions for file sharing, office or email programs, teaching-learning software and other educational applications
  • Secure use of private smartphones and tablets (BYOD)
  • Focus on ID management in the pedagogical environment and an intelligent rights concept for access to digital learning platforms, IT services and digital media

Challenges in the Course of the Project

To some extent, our initial euphoria was thwarted by the many decisions we had to make. After all, a new IT solution means a lot of work – despite the overwhelming advantages. With UCS@school, there is no prefabricated system. Instead, we were able to freely choose from various options and design the IT infrastructure entirely according to our personal gusto. Another challenge concerned the changes the conversion brought with it, especially for teachers and students, who had to get used to new procedures at first.

Before the conversion, teachers could use an interface to create student accounts and students could log in themselves. The new IT solution, however, only provides for importing data via the school administration programs. Instead of a global DHCP server in each school with IP addresses for all the devices used, only defined devices that we purchase and manage ourselves are now permitted. All other devices are not assigned an IP address or are redirected to separate networks. Such conversions presented rather minor challenges that were easily overcome with support from our side. So today we can say: The decision for network separation has proven itself.

Elsewhere, we were able to keep the familiar components for teachers, such as OX for the e-mail system. Overall, the conversion to a network spanning the entire school and a new connection for the school’s IT proceeded without any major problems. Although we experienced some delays due to disrupted supply chains, we had no doubts about our decision to use UCS@school. Nowadays, we can go live with additional services with much less effort, making them immediately accessible to teachers and learners. Nevertheless, we recommend planning sufficient time for such a large project and employing people who are already well versed in Linux.

Conclusion & Outlook

Using UCS@school, the costs for the school board are predictable and reasonable. It is now easier to make new applications centrally available and usable for everyone. Likewise, the uniformity of our chosen solution reduces training costs.

At present, we support 4,000 PCs, 1,500 notebooks and 1,500 iPads as well as servers, switches and APs at school sites. Since more and more end devices are pushing into the infrastructure and performance is suffering as a result, we plan to switch from a 1-gigabit line to a 10-gigabit line in the next few months. In the future, server capacities will also be adjusted and user trainings will be intensified. Furthermore, the Apple School Manager is to be connected and a BigBlueButton integration, as well as authentication with Office 365, are to be implemented.

Taken as a whole, this project has taken the entire school IT in Jena to a new level. The system is now professional, scalable and centrally manageable and monitorable. The range of services offered to schools has been significantly expanded. At the same time, new and improved offerings create greater demand among schools, which in turn benefits the desired digitalization. All in all, it is a process that involves a great deal of work, places high demands on the expertise of the colleagues at the Media Center, and provides the opportunity to equip school IT securely, with high availability, and in a modern way.

The post UCS@school for a Modern Realignment of Jena’s School IT first appeared on Univention.

23 September, 2022 12:59PM by Ann-Kathrin Jekel

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: IoT Project Lifecycle: Efficient prototyping with Snaps and Ubuntu Core [Part I]


There is nothing more exciting than creating something new. Whether developing a new idea, coding a new software feature or creating a new product altogether. After a lot of hard work, we can finally see users interacting with the system and sharing their feedback. The quickest way to reach that point in a project is to start prototyping early.

With an abundance of hardware and software available, it has never been easier to create a working prototype. If a picture is worth a thousand words, then a prototype is worth a thousand meetings. Providing an interactive solution to stakeholders and potential users is the easiest and fastest way to convey and validate your ideas.


Being efficient

As far as IoT prototypes are concerned, creating a technical prototype is both overwhelming and fun. There are so many applications, services, software repositories and containers that can be tried and used directly. It is somehow easy to get distracted by all the options. And we haven’t even talked about hardware. The easiest thing to do is pick a few proven or promising solutions, mash them up together and quickly adapt them to suit your needs. With deadlines on your back, getting something done is the main priority.

But how transferable is that to production? It can be tempting not to think about it. Just make it work and worry about it later. With a mindset like that, the first thing to do after playing around with the prototype would be to throw it away and start from scratch. To make this process more efficient you would have to create something closer to the real solution. A better prototype would be something to iterate and build upon.

A good way to do that is to start with a blank sheet. Pick a sturdy software platform that you know is going to be there for decades to come. You can use existing hardware to validate your solution. Packaging your applications in an OS-native format gives you the peace of mind that your solution will be directly deployable to production hardware.

Snaps and Ubuntu Core give you exactly this. Learning how to package applications as snaps is an investment that is well worth it in the long run.

What do you need to get started?

A laptop or desktop of your choice and some free time! Let’s say you are an IoT developer who has created a simple script for sensor acquisition, or a more complex application with multiple microservices. It works fine on Ubuntu Desktop or in a VM, and you are curious how it would behave on an industrial device or a whole fleet of different devices. A manager would ask: how feasible would it be to support that application across a diverse range of platforms, and how would it scale to thousands of devices? The answer is simple. Packaged as a snap, an application will run on all supported Ubuntu flavours and versions. All snaps are distributed through the Snap Store, which can serve regular updates to millions of devices.

To get started, you can check out the guide on snap creation. Snaps are packaged using the Snapcraft tool. Once installed, Snapcraft will help you package your application as defined in a YAML file. The development experience is great on Ubuntu Desktop, but snaps can be built on both macOS and Windows too. If you want to speed up building and iterating on snaps, it’s best to build using LXD.
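To make that concrete, here is a minimal, hypothetical snapcraft.yaml of the kind Snapcraft consumes; the snap name, command and part are invented for illustration, and the script simply writes the file to the current directory:

```shell
#!/bin/sh
# Write a minimal, hypothetical snapcraft.yaml; all names are illustrative.
cat > snapcraft.yaml <<'EOF'
name: hello-sensor          # hypothetical snap name
base: core22                # build on the Ubuntu 22.04 runtime
version: '0.1'
summary: Toy sensor-reader snap
description: A minimal example snap for illustration.
grade: devel                # devel snaps cannot be released to stable channels
confinement: strict         # fully sandboxed by default

apps:
  hello-sensor:
    command: bin/hello-sensor.sh

parts:
  hello-sensor:
    plugin: dump            # just copy files from "source" into the snap
    source: .
EOF
# Then build it on a machine with Snapcraft installed:
# snapcraft            # or: snapcraft --use-lxd
```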

To learn how to manage snaps locally on your system, you can check out the quickstart guide on snapcraft.io. There are a lot of features to play with and things to learn, but probably the most important concept is snap confinement.

Snap confinement levels and permissions define how isolated an application is from the host operating system and from other applications. Thinking about security early can be very helpful in avoiding challenges further down the project. Knowing in advance what permissions your application needs can be difficult. To get started, it’s best to check that your application is working properly and then apply additional security controls to it. The documentation page on debugging snaps can be really helpful.
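In practice, granting a strictly confined snap extra permissions is a matter of connecting interfaces. The snap name and chosen interfaces below are hypothetical examples, and the dry-run wrapper just prints each command:

```shell
#!/bin/sh
# Dry-run sketch: granting a strictly confined snap extra permissions.
# "my-sensor-app" and the chosen interfaces are hypothetical examples.
run() { printf '+ %s\n' "$*"; }

run snap connections my-sensor-app           # what is (and is not) connected
run snap connect my-sensor-app:network-bind  # allow listening on the network
run snap connect my-sensor-app:serial-port   # serial access (needs a slot, e.g. from a gadget snap)
# While testing, the snappy-debug tool can watch the logs for denials:
run snap install snappy-debug
run snappy-debug
```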

It’s amazing how much can be done after spending some quality time developing. However, an IoT prototype would not be complete without actual hardware running your brand new snaps. 


Photo by Tool., Inc on Unsplash

Try your prototype on real hardware

One of the best operating systems for running IoT applications is Ubuntu Core. Ubuntu Core consists entirely of snaps, which means you get the most out of the snap security and reliability features. A snap, or a collection of snaps, can be easily deployed to an Ubuntu Core installation, not just in a VM but also on a range of supported platforms. The getting-started documentation for Ubuntu Core is a good place to begin. If you want to jump straight in, you can grab a Raspberry Pi, an Intel NUC, or even an old desktop or laptop and install Ubuntu Core on it. If you would like to see the whole process, this video on getting started with Ubuntu Core on Raspberry Pi shows it all.

Once installed, you can log in to your Ubuntu Core device using the SSH key associated with your Ubuntu SSO account. Since the operating system ships with a strict security configuration, password access over SSH is disabled by default. After logging in, you can copy your snaps over SSH, or install a wide range of available applications directly from the Snap Store.
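A first session with a device might look like the sketch below. The user name, device address and snap file name are all hypothetical placeholders, and the dry-run wrapper just prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: first contact with an Ubuntu Core device. The user name,
# device address and snap file name are hypothetical placeholders.
run() { printf '+ %s\n' "$*"; }

run ssh jane@192.168.1.50                                  # key-based login only
run scp my-sensor-app_0.1_arm64.snap jane@192.168.1.50:    # copy a locally built snap
# On the device itself:
run snap install --dangerous my-sensor-app_0.1_arm64.snap  # install an unsigned local snap
run snap install htop                                      # or install from the Snap Store
```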

In the field of IoT, the interaction between devices and the physical world is essential. This could happen through a variety of technologies such as sensors, actuators or interfaces that monitor and control industrial machines. It’s a good idea to test such interfaces if they are available and to make sure that your snap has permission to use them. To see a real-life example of how custom hardware interfaces can be used securely, check out this guide on using the Raspberry Pi’s GPIO to control an external fan.
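On a Raspberry Pi, for example, wiring a snap to a GPIO line is a single interface connection. The snap and slot names below are illustrative (the actual slot names are defined by the Pi gadget snap), and the dry-run wrapper just prints the commands:

```shell
#!/bin/sh
# Dry-run sketch: wiring a snap to a GPIO pin on Ubuntu Core. Snap and slot
# names are illustrative; list real slots with "snap interface gpio".
run() { printf '+ %s\n' "$*"; }

run snap connect my-sensor-app:gpio pi:bcm-gpio-17  # grant access to one GPIO line
run snap connections my-sensor-app                  # verify the connection
```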

Ready for production and scaling up

Prototyping can be hard work. Being able to transfer that to production can greatly speed up your development process. Snaps can be really helpful in that aspect. All effort spent prototyping snaps will be directly applicable to your production system. 

The best way to reach millions of devices with your application is to publish it on the Snap Store. Even a complex system of microservices can run securely on Ubuntu Core. A good example for this is the IoT framework EdgeX. Your applications, packaged as snaps, can be built remotely or as part of a CI/CD pipeline. This can streamline the delivery of new code straight to all devices that need it. 

In situations where you don’t want to expose your application publicly, you can use a dedicated IoT Snap Store. The dedicated snap store gives you private software distribution at a global scale. You can manage the flow of software between your developers, devices, customers and partners, and power a whole ecosystem of applications specifically suited to your needs. Sometimes, software needs to be deployed in challenging or sensitive environments where global distribution might not be the best solution. With the air-gapped mode of the enterprise snap store, you can distribute private applications that never leave your perimeter.

Whatever the use case, snaps make great sense when creating easy to maintain IoT projects. With a strong security posture, wide platform compatibility and an integrated software distribution system, they make delivering IoT projects efficient and scalable. 

Learn more about other phases in the IoT project lifecycle

This blog post is the first in a series on IoT lifecycle management. Stay tuned for the next parts to find out how to move to production hardware and beyond, and make sure to check out the guide on IoT lifecycle management.

Further reading

Ready to get started with your IoT project and curious to find out what challenges might lie ahead? Read the white paper: Top 5 IoT challenges and how to solve them.

Why is Linux the OS of choice for IoT devices? Find out with the official guide to Linux for embedded applications.

Working on a new IoT project, but unsure which OS to pick? Learn about the trade-offs between Yocto and Ubuntu Core.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/sujf6xWpTceQdHnqv4ivb0fRgraR-EexOH5b5hlxMLPeWF-U5SzGTwjNAZn0g5osivP36byelkohRznE1VxG3VKYJONx-8O3UYAxammEJ1xh9KPRoB5zYMumGp62Q-fxKRbjZiTQVIKwbANyrDU2TetPnGC7FIjQ364KqWqLz7_ikhfTEnASc2C4Rg" width="720" /> </noscript>


23 September, 2022 09:00AM

Ubuntu Blog: Design and Web team summary – 29 July 2022

The Web and design team at Canonical runs in two-week iterations building and maintaining all of the Canonical websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

Sites

The Web team develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

We permanently moved the press centre section from the Ubuntu blog to a new press centre platform on canonical.com, as part of ongoing work at Canonical. All blogs across Canonical are now also displayed in the new blog section of canonical.com.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/pIC0PTvjlCeWAVynf9K3GnuegxWqf0rDgrT2ohId-r0grLsTT5Ha_PYvhVFq4suoSSuG2wCOqd3zt49eoLgzu4COhIB5OL0Fv3GQaB1qHUC_dWyC9wWiO6j3T3b5JL3q40q7xAwmo9LnmeVxLTkw2BJNeTroB4Xk9x3NZJ0cGpouy2pbiOcauCQXfQ" width="720" /> </noscript>

Server guide PDF

We have historically provided the server guide as a PDF, automatically built from the live server documentation. This had been broken for a while, and we finally found the time to fix it, which required a complete rewrite.

Marketplace

The Marketplace team works closely with the Store, Snapcraft, Snapd and Desktop teams to develop and maintain the Snap Store and Charmhub.

Store Alignment

We performed an audit and documented the differences and similarities between the Snapcraft and Charmhub stores.
For many of the features, we highlighted the inconsistencies and agreed on a direction to align the stores’ interactions and functionality. We also agreed on the next steps towards unifying the flows and code where relevant.

User research

The User Research team works with design and product management to provide insights into our users’ needs, along with tools for teams to conduct their own research.

Iteration on the UA subscribe view

The UA subscribe portal is where you can select and purchase your UA subscription on the website. We tested the latest version of the design, giving users challenges to complete by subscribing to the optimal plan for each scenario. We found that users generally understood our new designs and could purchase their desired products. We also received actionable feedback, such as revisiting the order of the steps; the UX team will work on presenting users with a clearer distinction between their choices for security coverage and support.

We are hiring!

Come and join the team. If you would like to find out more about the team, please read our blog and the description of the team.

With ♥ from Canonical web team.

23 September, 2022 08:54AM

Ubuntu Blog: Design and Web team summary – 9 September 2022

The Web and design team at Canonical runs in two-week iterations building and maintaining all of the Canonical websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

Sites

The Web team develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

Why OpenStack page

The Why OpenStack page was created and released in this iteration. On this page, you can get an idea of why you might want to use OpenStack, along with case studies from across the industry.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/uYj8zqpe0QioLH6TZcnrsGWiWYTUkHXwe0D6bWbr4WiQmJTd2YS11S_CPXxiFUupddjFSb9c09wIjrQKtrgftr7nK8M31cYczJbLmVagce3EViRJW--OCalIWmWsiSkd0QlqHo1fq7LaXMqDmeyZp1VL6b3pcd66x0d3YmBPCQVLgTKp8ZdnRddRow" width="720" /> </noscript>

Brand

The Brand team develops our design strategy and creates the look and feel for the company across many touchpoints: web, documents, exhibitions, logos and video.

New storyboard for Automotive 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/7E58lP2ibuYGTT3kdT6NreTl6hX2RbXfxwPTFeRO-95vGVyfM4RjqQZAkovu3GYDBkx8cHe5W0tUPsMFhPbxcpJDyX1x-UCVM51GY54zY4IrHUAySgNN3GORxaPohC7_Q0X-PPaSIqNXNAUr501lW9-KbUSEpqixkn9Qa37albPwSFaBLACdGdBN5w" width="720" /> </noscript>
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/Xd5zceUWWGpGmJcn7ZcCAhL6q1-dhi-YEVEg0tcn1pnh6unjlQqv-oyhJ6yqzBhbJawPeRvhMfmgSlpv63mxUSiUPeJ6HpUj1h_qN_UnT8N92lPas-X0kkxye6LqMeiutLA3VFHe2shiXv5XWUBzPk4v6KyvJFhtHzJt5NwWbFYunJt0luzvm28LaA" width="720" /> </noscript>

New campaign for Kubernetes 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/i2lq6c-Lz7KZ-WjN1zrZZL3x9fo4QFk1RdATOVeF7yw2DS9tTK5hHikUyjzDuCYGEbMUUnOWZTC4XlRkmpiWJPfnGOLuY4h98H0s3sHp0qVQnjufQu3b49aZnUZuc3b-ZqEjBqzMvCsC8y_LDrv6yM5OOsi_rO-GoXdo4jzZxYusxssvIAEtV3wDUg" width="720" /> </noscript>
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/Hj9bmCQ10KnvYV-4inWQmdy5uLrkRT8jgaIoAyEpzkPma4kTT4fIdTpz0xck6ENCMbnh7hbN2EzU80Ji5Q_kpW9LH8waQw9SgbG9ZFGP7SLpldnmIwOuID8GJAaMAXPwR0UFKS6AQkGpgP_6u6ObO9sw_pvY-LV4w8YnhxVCdzFr9iMuRk7bgMpxaA" width="720" /> </noscript>

Ubuntu Summit logo 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/MVtA-jadyNqXLDeYMPC-mJjnK2_QPABWv1SsQkIWtFgvpUgboON_DiM_WEm8fD4_RZAa6hyA5Rg-lfr84aXV58NK3BcU30sPB5_O0QjtxS_LtL83Pd9EaRwShr_Rq7ctDzJLAEHg84pfSxvzplVb8IERe7Aum2mYz6PT-gLiq3PudmkNYJO9KbrKjw" width="720" /> </noscript>

Commercial 

The Commercial team maintains and develops all commercial services provided by Canonical, including the Ubuntu Advantage store and our CUE certification.

After introducing online renewals for Ubuntu Advantage subscriptions, we are now working on updating the Ubuntu Advantage dashboard to make it easier for users to find out the status of their subscriptions and how to renew them more easily. On top of information like expiry date or included services, the dashboard will also feature information on the available renewal options as well as the subscription renewal status. We are also working on upgrading our expiry notifications to facilitate the recovery of expired subscriptions.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/b5xuzQ0LDCEcUV22mX4VthkSBCL2TeuVznDDE_taxBDL8dllkP-Ae4ThREtPLZWXD2yreCn03IN9oo9VNNtgEn2R9yRZyXGXAhwfW8gHb0yOrEvcd7guLKbG7-SWGsXbNS4-mNStEEOCJEJrNhlm-2ynQMOWnn9_v7HHcy88BfVHWg6_rD7R5FDV4Q" width="720" /> </noscript>

MAAS

The MAAS team develops the UI for the MAAS project.

NVIDIA DPUs – Child-parent machines

The MAAS core engineering team has been working on support for accelerators, with NVIDIA BlueField-2 DPUs as a starting MVP. We want to expose these new SoCs in MAAS, along with their relationship to their host machine, which was not available by default.

That led us to understand that there was a more generic problem to solve: child-parent relationships between machines were not explicit and clear. 

We defined the entire set of machines MAAS can show and how their relationships are reflected in the DB.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/8Ve8ybELyf048Hih7dXGDjGQMmFYGnvQkNGNFCt1v0R_HND6pnIslh4u39ouuPcRu1Rt2FvbfwstDckkjhAikWW-bYheLB5_A2te5QfPJDmBySarcLEZL7n2onTNa3Bdg85bN_cP44Ttem0W33Kv6o7Ao7oYLoY4G6iG3ugkIrwQR5UI1KgM8nkhCg" width="720" /> </noscript>

With that in mind, in the next iteration we will test the first wireframe and gather feedback from different users to refine our idea.

DHCP and Discovery forms with server-side machine search

We made progress in implementing the server-side machine filtering and integrated it with the DHCP and Discovery forms. Now, only the machines that are required for the particular view will be requested and can be searched more easily via the new list box UI component.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/Cl8NWmdPc4U7EZZfdno31QMEndUsPtVu1UYVgCmZvOhFBbGVTaljrmquqp8K87rhm8ymNtVu4Vq-GDUQrh92DiAaBbxG8bh-uB4dchX97uXcnoiSrVlnMOjGtSIlcFE3bK1FX6K48DHBTRkdPMOk31yf12yS6rdtUse3Xx6iVgYU6VAVvD5r2ixbrA" width="720" /> </noscript>

Cleaning up the node summary page

The node summary page is a core part of the MAAS UI – it shows the essential information a user may need to know about any given machine or controller. Showing this information in a clean and concise way is absolutely critical, and after many iterations, the design of the page was in need of some cleaning up.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/xLMTD_wGDQF2nsYSEOwmMODrGRi5hArhT2tl8fqUewUu8cUzx2ifvK4jN8EMibm_q9ZsaR2DmuTTglcaGk-OqQrhcTmQhAhZa7WO4vV5GhmU1ZvT2oHhrcozjEMgrbRmadlCNNlK2JmvksIHoNNUyVyQAYWIFW5WTpSiU5TfpbYxUhXZFgnx2y3Cwg" width="720" /> </noscript>

We decided to remove “Domain” from the bottom row, since it is already shown in the page header next to the node’s name. There is also a lot of unused space in this row, which could instead let the tags assigned to a node be displayed in full rather than truncated. The alignment of some components was off as well, and a lot of unnecessary vertical whitespace could be removed to make the whole page cleaner and more concise.

The bottom half of the page also had some issues. With all the other text on the page left-aligned, there was no reason to centre the labels for hardware information and NUMA nodes. As with the overview card above, there was a lot of unneeded vertical whitespace, and the node’s boot mode also needed to be displayed.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/wO2jHqlxTg2DahOCRFU1tPjJgtOFn8KnTrcbnYNLHOD9dfh1BmqINvLwK45J7RT6ps36lN75lFnQxXKwsE2QqagVeIIHSXZs873-g9tdzAc0iT5BjacA5rIHf2jVFn2CJ2b3MIGNJvZAiqhCXmpdWtQqIU5ob8uI9k3kTjsSKNGNpCrRVRYVI8LPvg" width="720" /> </noscript>

Developers and visual designers went back and forth to achieve the best possible look for the page; you can see the final product below.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/lqKg6pAPkKCEApJMwSRXRvsasl9WZR3aUm2OftNk44jMc0doeIkGrbDydQCAKRrVZc8A6KZ9t1QT7SL5lU7vQXhpVtdlorhus4IQehof-4CuhgeagA3G0G6dyAjgSl_EwCFI74ai35rfRVDD9yiClgZQDRTkWpp57jRRpKdZiWvlGkUmmKGmq7Lenw" width="720" /> </noscript>

Design system

The Vanilla team designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

New website homepage and main pages

In this iteration, we completed the work of designing and writing content for the main sections of the new design system website. These pages include resource pages with downloadable files for designers and developers; a page explaining how the public can contribute to our design system; a getting started guide; and the new homepage. 

Accordion side navigation

We also continued work on expandable accordion side navigation, improving the code, polishing some edge cases and preparing it for release with the next version of Vanilla.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/6kR4RikEtlnC9WodAGLT4nDBoBTfNk-mLnN_eURTB2IT4HMHnd0Z4d_310-YRz7H7TNVOBJa4k3u6roJ3dVc8NNYYueH9nMBGkAXZxPzv1VpUBguRnOct3y2pfSXBOTFJSrSyu5MpEYKdzVkSu-LifUvTd3JHhSiAzhPIUoaBVNmLLnjdd3XUOkxcg" width="720" /> </noscript>

Marketplace

The Marketplace team works closely with the Store, Snapcraft, Snapd and Desktop teams to develop and maintain the Snap Store and Charmhub.

Snapcraft

Collaborator

As part of the Snapcraft dashboard migration to the core snapcraft website, we have completed the UX design for the collaborator role.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/MhFqX9vl2xPXAvt0i-kmfuRV5YcvMOmFvSgJXvze-7wDV49Ly4jj_wPFTzsoKJ4BySHdlJMpVYvAxLMxQp6N6Kch9JR1VzWtdCk73lpnCSaRV8-zyttIRXLC8B25gNNeRpymUqZ52UoxHrJPfK9qNrvWD1tC6vopMUjg8qb21ncJSiyttsIJYRXUcg" width="720" /> </noscript>
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/qnINdgwwuZox-VEuW6ryrR5tz275tUthC0MaeA39WQ5bCX_lq6Tn-k_GOpm90bRzoZMqC-rxXQh_Ae1_ck1AD5A047U9Y5t-EaL5TLAn5emtW1h4U4AlZewY73pTwujrkKpaiKguM19qkeGDdGTGZYs0plFuTj3LbNwPfVFVZHVAIb_6GSitjCIB6Q" width="720" /> </noscript>

The visual design work is in progress.


We are hiring!

Come and join the team. If you would like to find out more about the team, please read our blog and the description of the team.

With ♥ from Canonical web team.

23 September, 2022 08:45AM

September 22, 2022

Ubuntu Blog: Public cloud for telco – Part 2: Google Cloud Platform

This is the second blog in a series focusing on how telecom operators can leverage public clouds to meet their business demands. In a previous blog, we talked about Amazon Web Services (AWS) and how its services made it possible for telcos to shift towards public clouds. In this blog, you’ll get to know about Google Cloud Platform (GCP) and its role in enabling the telecommunications industry to leverage the cloud’s capabilities. 

Telcos are evolving every day to meet the needs of the era, especially with the arrival of 5G. Communication Service Providers (CSPs) rely on traditional network infrastructures and face challenges in both growth and reliability. The question is: how can telcos effectively transform to meet scalability and performance demands?

The answer lies in the adoption of digitisation and cloud-native trends. GCP provides an on-demand platform that can scale as requirements grow. It facilitates high service availability to withstand disruptions, and it ensures improved performance with enhanced platform awareness capabilities.

GCP for Telcos

Google Cloud Platform (GCP) is enabling telecom operators and Network Equipment Providers (NEPs) to capitalise on 5G and network-centric businesses. The promise of 5G, with faster internet speeds and lower latencies, has raised user expectations. Therefore, telcos are adopting public clouds to run their applications and services closer to end customers.

In the last few years, GCP has engaged with the telecom industry to help accelerate real time data-driven analytics using Artificial Intelligence (AI) and Machine Learning (ML). GCP also offers a variety of services to telcos with a pay-as-you-go billing model. These services include managed containerised microservices, network load balancing, scalability and fault tolerance across multiple zones and regions.  The following services support multi-cloud and edge deployments in particular:

  • Google’s Anthos manages containerised workloads. Anthos not only supports multi-cloud deployments but also facilitates the migration of existing workloads to microservices on top of Kubernetes.
  • Google also released Global Mobile Edge Cloud (GMEC), a centralised platform to provide 5G solutions. It was built as a joint venture between telecommunication partners to ease 5G adoption, cloud trends and support edge deployments. GMEC delivers more compute power at edge sites to provide reliability for latency-sensitive applications.

Telecommunication companies leveraging GCP’s infrastructure, platform and solutions for their enterprise-grade workloads include Telenor, AT&T and Jio. The figure below shows the microservice reference architecture of 5G components deployed on GCP with ROCKS Ubuntu images.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/12b1/Asset-12.jpg" width="720" /> </noscript>
GCP reference architecture for 5G with Rocks Ubuntu Images

Telcos can also use GCP’s next-generation platform capabilities in terms of network, storage, and compute.

  • Google Virtual Private Cloud (VPC): ensures network connectivity between cloud resources created on top of Compute Engine virtual machine (VM) instances and Google Kubernetes Engine (GKE) clusters. Telcos use this service for secure and reliable communication over private networks, both within and between tenants, gaining flexibility and scalability. Andromeda, Google’s Software Defined Networking (SDN) stack, reduces intra-zone network latency between compute VMs by 40%; telcos can leverage it for latency-sensitive applications.
  • Google Compute Engine (CE): Virtual machine (VM) instances come in different types, including compute-optimised, network-optimised and memory-optimised; the type should be selected based on the nature of the workload. For example, 5G core services like the Access and Mobility Management Function (AMF) and Session Management Function (SMF) can use general-purpose VM instances, as they are not latency- or throughput-sensitive. Access network components, including the radio unit (RU), distributed unit (DU), centralised unit (CU) and user plane function (UPF), could use compute- or network-optimised VM instances, as they are latency- and throughput-sensitive.
  • Hybrid Connectivity offers security for hybrid environments. Telcos are using this service to connect to any region around the globe with lower latencies and improved performance. One of the major reasons for its adoption is a guaranteed uptime of 99.99%. It acts like a dedicated interconnect or cloud virtual private network (VPN) ensuring better security for critical workloads and operations.
  • Google Virtual NIC (GVNIC) is a specialised interface attached to Compute Engine VM instances as an alternative to VirtIO-based ethernet drivers. Telcos can leverage this interface for higher throughputs and lower latencies.
  • Anthos – GKE  is a managed platform for application deployments both in VMs and containers. It lets you not only build and manage applications but also ensures operational consistency across them. Telcos are using Anthos for managing GKE clusters across different environments.
  • Network Connectivity Centre (NCC) enables enterprise networks to interconnect across multiple clouds. Telcos can use it to manage and run applications across multiple cloud platforms.
  • Cloud Run is a serverless GCP offering that enables telcos to build applications across edge sites.  Telcos use Cloud Run for implementing edge logic across different locations in a region. 
  • Cloud Load Balancing (CLB) manages and distributes the incoming load across multiple instances of an enterprise workload in the same or different availability zones. CLB helps secure telco workloads, as they are not directly exposed to the internet, and it also enables scalability.

The following figure represents GCP services used by telcos for their enterprise-grade workloads. 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/f8c0/Asset-11.jpg" width="720" /> </noscript>
GCP services for telco workloads

Running telco workloads on Ubuntu Pro for GCP

Google Cloud and Canonical have developed multiple solutions ranging from VMs to K8s clusters and AI. The two companies have jointly created cloud server images for enterprises to accelerate their cloud adoption.

Ubuntu Pro for GCP is a specialised, premium server image developed by Canonical for production workloads. Telcos leverage GCP and Ubuntu Pro together with pay-as-you-go billing to minimise their operational expenses. Ubuntu Pro images are optimised for critical telco operations, and pricing is proportional to the utilisation of the underlying GCP compute resources.

Ubuntu Pro server images are secure, cost-effective and performance-optimised. They come with additional security, live patching and compliance with the industry standards required for enterprise-grade and mission-critical workloads. Gojek is one of the leading companies running its workloads on GCP with Ubuntu Pro as the underlying operating system (OS).

Ubuntu Pro images come with added support for enhanced platform awareness (EPA) features including DPDK, SR-IOV, NUMA and HugePages. Canonical also offers base images for containers, which are also compliant with the Open Container Initiative (OCI). Telcos running sensitive workloads on containers leverage GKE and Rocks Ubuntu container images.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/e14d/Asset-1.jpg" width="720" /> </noscript>
Ubuntu Pro-based VM instances

Ubuntu Pro is ideal for telcos to run critical workloads on due to its integration with Google Cloud and the following features: 

  • Reliable maintenance for a decade: Canonical provides long-term support (LTS) for ten years to Ubuntu Pro customers, with regular security updates and reliable upgrades.
  • Open-source security:  security patches for hundreds of applications from the open source community, not limited to Apache Kafka, MongoDB, RabbitMQ, Redis, and NodeJS.
  • Multi-version offerings: Canonical offers multiple versions of Ubuntu Pro on GCP  including 16.04 LTS, 18.04 LTS and 20.04 LTS.
  • Optimised cloud based billing: GCP offers a variety of compute instance types and pricing is purely dependent on the compute resource usage.
  • FIPS and CC EAL2 certification: Ubuntu Pro comes with support for FIPS 140-2 and Common Criteria EAL2-certified components that meet requirements for the Federal Risk and Authorization Management Program (FedRAMP), the Health Insurance Portability and Accountability Act (HIPAA), the International Organisation for Standardisation (ISO), and Payment Card Industry (PCI) compliance.
  • Portability:  Canonical ensures cloud server images are portable and their mirrors are available to different regions, lowering latency for end customers.
  • Live Kernel Patching:  GCP live kernel patching is enabled to avoid reboots during routine operations. 
  • Industry benchmark standards: Optional support for profiles including CIS and DISA STIG to meet industry benchmark standards.

Summing up

The path to digitisation in telecom has always been challenging. But public clouds are providing much-needed flexibility and agility. Telcos need a trusted platform to build on in order to ensure compliance and security as complexity increases.

While GCP takes care of managing the underlying infrastructure, ensuring security and scalability for critical telco workloads as the network grows,  Canonical provides secure, compliant and confidential server images to run workloads and an extensive offering to bolster telcos’ security and compliance. 

Canonical offers images for both VMs and containers, providing flexibility for telcos evaluating environments to run their applications. Ubuntu server images have paved an ideal path for the adoption of public clouds.

Looking to increase agility and resilience to focus on your core business? Contact us to learn more about Canonical in telco today. 



22 September, 2022 02:12PM


Purism PureOS

How We Fixed Reboot Loops on the Librem Mini

Firmware debugging is uniquely challenging, because most conventional software debugging tools aren’t available.  With coreboot’s specialized tooling, support from the amazing community, and a little bit of creativity, we fixed a regression in coreboot 4.17 that caused reboot loops on the Librem Mini. When coreboot makes a new release, I rebase our Librem-specific patches onto […]

The post How We Fixed Reboot Loops on the Librem Mini appeared first on Purism.

22 September, 2022 02:11PM by Jonathon Hall


Ubuntu developers

Ubuntu Blog: ROS orchestration with snaps

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/0f75/conductor.jpg" width="720" /> </noscript>

Application orchestration is the process of integrating applications together to automate and synchronise processes. In robotics, this is essential, especially on complex systems that involve a lot of different processes working together. Yet ROS applications are usually launched all at once from one top-level launch file.

With orchestration, smaller launch files could be launched and synchronised to start one after the other to make sure everything is in the right state. Orchestration can also hold processes and insert some process logic. This is what ROS orchestration should be about.

This way, for instance, you could make your localisation node start only once your map_server made the map available.

Snaps offer orchestration features that might come in handy for your ROS orchestration.

In this post, we will demonstrate how to start a snap automatically at boot and how to monitor it. Then, through some examples, we will explore the different orchestration features that snaps offer. We thus assume that you are familiar with snaps for ROS; if you aren’t, or need a refresher, head over to the documentation page.

Let’s get started

Let us first build and install the snap we will use in this step-by-step tutorial

git clone https://github.com/ubuntu-robotics/ros-snaps-examples.git -b ros_orchestration_with_snaps_blog
cd ros-snaps-examples/orchestration_humble_core22
SNAPCRAFT_ENABLE_EXPERIMENTAL_EXTENSIONS=1 snapcraft
sudo snap install talker-listener_0.1_amd64.snap --dangerous 

Note that all the steps described hereafter are already implemented in this git repository. However, they are commented out so that you can easily follow along.

Start a ROS application automatically at boot

Once you have tested and snapped your robotics software, you can start it from the shell. For an autonomous robot, starting your applications automatically at boot is preferable to starting them manually every single time. It obviously saves time and, most importantly, makes your robot truly autonomous.

Snaps offer a simple way to turn your snap command into services and daemons, so that they will either start automatically at boot time and end when the machine is shut down, or start and stop on demand through socket activation.

Here, we will work with a simple ROS 2 Humble talker-listener that is already snapped (strictly confined). If you want to know how the talker-listener was snapped, you can visit the How to build a snap using ROS 2 Humble blog post.

Turn your snap command into a daemon

Once you have snapped your application, you can not only expose commands, but also create daemons:

  • Daemons are commands that can be started automatically at boot, which is a must-have for your robot software.

For now, our snap is exposing two commands – talker and listener. They respectively start the node publishing messages and the node subscribing and listening to them.

You can test the snap by launching each of the following commands in their own terminal:

$ talker-listener.talker
$ talker-listener.listener

In order to start them both automatically in the background, we must turn them into daemons. Snap daemons can be of different types, but the most common one is “simple”. It will simply run as long as the service is enabled.

To turn our application into daemons, we only have to add ‘daemon: simple’ to both our snap applications:

apps:
  listener:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp listener
    + daemon: simple
    plugs: [network, network-bind]
    extensions: [ros2-humble]

  talker:
    command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
    + daemon: simple
    plugs: [network, network-bind]
    extensions: [ros2-humble]

All that’s left to do is to rebuild and reinstall the snap. Upon installation, both daemons will be started automatically, in no particular order.

Check your daemons

Now our talker and listener are running in the background. Snaps offer a way to monitor and interact with your snap daemons.

Snap daemons are actually plain SystemD daemons so if you are familiar with SystemD commands and tools (systemctl, journalctl, etc.) you can use them for snap daemons too.
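For instance, each snap service maps to a systemd unit named `snap.<snap>.<app>.service`, so the standard tools work as usual:

```shell
# Inspect the talker daemon with plain systemd tooling
systemctl status snap.talker-listener.talker.service

# Follow its logs through journald
journalctl -u snap.talker-listener.talker.service -f
```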

For this post, we are going to focus on snap commands to interact with our daemons and monitor them.

Check our service status

The very first thing is to verify the status of our daemons, making sure they are running. The snap info command gives us a summary of their status:

$ snap info talker-listener

name: talker-listener
summary: ROS 2 Talker/Listener Example
publisher: –
license: unset
description: | 
 This example launches a ROS 2 talker and listener.
Services:
 talker-listener.listener: simple, enabled, active
 talker-listener.talker: simple, enabled, active
refresh-date: today at 18:00 CEST 
installed: 0.1 (x35) 69MB -

Here we see our two services listed. They are both simple, enabled and active.

“Simple” is the daemon type we specified. “Enabled” means that the service is meant to start automatically (at boot, upon snap installation, etc.). “Active” means that the service is currently running.

So here, both our talker and listener services are up and running. Let’s browse the logs.

Browsing the logs

The snap command also offers a way to browse our service logs.

Since our services are already running in the background, we can type:

$ sudo snap logs talker-listener

2022-08-23T11:13:08+02:00 talker-listener.talker[2833606]: [INFO] [1661245988.120676423] [talker]: Publishing: 'Hello World: 123'
[...]
2022-08-23T11:13:12+02:00 talker-listener.listener[2833607]: [INFO] [1661245992.121411564] [listener]: I heard: [Hello World: 123]

This command will fetch the logs of our services and display the last 10 lines by default. In case you want the command to continuously run and print new logs as they come in, you can use the “-f” option:

sudo snap logs talker-listener -f

Note that so far we have been fetching the logs of our whole snap (both services). We can also get the logs of a specific service. To continuously fetch the listener logs, type:

sudo snap logs talker-listener.listener -f

Interact with snap daemons

The snap command also offers ways to control services. As we saw, our services are currently enabled and active.

“Enabled” means our service will start automatically at boot. We can change this by “disabling” it, so that it won’t start automatically any more:

sudo snap disable talker-listener.talker

Note that disabling the service won’t stop the current running process.

We can also stop the current process altogether with:

sudo snap stop talker-listener.talker

Conversely, we can enable/start a service with:

sudo snap enable talker-listener.talker
sudo snap start talker-listener.talker

Make sure to re-enable everything before following along with the rest of this post:

sudo snap enable talker-listener

ROS orchestration

So far, our talker and listener start up without any specific orchestration; in other words, in no particular order. Fortunately, snaps offer several ways to orchestrate services.

To spice up our experience, let’s add some scripts to our snap to showcase the orchestration features:

parts:
  [...]
  + # copy local scripts to the snap usr/bin
  + local-files:
  +   plugin: dump
  +   source: snap/local/
  +   organize:
  +     '*.sh': usr/bin/

This is a collection of bash scripts that have been conveniently prepared to demonstrate orchestration hereafter.

We will also add another app:

apps:
  [...]
  + listener-waiter:
  +   command: usr/bin/listener-waiter.sh
  +   daemon: oneshot
  +   plugs: [network, network-bind]
  +   extensions: [ros2-humble]
  +   # Necessary for the python3 ROS app
  +   environment:
  +     LD_LIBRARY_PATH: $LD_LIBRARY_PATH:$SNAP/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/blas:$SNAP/usr/lib/$SNAPCRAFT_ARCH_TRIPLET/lapack

This app simply waits for the node /listener to be present. It is declared as a “oneshot”, another type of daemon that is meant to run only once at start and exit after completion.
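The post doesn’t show the contents of listener-waiter.sh, but its logic can be sketched roughly as follows. The function name, polling interval and messages here are assumptions, not taken from the actual snap:

```shell
#!/bin/sh
# Hypothetical sketch of listener-waiter.sh (names are assumptions).
# wait_for_node polls a node-listing command until the requested node appears.
wait_for_node() {
  node="$1"
  shift
  # "$@" is the command that lists nodes; retry every second until it
  # prints a line exactly matching the node we are waiting for.
  until "$@" 2>/dev/null | grep -qx "$node"; do
    sleep 1
  done
}

# In the real script this would poll ROS 2 directly, e.g.:
#   echo "Making sure the listener is started"
#   wait_for_node /listener ros2 node list
```

Separating the polling loop from the node-listing command also makes the logic easy to test with a stub in place of ros2.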

After and before for ROS orchestration

The very first thing we can do is to change the start order. The after/before keywords are valid only for daemons and allow us to specify if a specific daemon should be started after or before one (or several) other service(s). Note that for oneshot daemons, the before/after keywords are waiting for the completion of the oneshot service.

The scenario here goes as follows: start the listener, make sure it’s properly started, then and only then, start the talker. To make sure our listener is properly started, we will use the listener-waiter app we introduced in the previous section. Remember, it waits for a node to be listed.

Here, to keep things simple, we define the orchestration only at the listener-waiter application level. We want it to start after the listener and before the talker, so that the talker starts only once the listener is ready.

To do so, let’s use the before/after keywords:

listener-waiter:
  command: usr/bin/listener-waiter.sh
  daemon: oneshot
  + after: [listener]
  + before: [talker]
  plugs: [network, network-bind]
  extensions: [ros2-humble]

This is rather explicit: the app must be started after the listener but before the talker. After rebuilding the snap, we can reinstall it and look at the logs again. Here is a shortened version of the output:

systemd[1]: Started Service for snap application talker-listener.listener.
systemd[1]: Starting Service for snap application talker-listener.listener-waiter...
talker-listener.listener-waiter[76329]: Making sure the listener is started
systemd[1]: snap.talker-listener.listener-waiter.service: Succeeded.
systemd[1]: Finished Service for snap application talker-listener.listener-waiter.
systemd[1]: Started Service for snap application talker-listener.talker.
[talker]: Publishing: 'Hello World: 1'
talker-listener.listener[76439]: [INFO] [1661266809.685248681] [listener]: I heard: [Hello World: 1]

We can see in this log that everything went as expected. The talker has been started only once the listener was available.

In this example, we specified the before/after field within the listener-waiter for the sake of simplicity. However, any daemon can specify a before/after as long as the specified applications are from the same snap, allowing for pretty complex orchestration.
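To illustrate what such an ordering could look like elsewhere in the same snap, here is a sketch. The recorder app below is hypothetical and not part of the talker-listener snap:

```yaml
apps:
  recorder:                     # hypothetical app, for illustration only
    command: usr/bin/recorder.sh
    daemon: simple
    after: [talker, listener]   # start only once both peers have been started
    plugs: [network, network-bind]
    extensions: [ros2-humble]
```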

Stop-command for ROS orchestration 

Another interesting feature for snap orchestration is the stop-command. It allows one to specify a script, or a command, to be called right before the stop signal is sent to a program when running snap stop. With this, we could make sure, for instance, that everything is synchronised or saved before exiting. Let’s look at a quick example: running an echo of a string.

A script called stop-command.sh has already been added to the snap usr/bin.

All we need to do here is to specify the path to the said script as a stop-command.

talker:
  command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
  plugs: [network, network-bind]
  daemon: simple
  + stop-command: usr/bin/stop-command.sh
  extensions: [ros2-humble]

After rebuilding and reinstalling the snap, we can trigger a stop manually with the snap stop command:

sudo snap stop talker-listener # stop all the services of this snap
sudo snap logs talker-listener -f # visualize the logs

We should see an output similar to:

systemd[1]: Stopping Service for snap application talker-listener.listener... 
systemd[1]: snap.talker-listener.listener.service: Succeeded.
systemd[1]: Stopped Service for snap application talker-listener.listener.
systemd[1]: Stopping Service for snap application talker-listener.talker... 
talker-listener.talker[86863]: About to stop the service
systemd[1]: snap.talker-listener.talker.service: Succeeded.
2022-08-23T17:23:57+02:00 systemd[1]: Stopped Service for snap application talker-listener.talker.

From the logs, we can see that before exiting the service talker, the stop-command script was executed and printed a message: “About to stop the service”. Then, only after the stop-command script finished, was the talker terminated.

Post-stop-command for ROS orchestration

Similarly to the stop-command entry, the post-stop-command also calls a command, but this time only after the service has stopped.

The use case could be to run some data clean-up or to notify a server that your system just stopped. Again, let us try this feature with a conveniently pre-baked script that logs a message:

talker:
  command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
  plugs: [network, network-bind]
  daemon: simple
  stop-command: usr/bin/stop-command.sh
  + post-stop-command: usr/bin/post-stop-command.sh
  extensions: [ros2-humble]

Rebuild, re-install, and without much surprise, we get the following output:

systemd[1]: Stopping Service for snap application talker-listener.talker...
talker-listener.talker[90660]: About to stop the service
talker-listener.talker[90548]: [INFO] [1661269138.094854527] [rclcpp]: signal_handler(signum=15)
talker-listener.talker[90710]: Goodbye from the post-stop-command!
systemd[1]: snap.talker-listener.talker.service: Succeeded.
systemd[1]: Stopped Service for snap application talker-listener.talker.

From the logs we can see that our talker application executed the stop-command script, then received the termination signal, and only afterwards did our post-stop-command script log the message: “Goodbye from the post-stop-command!”.

Command-chain

So far, we have seen how to call additional commands around the moment our service stops. The command-chain keyword allows us to list commands to be executed before our main command. The characteristic use case is setting up your environment. The ros2-humble extension that we are using in our snap example actually uses this mechanism; thanks to it, we don’t have to worry about sourcing the ROS environment in the snap. If you are curious, here is the said command-chain script. The best part is that the command-chain entry is not only available for daemons, but also for regular commands.

The scripts listed in the command-chain are not called one by one automatically. Instead, each is called with the remaining commands as its arguments, resulting in a final command similar to:

./command-chain-script1.sh command-chain-script2.sh main-script.sh

So you must make sure that your command-chain-scripts are calling passed arguments as executables. For example, here, command-chain-script1.sh is responsible for calling command-chain-script2.sh.
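This calling convention can be reproduced with a small stand-alone demonstration; the script names here are illustrative, not taken from the snap:

```shell
#!/bin/sh
# Build two toy command-chain links in a temporary directory; each one prints
# a message and then exec's its arguments as the next command in the chain.
dir=$(mktemp -d)

cat > "$dir/link1.sh" <<'EOF'
#!/bin/sh
echo "link1 setup"
exec "$@"
EOF

cat > "$dir/link2.sh" <<'EOF'
#!/bin/sh
echo "link2 setup"
exec "$@"
EOF

chmod +x "$dir/link1.sh" "$dir/link2.sh"

# Equivalent to command-chain: [link1.sh, link2.sh] with command: echo "main command"
"$dir/link1.sh" "$dir/link2.sh" echo "main command"
# prints:
#   link1 setup
#   link2 setup
#   main command
```

Each link runs its own setup and then replaces itself with the rest of the chain, so the main command ends up as the final process, exactly as snapd expects.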

Let’s see what our command-chain-talker.sh script looks like:

#!/usr/bin/sh
echo "Hello from the talker command-chain!"

# Necessary to start the main command
exec "$@"

The only thing to pay attention to is the exec "$@", which simply calls the next command. If we don’t specify this, our main snap command won’t be called.

Let’s add yet another script to our talker command-chain:

talker:
  + command-chain: [usr/bin/command-chain-talker.sh]
  command: opt/ros/humble/bin/ros2 run demo_nodes_cpp talker
  plugs: [network, network-bind]
  daemon: simple
  stop-command: usr/bin/stop-command.sh
  post-stop-command: usr/bin/post-stop-command.sh
  extensions: [ros2-humble]

After building and installing, we can see that now the logs are:

systemd[1]: Starting Service for snap application talker-listener.listener-waiter...
talker-listener.listener-waiter[96438]: Making sure the listener is started
systemd[1]: snap.talker-listener.listener-waiter.service: Succeeded.
systemd[1]: Finished Service for snap application talker-listener.listener-waiter.
systemd[1]: Started Service for snap application talker-listener.talker.
talker-listener.talker[96538]: Hello from the talker command-chain!
talker-listener.talker[96594]: [INFO] [1661271361.139378609] [talker]: Publishing: 'Hello World: 1'

We can see from the logs that once our listener was available, the talker part was started. The command-chain-talker.sh script was called and printed the message “Hello from the talker command-chain!”, and only after that did our talker start publishing.

Conclusion

I hope that reading this article helps you understand snap daemon features a bit better and inspires you to use them for ROS orchestration. For now, orchestration can only be done within the same snap, since strictly confined snaps are not allowed to launch applications outside their sandbox. Of course, you can also combine the snap orchestration features with other orchestration software. Most notably, the ROS 2 node lifecycle allows you to control the state of your nodes, so that you can orchestrate your nodes’ initialisation, for instance.
If you have any feedback, questions or ideas regarding ROS snap orchestration with snaps, please join our forum and let us know what you think. Furthermore, have a look at the snap documentation if you want to learn more about snaps for robotics applications.

22 September, 2022 12:28PM


GreenboneOS

Docker Container for Greenbone Community Edition

Greenbone is stepping up its commitment to open source and the community edition of its vulnerability management software. In addition to the open source code on GitHub, Greenbone now also provides pre-configured and tested Docker containers.

Official containers from the manufacturer itself

The Greenbone Community Containers are regularly built automatically and are also available for ARM and Raspberry Pi.

Björn Ricks, Senior Software Developer at Greenbone, sees this as a “big improvement for admins who just want to give Greenbone a try. Our official containers replace the many different Docker images that exist on the web with an official, always up-to-date, always-maintained version of Greenbone.”

Official Docker Container for Greenbone Community Edition

Hi Björn, what is your role at Greenbone?

Björn Ricks: One of my current tasks is to provide community container builds at Greenbone. Taking care of the community has always been a big concern of mine and for a long time I wanted to make sure that we also provide “official” Docker images of Greenbone. I’m very pleased that this has now worked out.

What is the benefit of the images for the community?

Björn Ricks: We make it much easier for administrators and users who want to test Greenbone. The installation now works completely independent of the operating system used: just download and run the Docker compose file that describes the services, open the browser and scan the local network. I think that’s a much lower barrier to entry, ideal even for anyone who doesn’t yet know the details and capabilities of our products.

Why does Greenbone now provide containers itself? There were already some on the net, weren’t there?

Björn Ricks: Yes, that’s right, but we found out that some people were unsure about the content, legitimacy and maintenance of these images. That’s why we decided to offer Docker images signed by us with verified and secured content.
All the container images out there are at different version levels and, even more so, of varying quality. It is often impossible to tell from the outside whether an image is “any good” or not. Of course, you also have to trust the external authors and maintainers to know what they are doing and that their images do not introduce additional security vulnerabilities. Only we, as producers of our own software, can guarantee that the published container images are at the current version level and of the desired quality.

Does Greenbone also plan to provide Docker images for its commercial product line, Greenbone Enterprise Appliances?

Björn Ricks: That depends on requests from our commercial customers. The Greenbone Community Edition includes access to the community feed with around 100,000 vulnerability tests. Our commercial feed contains even more tests, including those for many proprietary products that our customers use.

We have found that our customers are happy with our appliances, our virtual appliances, and our cloud solution – all of which qualify for use of the commercial feed subscription. However, this could change, and if it does, we will consider offering Docker containers to commercial customers.

How often are the images updated and what feed is included?

Björn Ricks: The images are built and published directly from the source code repositories. So they are always up to date and contain all patches. At the moment only the community feed is available for the images, but this might change in the future.

Where can I get the images and the documentation?

Björn Ricks: The Docker compose file for orchestrating the services is linked in the documentation. The Dockerfiles for building the Docker images can also be found on GitHub in the corresponding repositories and are quite easy to download, for example: here.


22 September, 2022 09:11AM by Markus Feilner


Ubuntu developers

Podcast Ubuntu Portugal: E206 Bruno Miguel

Still on holiday, we sat down with Bruno Miguel for the second time… As promised, here is the rerun of our conversation with the fantastic Bruno. We talked about many topics, but we highlight his involvement in the Fosshost project, as well as the good practices of an experienced user of the Logseq software. You know the drill: listen, subscribe and share!

Support

You can support the podcast using our Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think it is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

22 September, 2022 12:00AM

September 21, 2022

Ubuntu Blog: Systemd support lands in WSL – unleash the full power of Ubuntu today

Systemd support has arrived in WSL! This long-awaited upgrade to WSL unlocks a huge number of quality of life features for managing processes and services. This includes snapd support, which enables users to take advantage of all of the tools and apps available on snapcraft.io.

Systemd support is particularly useful for web developers who want to set up and develop service applications inside WSL before deploying them to the cloud. In this post we take you through some best practices on getting started with systemd with this in mind.

For more information on systemd support, including demos of the projects in this post, check out the video above or visit aka.ms/wslsystemd.

How to enable systemd in Ubuntu WSL

Make sure you are running the Microsoft Store version of WSL (version 0.67.6 or higher) to get access to systemd. This is currently available on the latest Windows 11 Insider builds ahead of general release later this year.

Inside your Ubuntu instance, add the following modification to /etc/wsl.conf.

[boot]
systemd=true

Then restart your instance by running wsl --shutdown in PowerShell and relaunching Ubuntu.

Note: If you are running Ubuntu Preview, this option will be enabled by default within the next few days.

With everything set up you can now start exploring all of the new functionality enabled by systemd!

Use snap to create a Nextcloud instance in minutes on WSL

Nextcloud is a suite of client-server software for creating and using file hosting services. Think of it as an open source, self-hosted OneDrive or Google Drive and a great way to see the new potential of systemd and snapd in Ubuntu WSL.

With the Nextcloud snap you can have a working instance up and running in under 3 minutes. Don’t believe me? Try it for yourself!

First, install the Nextcloud snap.

$ sudo snap install nextcloud

Then create a username and password.

$ sudo nextcloud.manual-install USERNAME PASSWORD

That’s it! To check that it’s running, use:

$ snap services

This command will list all the snap processes running on the instance and whether they’re enabled to run on startup.


We can also examine specific services using systemctl.

$ systemctl status snap.nextcloud.apache

We can see everything appears to be running correctly. You can access your Nextcloud instance by going to http://localhost in your native Windows browser and entering your username and password to check out your new Nextcloud!


Next let’s get down to business. Working directly in WSL is fine when you’re only running one web app. But when you’re working on multiple projects simultaneously you need a way to easily switch between them. Using containers in these instances can improve your workflow considerably.

Manage your web projects with LXD

LXD is a next generation system container manager that supports images for a large number of Linux distributions, not just Ubuntu. LXD is designed to provide a better user experience on top of LXC containers, which are lightweight and easy to get started with.

In Ubuntu WSL, LXD should be installed by default but you can check this by running:

$ snap list

And if it’s not there, simply run snap install lxd.

To make sure it’s up to date run:

$ snap refresh lxd

You can then initialise LXD for the first time by running:

$ lxd init --auto

This sets up LXD with some sensible defaults. Run it without the --auto flag if you want to walk through the configuration options.
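For reproducible setups, lxd init can also read a preseed file from stdin. A minimal sketch follows; the pool and bridge names mirror LXD’s usual defaults, but treat the exact values as assumptions for your LXD version:

```yaml
# Fed to LXD with: lxd init --preseed < preseed.yaml
config: {}
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```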

Let’s start by setting up an example project. Projects are a way of grouping LXC containers to make them easier to manage. You can read more about working with projects in LXD here.

First, create a project called ‘client-website’:

$ lxc project create client-website -c features.images=false -c features.profiles=false

Then switch to work in that project:

$ lxc project switch client-website

Finally we create two LXC containers, one for a database and another for a webserver:

$ lxc launch ubuntu:22.04 webserver
$ lxc launch ubuntu:22.04 dbserver

We can run:

$ lxc ls

to see our containers and their IP addresses.


Install your webserver

Next we’ll install a basic Apache webserver in our webserver container. We can enter a bash shell in the container by running:

$ lxc exec webserver bash

And install apache2:

$ apt install apache2

Let’s use systemctl again to confirm everything is running correctly:

$ systemctl status apache2.service

And then exit the container.

$ exit

Accessing containers in your native Windows browser is a little complex, however with WSLg we can easily use a web browser inside WSL to view all of our projects directly from the container IP.

$ sudo snap install firefox

Then run:

$ firefox &

And navigate to your container IP inside your web browser running on Ubuntu.


This method allows us to work on multiple web projects simultaneously and access them all via the browser with no fuss.

If you would like additional information on how to access containers directly in your Windows browser then check out our new video covering the use of LXD profiles and devices.

With a basic front-end in place, let’s also set up a back-end database for our website using MySQL.

Install MySQL

To install MySQL we need to exit the webserver and transition to the dbserver container:

$ lxc exec dbserver bash

And then run the following command:

$ apt install mysql-server

Again everything should be immediately up and running but we can check with:

$ systemctl status mysql

And if not we can start the database with:

$ systemctl start mysql

To start interacting with our new MySQL database, it’s as simple as running:

$ mysql

And then running some simple commands:

mysql> show databases;

More advanced projects

Hopefully these simple examples have given you a taste of the new workflows and features that are now enabled thanks to systemd on WSL.

For those of you who want to go further we’ve also shared a couple of more advanced projects for you to take a look at.

Deploy your Website with MicroK8s and Docker


In the developer video at the top of this page, WSL PM Craig Loewen demonstrated the power of MicroK8s, a lightweight, performant Kubernetes with sensible defaults that makes it quick and easy to get Kubernetes up and running on your machine.

You can try out Craig’s demo for yourself by visiting his github here.

Build a service application with .NET


The .NET development platform was one of Microsoft’s earliest contributions to open-source projects. Its developer community consists of more than 5 million .NET developers, with many adopting Linux and Linux-based OCI containers at runtime. Earlier this year we announced that .NET 6 is now available from the Canonical repositories.

With Ubuntu WSL it’s never been easier to use .NET to create and test cross-platform applications. In our new tutorial we show you how to create a simple chatbot with .NET and run it as a systemd service.

Read our full tutorial

Get started with Ubuntu WSL today!

If you’re just getting started with WSL, check out our Ubuntu installation guides:

Install Ubuntu on WSL for Windows 10 ›

Install Ubuntu on WSL for Windows 11 ›

And visit ubuntu.com/wsl for more tutorials.

21 September, 2022 05:06PM


Univention Corporate Server

Blog Series about Digital Sovereignty

Part 2: Most Important Drivers of the Digitally Sovereign Emergence in Germany and Europe

A few weeks ago, our working student Ann-Kathrin dedicated her series on Digital Sovereignty to the what & why. To learn more about the meaning of this now almost overused term and the role of open source in strengthening Digital Sovereignty, take a look at part 1 of the blog series.

Today, I will focus on the actors: Who is involved in strengthening and spreading Digital Sovereignty – the state, its citizens, companies, the open source community or institutions? Who is preventing hard-to-resolve dependencies, which are accompanied by security risks as well as growing economic challenges? And who are the key players that are making the genuine improvements and significantly driving the digital turnaround? I hope to find answers to these questions in the second part of this blog series.

The State as Key Player in Strengthening Digital Sovereignty

A comprehensive digital emergence that turns the 2020s into a digital decade requires long-term (financial) commitment on the part of the federal government, business and civil society. In its digital strategy published at the end of August, the German government formulates the targeted digital progress: nationwide fiber-optic connections, digital administrative services and innovations from business and research. Recognizing digitalization as an urgent cross-cutting task is important. However, it is equally important for the German government to take action when it comes explicitly to strengthening Digital Sovereignty and implementing concrete measures such as the establishment of the Sovereign Workplace. Finally, it is the states themselves that, as purchasers of open source software (OSS), could demonstrate the high value they place on sustainable digital independence and freedom of design. Does the state support Digital Sovereignty or rather individual committed parliamentarians? Does the state rely on open source or exclusively on proprietary software from abroad? Is it vulnerable to blackmail and endangering digital security, or does it act as an equal partner to allies?

These and other decisions made by the state shape the image of Digital Sovereignty in civil society and the perception of potentially numerous users. Prioritizing open source and giving it visibility through the state is therefore crucial for an overall social classification and evaluation of OSS. This is why it is so important that the state, with its large IT budget, uses this purchasing power and these regulatory opportunities to reliably achieve strategic goals such as the reduction of dependencies, faster digitalization, on-site competence and capacity building.

 

Sovereign Workplace (Souveräner Arbeitsplatz): Source BMI

Source: Bundesministerium des Innern und für Heimat (Federal Ministry of the Interior and Community). (30 August 2022). Open CoDE. https://gitlab.opencode.de/bmi/souveraener_arbeitsplatz/info

 

In Germany, the federal government provides the framework for improving Digital Sovereignty with laws such as the Online Access Act. As the largest customer with enormous purchasing power, it plays a decisive role in sustainably strengthening the open source economy and establishing a sovereign ICT location. This is due not least to the large number of workplaces in the public sector that can be equipped with OSS. With the “sovereign administrative workplace” there is already a project in which Univention, Dataport and other manufacturers from the open source ecosystem are developing the software for the administrative workplace of the future.

Nevertheless, many administrations still rely on Microsoft and are thus bound to regular security updates and the goodwill of the corporation. To prevent the administration from depriving itself of its creative and innovative opportunities in the long term, the German government must take its intentions, which are set out in the digital strategy, more seriously. It must take concrete steps. Otherwise, things do not look very promising for the only slowly progressing digitalization in Germany.

Sovereignty Potential for the Open Source Economy

A study published by the Konrad Adenauer Foundation in May 2022 shows how far Germany still has to go on the road to Digital Sovereignty. This study by researchers Maximilian Mayer and Yen-Chi Lu shows that the EU is nowhere as dependent on non-European countries as it is in the digital economy. The Digital Dependence Index (DDI) they developed provides information on the relationship between domestic demand and foreign supply of digital technologies. The U.S., China and South Korea perform best, achieving a DDI below 0.70. Germany and other EU countries, on the other hand, all exceed the threshold of 0.75, which indicates a high vulnerability of the digital economy. The two researchers suggest gradually lowering the high digital vulnerability in order to gain autonomy and take on a more active steering role. But aren’t many companies already doing that?

Both established IT companies that have been offering tried-and-tested products and services for years and “industry newbies” know what is at stake for them. They want to break away from proprietary providers to reduce external dependencies, whether they are pharmaceutical companies or industrial manufacturers. Digital Sovereignty affects all industries, including automotive. Many companies are already countering the increasing complexity with open source software. This was the finding of a study by Kugler Maag Cie, in which leading automotive manufacturers and suppliers were asked about the use of OSS in qualitative interviews. Nevertheless, the authors of the study conclude that not every company is yet fully aware of the many advantages of OSS.

Überblick UCS EN

Identity Management with UCS and UCS@school

In addition to the so-called industry newbies, who have discovered open source for themselves in recent years with the accelerating digitalization, it is the open source companies mentioned above, including many medium-sized companies, which have been (further) developing open, digitally sovereign platforms and solutions for a long time. With UCS and UCS@school, Univention provides products that enable simple and open-source IT operations. On our platform, various technology and software manufacturers can offer their solutions, allowing users to choose for themselves. UCS is made even more flexible through our collaboration with technology and cloud partners, which allow for demand-driven customization.

But it’s not just Univention and other OSS companies that are driving the digitally sovereign transformation: the value of the open source community in strengthening OSS is priceless. All over the world, developers are working independently to improve and spread OSS. Together, they form a dense network that sustains change and provides support when other players lack expertise, direction, tenacity, or determination. If it were not for the globally active open source community, OSS would not be anywhere near as developed as it is today.

Why We Need a European Ecosystem

“Regardless of the field, be it cloud, artificial intelligence, cybersecurity or
the so-called Internet of Things, open source software is at the heart of innovation,
and Europe has the opportunity to take the lead here,” writes APELL, the European Association of Open Source Companies, in the German daily “FAZ” in December 2021. APELL thus underscores the relevance of Europe-wide collaboration for the overall success of Digital Sovereignty. The article goes on to say that OSS at EU level would boost economic growth, facilitate the emergence of a successful European IT industry, and create jobs. The advantages of open source are obvious.

These potentials of OSS for the European economy were confirmed in a study published at the end of 2021 by the Fraunhofer Institute for Systems and Innovation Research and OpenForum Europe on behalf of the European Commission’s Directorate-General CNECT. It demonstrates a significant impact of open source on the competitiveness of European companies, economic growth, the start-up scene and technological independence.

Key findings of the study according to the OSB Alliance:

  • Open source makes a significant contribution to the EU’s GDP
  • Increased open source contributions significantly boost the GDP and support start-up creation
  • Open source promotes high software development skills and mitigates skills shortages
  • The value created by open source exceeds the size of Europe’s institutional capacity
  • Open source software lowers total cost of ownership in the public sector as well

Summary and Outlook: Seize the Opportunity Now!

Based on the insights I gained during my research, including reading numerous studies, I know how important a Europe-wide collaboration of actors from business, civil society and politics is for a sovereignly shaped digitalization. For this to succeed, as many German-based actors as possible need to become active and stand up for open source. While some, such as the Brussels-based think tank OpenForum Europe, are setting a good example, others urgently need to catch up. More serious efforts, concrete plans and measures are needed, especially from governments, if we want to stand on our own two resilient and sovereign feet in Germany and Europe.

However, there is not much time left for the actors to do so. Univention CEO Peter Ganten emphasizes the urgency of the issue in a guest commentary in the German daily “Handelsblatt”:

We cannot afford to put this issue, which is equally important as security and the energy transition, on the back burner. If savings are made at this point, an even greater digital dependency will emerge that will be even more dangerous for Germany’s economy and democracy in the medium term than the current dependency on energy supplies.

In the third article in our blog series on digital sovereignty, you will learn why the digitally sovereign transformation is particularly important for administration and education and how these sectors can be supported in their use of OSS.

The post Blog Series about Digital Sovereignty first appeared on Univention.

21 September, 2022 02:42PM by Ann-Kathrin Jekel

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical joins the Connectivity Standards Alliance

Canonical will help set new security and reliability standards for IoT


September 21st, 2022 – Canonical, the publisher of Ubuntu, announces today that it has joined the Connectivity Standards Alliance as a participant member.

In this role, Canonical will help the alliance to develop open standards for the Internet of Things (IoT) and advocate for the role of open-source software in this domain. Canonical is the first company offering a major independent Linux distribution to join the alliance. 

A leader in the embedded Linux space

By joining the Alliance, Canonical reinforces its commitment to advance IoT innovation and set new standards in the embedded Linux market. Canonical is a leading provider of open-source software across the compute spectrum. Ubuntu is the most popular Linux among developers, and Ubuntu Core, Canonical’s containerised operating system based on Ubuntu and designed for embedded devices, sets the standard for security and reliability in IoT.

The Connectivity Standards Alliance creates, evolves and manages IoT technology standards through a well-established, collaborative process. The Alliance empowers companies with practical, usable assets and tools to ease and accelerate development, freeing them to focus on new areas of IoT innovation.

“Canonical’s commitment to the Alliance is a sign of the growing importance of Linux and open source in the IoT space. We look forward to them contributing their expertise in open-source software to help build the standards of the future.” – Chris LaPré, Head of Technology at the Connectivity Standards Alliance.


Matter out of the box on Ubuntu Core

Canonical will support the upcoming Matter standard, an Internet Protocol (IP)-based communications standard designed to make smart home devices secure, reliable, and seamless to use. Historically, the smart home industry has suffered from a lack of standards, resulting in vendor lock-in and fragmentation. The Matter standard aims to increase interoperability and accessibility, which are open-source values that Canonical stands by. 

“We aim to make Ubuntu Core the best platform for Matter devices. Our goal is to support Matter out of the box on Ubuntu Core so that it’s the quickest and most reliable way to bring a Matter device to market”, said Nathan Hart, Product Manager at Canonical.

Ubuntu Core complements the Matter standard, providing polished solutions for over-the-air updates and security maintenance – areas outside the scope of Matter. By removing the overhead of maintaining a secure embedded Linux, Ubuntu Core allows companies to focus on the value of their applications. Ubuntu Core and Matter together can provide a fantastic open source solution for smart home products.

21 September, 2022 01:16PM

September 20, 2022

Ubuntu Blog: Debuginfod is now available in Ubuntu

We are happy to announce that Ubuntu now has a debuginfod service available for its users!

What is debuginfod?

According to the project’s official page, debuginfod “… is a client/server software that automatically distributes ELF/DWARF/source-code from servers to clients such as debuggers across HTTP”.

You can think of debuginfod as a much better replacement for debuginfo packages (i.e., the ddebs packages we have in Ubuntu). When you configure your system to use a debuginfod server, the debugging tool you are using will automatically download the debug symbols it needs over HTTPS, making the process much more seamless.

How can I use it?

If you are using Ubuntu 22.10 Kinetic Kudu, when you install GDB (GNU Debugger) your system will be configured to contact Ubuntu’s debuginfod service automatically when you are debugging a program. GDB will ask you to confirm whether you want to use debuginfod when you invoke it. Please refer to the service webpage for more details on how to configure GDB to automatically use the service.

If you are using a supported Ubuntu series released before 22.10 (e.g., 22.04 LTS), you will need to manually configure the service for now.  But don’t despair!  All you need to do is make sure that the DEBUGINFOD_URLS variable is exported into your environment.  The following should do it:

export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"

If you are using Bash as your shell, you can add the above snippet to your ~/.bashrc file.
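For Bash users, the steps above can be sketched in one place. The `~/.gdbinit` line is an assumption based on upstream GDB’s debuginfod settings and may depend on your GDB version:

```shell
# Enable Ubuntu's debuginfod server for the current shell session
# (URL taken from this announcement)
export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"

# To make this permanent for Bash, append the same line to ~/.bashrc:
#   echo 'export DEBUGINFOD_URLS="https://debuginfod.ubuntu.com"' >> ~/.bashrc

# Assumption: recent GDB versions accept this setting to skip the
# interactive confirmation prompt on every invocation:
#   echo 'set debuginfod enabled on' >> ~/.gdbinit

# Sanity check: the variable is visible to child processes such as gdb
echo "$DEBUGINFOD_URLS"
```

Once the variable is exported, tools built on elfutils (GDB, and others that link against libdebuginfod) will pick it up automatically from the environment.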

Where can I find more information about the service?

You can visit the service webpage, which should redirect you to the Ubuntu Server Guide’s debuginfod page.  There you will find more details about the service and a link to a FAQ page as well.

20 September, 2022 10:32PM

Kubuntu General News: Kubuntu Council Elects 3 Councillors

Members of the Kubuntu Council are responsible for considering proposals made by the wider Kubuntu community. The council formalises and ratifies proposals, then votes to obtain an outcome which directs the course of progress for Kubuntu.

On 11 September I (Rick Timmis) will have been a councillor on the Kubuntu Council for 5 years. Being a councillor is a lot of fun, provides a wonderful sense of fulfillment and also carries a lot of ‘kudos’ in conversations with those of a technical persuasion.

If you have been using Kubuntu for a while, and have explored some of our community, why not consider getting involved a little deeper? We are always looking for testers, contributors, bug reporters, and documentation and blog writers.

Becoming a Kubuntu member is the next step up from being a contributor, and if you’ve already made a few contributions over the last 3 to 6 months, then you should consider making an application to become a member.

Kubuntu members are also entitled to stand for election to the Kubuntu council, where you get to support the development of the Kubuntu project.

3 council positions came up for election, as their 2 year terms were coming to an end. We are delighted to announce that existing councillors Myriam Schweingruber and Valorie Zimmerman were elected for a further term of 2 years. Simon Quigley has stepped down from the Council, and we thank him greatly for his many contributions to the project.
Stepping in to replace Simon, we are delighted to have Darin Miller join the Kubuntu Council. Welcome, Darin!

20 September, 2022 08:45PM

Ubuntu Blog: Ubuntu Core set to redefine industrial computing with new edge AI platform NVIDIA IGX

Enterprises struggle to bring AI and automation to the edge due to strict requirements and regulations across verticals. Long-term support, zero-trust security, and built-in functional safety are only a few challenges faced by players who wish to accelerate their technology adoption. 

At Canonical, we are excited by the promise of bringing secure AI and automation to the edge, and we look forward to providing a stable, open-source foundation for NVIDIA IGX, a new, industrial-grade edge AI platform announced by NVIDIA today. IGX is purpose-built for high performance, proactive safety, and end-to-end security in regulated environments. The first product under the IGX platform is NVIDIA IGX Orin, designed to deliver ultra-fast performance in a compact, power-efficient package. It’s ideal for use cases in manufacturing, logistics, energy, retail and healthcare.

NVIDIA IGX brings functional safety to the industrial edge

Organisations are extending to the edge, pushing workloads closer to where users and embedded systems connect to the network. NVIDIA IGX is charting the course for customers to navigate the shift to the edge by bringing built-in functional safety.

Three layers of functional safety — reactive, proactive and predictive — play a crucial role at the industrial edge. Whereas the reactive layer is about mitigating the severity of threats, and predictive safety comprises anticipating future exposure based on past performance, proactivity is about identifying concerns before events occur.

Designed for industrial and medical certifications, IGX Orin redefines industrial computing by delivering proactive safety in regulated environments. The prevention and detection of random and systematic hardware errors are safety features crucial for environments where humans and robots work together. IGX Orin features a programmable safety microcontroller unit built into the board design, enabling functional safety to become a reality.

Enterprises aiming to tap into the fourth industrial revolution can now rely on IGX Orin’s cutting-edge 275 tera-ops per second of AI performance to proactively prevent damage, reduce costs and improve factory efficiency. Even at the edge, requirements and regulations vary, from automotive to industrial to medical use cases. 

NVIDIA IGX

Amanda Saunders, Senior Manager of Edge AI at NVIDIA, said, ”The growth of AI and automation at the edge has led to new requirements in specialised markets. With NVIDIA IGX Orin, we are helping customers seize the opportunity at the edge by bringing AI, security, and proactive safety to regulated markets like industrial automation and medical devices.”

In addition, IGX Orin takes power optimisations from the mobile system-on-a-chip world to a server form factor. Shipping with an NVIDIA ConnectX-7 SmartNIC capable of 200 gigabits per second, IGX Orin is an energy-efficient system built for low-latency applications with real-time constraints.

Trusted and secure underlying OSs for a new generation of industrial use cases

Ubuntu, backed by Canonical, is the most popular open-source operating system (OS) for developers, with commercial-grade support available for production deployments. Such support means that Ubuntu is not just the reference OS for innovators and developers, but also the vehicle enabling enterprises to run secure AI workloads at the edge without users having to worry about the stability and security of the underlying platform.

With Ubuntu Core, the application-centric OS for embedded Linux devices, the built-in security of IGX Orin devices can be enhanced, beyond bug fixes and CVE patches. Industrial pioneers will benefit from Ubuntu Core’s state-of-the-art security features, from full-disk encryption to strict confinement. Similarly, every edge application on top of Ubuntu Core sits in a secure, sandboxed environment. By using an OS designed for utmost reliability and optimised for security, world-leading suppliers and manufacturers are free to concentrate their efforts and redirect resources towards their value-add activities.

Bringing high performance to the edge

The 22.04 LTS release of Ubuntu brought increased energy-performance features. Running Ubuntu on the new IGX Orin will provide developers with significant usability, battery and performance improvements. 

Real-time kernel support by Canonical is also available for Ubuntu users, guaranteeing ultra-low latency and security for critical infrastructure. “There is a greater need for high-performance and energy-efficient systems built for real-time applications at the edge,” said Edoardo Barbieri, Product Manager at Canonical. “Real-time Ubuntu will power the next generation of industrial innovations by providing a deterministic response time to their extreme low-latency requirements. Powered by IGX Orin’s high performance, we will deliver minimal latency for enterprise workloads at the edge.”

Using Ubuntu will also enable the community to leverage the open-source ecosystem of applications and AI-based workloads.

Ideal to accelerate industrial transformation

Companies deploying smart automation solutions using Ubuntu Core have plenty to look forward to with the IGX platform.

Take, for instance, robotics: as the automation market grows, so do robotics development and the need for functional safety in environments with close human interactions. Ubuntu Core developers are pushing robotics to new heights, from warehouses to hospitals. NVIDIA IGX’s safety architecture and features will allow robotics companies to accelerate the adoption of their products in safety-critical environments.

“As factories strive for increased overall equipment effectiveness and reduced process downtime, Orin IGX and Ubuntu deliver the perfect combination of high performance, built-in functional safety, end-to-end security and long-term support,” Edoardo Barbieri, Product Manager at Canonical, said, “By delivering proactive safety in regulated environments, we can now predict machine failure based on vibrations before they happen. With proactive part replacement and by preventing downtime, we are bringing the future of industrial automation forward.”

Ready to redefine industrial computing

By redefining industrial computing, NVIDIA IGX will meet the enterprise-grade demands of high-performance systems at the edge. Forward-thinkers and innovators are now in the driver’s seat to push the envelope of AI and robotics in regulated markets.

With the introduction of NVIDIA IGX, enterprises get a boost to bring AI and automation to the edge. Canonical is looking forward to working with NVIDIA to deliver the highest performance, ease of use, and industrial readiness with Ubuntu and Ubuntu Core on the NVIDIA IGX platform.

Resources

For more information about NVIDIA IGX, please check out the NVIDIA blog here.

To explore more about Ubuntu Core, please read the blog and watch the on-demand webinar.

To join us at GTC 2022, please check the blog here.

20 September, 2022 04:58PM

Ubuntu Blog: Common use cases for digital twins in automotive

Digital twins have become somewhat of a buzzword in the past couple of years. But what exactly are they? A digital twin, as its name indicates, is a non-physical copy of a physical object, just like a digital scan of a physical picture. This virtual element enables a real-time view of all relevant data coming from said object. Depending on the system being studied, specific sensors can be tracked and monitored. This allows for the replication of the system’s environment (road adhesion, weather, surrounding objects or systems, etc.). In this blog post, we will discuss digital twins and their use cases in automotive.



For automotive, the value of using digital twins lies mostly in running simulations. It’s easier (and cheaper) to simulate crash tests, autonomous driving and other scenarios in a simulator, rather than using physical vehicles.

With the use of artificial intelligence and machine learning (AI/ML), the virtual twin also can help identify issues before they appear on the physical twin. This makes it possible to apply fixes to the physical twin before any problem occurs in real life. Let’s explore the use cases in more detail. 

Developing new vehicles

Digital twins in the automotive industry can be used during the system design phase in multiple fields, from vehicles to robotic arms. From a vehicle perspective, digital twins allow for more reliable vehicle design and development.


Consider electric vehicles (EVs). Modelling energy consumption for new prototypes is very important. Having a clear view of how a vehicle behaves – from the battery management system (BMS) to wheel and tyre pressure efficiency – allows engineers to perfect its design. Being able to optimise the placement of wires while limiting thermal or electromagnetic impacts can help reduce the weight of the vehicle as well as its cost. It’s true that the battery’s state of charge is the first element that comes to mind when thinking about power consumption. But vehicle aerodynamics have a huge impact on power consumption (in all vehicles, not just EVs). Thanks to computational fluid dynamics simulations, the vehicle’s aerodynamics can be highly optimised. This is how OEMs can obtain the lowest drag coefficient.

Factory and supply chain simulations

Digital twins can also help to optimise manufacturing flows. OEMs and suppliers have to take the whole supply chain, including manufacturing constraints, into account to streamline operations. From a factory point of view, the design of robotic arms, the development of the supply chain and conveyor belts are highly critical. Being able to anticipate the best positioning of sensors while designing factory machines can save time during use and also increase savings related to component and material optimisation. Furthermore, companies can simulate their supply chain using extended digital twin models and running AI/ML models to test different scenarios.

Autonomous driving simulations

Digital twins in the automotive industry can also be used during the serial life phase.

Thanks to digital twins, it’s possible to simulate autonomous driving (AD) algorithms using AI/ML computations in real time. Indeed, verifying AI/ML algorithms in a simulated environment allows engineers and developers to know whether they are safe. Once an algorithm has been tested in a digital environment imitating the real world for tens of thousands of kilometres, it can be applied to a physical prototype. While physical testing takes time, it’s possible to speed up simulations and run them in parallel in order to generate thousands of hours of driving while keeping a realistic simulated environment with applied gravity, weight and physical collision anticipation.


Digital twins allow for better monitoring of such computations and can help identify specific scenarios that require more in-depth simulations. For example, some AD real-life scenarios are extremely difficult to reproduce but can go a long way to fine-tune sensors and algorithms. Physical prototypes won’t go away, but having a digital model of a real physical sensor (i.e. camera, lidar, etc.) makes it unnecessary to reproduce scenarios that were unforeseen during development out in the open world. Not only does this allow for huge savings, it also limits the risk of accidents with other vehicles, pedestrians, etc.

Predictive maintenance

Digital twins that use real-time vehicle sensor data make predictive maintenance achievable. Many automotive companies use them today in order to monitor any sensor (let’s say the airbag deployer), and obtain a status on the wear and tear of any part of a vehicle or factory machine. This allows for huge savings (no more downtime, stock and resource anticipation). It minimises the risk of accidents, whether it’s in the factories or from vehicle defects, and provides constant knowledge on the status of each critical (safety-related) and non-critical system element.

A powerful enabler for the automotive industry

As you can see, digital twins offer vast potential for automotive companies at different stages of the vehicle’s lifecycle.


In the development phase, they help OEMs lower costs by simulating and optimising the vehicle’s conception, wiring, weight, aerodynamics and overall structure – instead of testing each of these variables on a physical prototype. In the manufacturing phase, they help optimise equipment locations, maintenance, and required movements for each step of the building process. In the serial life phase, they can help anticipate wear and tear, defects, but also be used for replaying real-life scenarios encountered by a physical vehicle.

In this post, we’ve only covered the tip of the iceberg! In upcoming blogs, we will delve into real-life use cases that your company may well be facing right now. For those interested in the technologies that power digital twins, we will discuss how vGPUs can help with virtual desktop infrastructure (VDI), high performance computing (HPC) and advanced autonomous driving simulations. Stay tuned.

Contact Us

Curious about automotive at Canonical? Check out our webpage.

Want to learn more about software-defined vehicles? Download our guide.


Curious about how Charmed OpenStack and NVIDIA vGPU Software work together? Watch our dedicated webinar.

20 September, 2022 09:30AM

September 19, 2022

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 753

Welcome to the Ubuntu Weekly Newsletter, Issue 753 for the week of September 11 – 17, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

19 September, 2022 10:42PM by guiverc

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.26 Beta available for testing

Are you using Kubuntu 22.04 Jammy Jellyfish, our current stable LTS release? Or are you already running our development builds of the upcoming 22.10 Kinetic Kudu?

We currently have Plasma 5.25.90 (Plasma 5.26 Beta) available in our Beta PPA for Kubuntu 22.04 and for the 22.10 development series.

However, this is a beta release, and we should reiterate the disclaimer from the upstream release announcement:

DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.

https://kde.org/announcements/plasma/5/5.25.90/

5.26 Beta packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
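As a sketch, reverting might look like the following (a hypothetical session; ppa-purge downgrades every package that came from the PPA back to the archive versions):

```shell
# Install ppa-purge if it is not already present
sudo apt install ppa-purge

# Remove the beta PPA and downgrade its packages in one step
sudo ppa-purge ppa:kubuntu-ppa/beta
```

After the purge completes, a reboot is a sensible precaution before resuming normal use.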

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.24?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel IRC channel on libera.chat if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on libera.chat
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel


19 September, 2022 05:12PM

September 18, 2022

hackergotchi for OSMC

OSMC

OSMC's September update is here with Debian 11 (Bullseye)

In May, we released test builds of Debian Bullseye for all supported OSMC devices. Over the past four months, we've been working hard on a number of improvements and fixes to ensure that OSMC is released on the latest version of Debian with the best experience possible.

We are still shipping Kodi v19.4, as a 19.5 point release has not yet been finalised. However, we have updated the version we ship to the latest commit on the Kodi Matrix branch.

Update: some users of the official Plex add-on have reported some issues. A solution has been made available here.

Here's what's new:

Debian changes

There are some significant changes to the Debian base as a result of this upgrade, which we would like to highlight here:

  • The default and only Python interpreter is Python 3.x. Previously, we included Python 2 as this was needed for the Samba server package. As a result, a significant amount of space is saved on the root filesystem and initial installation. If you have your own scripts which use Python 2, you’ll need to upgrade these to use Python 3.
  • New installations of OSMC will contain the usrmerge package. This means that /{bin,sbin,lib}/ directories become symbolic links to /usr/{bin,sbin,lib}/. Debian 12 (Bookworm) will only support the merged usr-root filesystem layout.
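A quick way to check whether an installation already uses the merged layout is to see whether the top-level directories are symlinks (a minimal sketch using standard shell tools):

```shell
# On a merged-usr system these are symlinks into /usr;
# on an unmerged system they are real directories.
for d in /bin /sbin /lib; do
    if [ -L "$d" ]; then
        echo "$d -> $(readlink "$d") (merged)"
    else
        echo "$d is a real directory (not merged)"
    fi
done
```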

Bug fixes

  • Fixed an issue which caused some devices (notably FLIRC) to behave badly and register repeated key presses
  • Fixed an issue which could cause green screens when playing HEVC content on Vero 4K/4K+
  • Fixed an issue when pairing and remembering devices that used a Bluetooth PIN
  • Fixed an issue which could cause exFAT formatted drives to be mounted with prohibitive permissions
  • Fix CRDA configuration in My OSMC so that it now works as expected
  • Fixed an issue playing VP9 content on Vero 4K/4K+
  • Fixed an issue where seeking could cause a green screen on Vero 4K/4K+
  • Fixed an issue where there can be a single green pixel in the top corner on Vero 4K/4K+ during playback
  • Fixed an issue where some Bluetooth devices would not automatically reconnect
  • Fixed a number of issues which could cause problems with CEC on Vero 4K/4K+
  • Fixed an issue which prevented Raspberry Pi Model 3A+ from booting

Improving the user experience

  • Added support for HD audio passthrough on Raspberry Pi 4 / 400
  • Improved VC-1 playback quality on Vero 4K/4K+
  • Updated Vero 4K/4K+ to new video stack with a number of playback improvements
  • Improved tethering logging in My OSMC
  • Added a Disconnect option in My OSMC's Bluetooth settings window
  • Home screen adjustments and minor improvements for the OSMC skin
  • Remove EDID 3D parsing limitation, improving 3D compatibility for some projectors with Vero 4K/4K+
  • Disabled interlaced mode selection on Vero 4K/4K+, so that interlaced modes cannot be selected in Kodi, where they can lead to a suboptimal experience
  • Backported CIFS3 support to Vero 4K/4K+
  • Mask sensitive information when generating logs via My OSMC
  • Prevent a user from changing into windowed mode in Kodi, which may cause them to lose the display
  • Improve the TTY terminal by allowing a user a longer time to log in
  • Improve the TTY terminal by using a larger, more readable font
  • Reduce log size in My OSMC by limiting system journal output
  • Add support for retrieving EDID in logs on Raspberry Pi
  • Optimise video thumbnails in Kodi to reduce size significantly without a noticeable reduction in quality
  • Added ZRAM kernel support for Raspberry Pi models
  • Added SCSI Generic kernel support for Vero 4K/4K+
  • Warn a user if they try and set a GUI resolution above 1080p on Vero 4K/4K+
  • Improve detection of frame rate for specific streams on Vero 4K/4K+
  • Re-worked right-eye first detection to improve playback of some MVC content on Vero 4K/4K+
  • Refactored amcodec video decoder for Vero 4K/4K+ with a number of improvements

Miscellaneous

  • Re-factor My OSMC with numerous improvements
  • Updated website URL in MOTD
  • Ensured that default password check can also handle the yescrypt hash which is now used by default
  • Updated Transmission torrent client to version 3.0.0

Wrap up

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC setup. Of course, if you have updates scheduled automatically, you should receive an update notification shortly.

If you enjoy OSMC, please follow us on Twitter, like us on Facebook and consider making a donation if you would like to support further development.

You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the best of OSMC.

18 September, 2022 05:23PM by Sam Nazarko

September 16, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Join us at Operator Day, hosted by Canonical at Kubecon NA 2022

What is Operator Day?

Software operators are crucial in the Kubernetes landscape. A software operator encapsulates the knowledge and expertise of a real-world operations team and codifies it into a dedicated piece of software. Software operators help human operators and administrators run their applications efficiently and effectively. Canonical provides an OSS-based platform and framework for building and running operators.

And what better place to talk about operators than KubeCon, the Cloud Native Computing Foundation’s flagship conference? Canonical has been hosting Operator Day at KubeCon since 2020. The fourth edition of Operator Day took place at KubeCon Europe earlier this year. We hosted various sessions about the basics behind operators: what they are, how to use them, how to create them and how your team can benefit from them. Speakers shared knowledge and insights about their software operator journey, from configuration management to application management. If you missed it, you can access everything freely on YouTube.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/4b9e/Op-day.png" width="720" /> </noscript>

Join us at the 5th edition of Operator Day on Monday, October 24th!

The 5th Operator Day is coming up. It will take place at KubeCon North America 2022. This edition will center on cases where software operators have been applied successfully. Join us to hear about our experience in building software operators using Juju, an open-source operator lifecycle manager. Operators implemented for Juju are called Charmed Operators.

Operators beyond Kubernetes

Although the concept of a software operator is often associated with Kubernetes, operators can be applied to many substrates: bare metal servers, private clouds, popular public clouds and Kubernetes clusters. Juju offers a mature, consistent and intuitive user interface for integrating applications for all of these substrates, covering the entire stack.

Especially for Kubernetes solutions, applications are not isolated but are composed. As solutions become more and more complex, integrating applications is more critical than ever. Juju and Charmed Operators provide a uniform abstraction layer on top of Kubernetes clusters, bare metal and private clouds.

The 5th Operator Day will be a virtual event on Monday, October 24th in the week of KubeCon NA 2022. You can register for this event during the registration for the KubeCon and choose Operator Day in the co-located events section – or directly use this link:

More on previous events:

16 September, 2022 04:25PM

hackergotchi for Ubuntu

Ubuntu

Call for Ubuntu Community Council nominations

The Community Council is looking for nominees for the upcoming election.

We will be filling all seven seats this term, with terms lasting two years. To be eligible, a nominee must be an Ubuntu Member. Ideally, they should have a vast understanding of the Ubuntu community, be well-organized, and be a natural leader.

The work of the Community Council, as it stands, is to uphold the Code of Conduct throughout the community, to ensure that all the other leadership boards and councils are running smoothly, and to ensure the general health of the community, including not only supporting contributors but also stepping in for dispute resolution, as needed.

Historically, there would be two meetings per month, so the nominee should be willing to commit, at minimum, to that particular time requirement. Additionally, as needs arise, other communication, most often by email, will happen. The input of the entire Council is essential for swift and appropriate actions to get enacted, so participation in these conversations should be expected.

As you might notice from Mark Shuttleworth’s post, there is a greater vision for the structure of the Ubuntu community, so this term could be an exciting time with perhaps vast and sweeping changes. That said, it would be wise for nominees to have an open mind as to what is to come.

To nominate someone (including yourself), send the name and Launchpad ID of the nominee to community-council [AT] lists.ubuntu.com. Nominations will be accepted for a period of two weeks until 30 September 2022 11:59 UTC.

Once the nominations are collected, Mark Shuttleworth will shortlist them and an actual election will take place, using the Condorcet Internet Voting Service. All Ubuntu Members are eligible to vote in this election.

If you have any other questions, feel free to post something in the Ubuntu Discourse #community-council category so all may benefit from the answer.

Thanks in advance to all that participate and for your desire to make Ubuntu better!

16 September, 2022 03:48PM by José Antonio Rey

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: FAQ: MLOps with Charmed Kubeflow

Charmed Kubeflow is Canonical’s Kubeflow distribution and MLOps platform. The latest release shipped on 8 September. Our engineering team hosted a couple of livestreams to answer the questions from the community: a beta-release webcast and a technical deep-dive. In case you missed them, you can read the most frequently asked questions (FAQ) about MLOps and access helpful resources in this blog post.


Note that you can also watch the videos on Youtube: Beta-release & a technical deep-dive.

Upstream Kubeflow and Charmed Kubeflow: the differences explained

What’s the main feature of the new Kubeflow 1.6?

One of the themes of this version of Kubeflow was improved user experience, and Pipelines 2.0 in particular. The new release comes with improved input-output rules, faster metadata support, and simpler authoring of components. Learn more about the new Kubeflow pipelines in our livestream.
Kubeflow 1.6 also supports Kubernetes 1.22 and brings many bug fixes related to Notebooks. Learn more about this here.

What’s the difference between Kubeflow upstream and Charmed Kubeflow?

Charmed Kubeflow is an official distribution of the upstream project. It includes the same features and follows the same release cycle and roadmap development. The main difference is that Charmed Kubeflow uses charms as operators that manage its lifecycle.

Charms are Kubernetes operators that automate maintenance and security operations. They accelerate workload deployment,  allowing data scientists to take models to market more efficiently.

Will Charmed Kubeflow always release at the same time as the upstream?

Canonical made a considerable effort to align the release cycle with the upstream release. Part of our team also actively contributes to the upstream project.

Are Charmed Kubeflow’s pipelines aligned with the upstream ones?

Yes. Charmed Kubeflow’s pipelines have the same features as the upstream project’s, and the same applies to all of the components in Canonical’s distribution.

Charmed Kubeflow: understanding its features 

As a Charmed Kubeflow user, why would you choose the latest stable release?

Charmed Kubeflow has several channels, which can be found on CharmHub. Latest stable is what most users should aim for, because it has been extensively tested and verified by the engineering team. Edge, however, is dedicated to those who are interested in testing the bleeding-edge features the engineering team has been working on.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/3c95/3_Capture.jpg" width="720" /> </noscript>

What’s the difference between manifests and charms?

Kubeflow Manifests provide a static reference deployment for Kubeflow.  They can be what you need if you’re interested in being your own expert by managing your own configuration changes, catching typical errors, and general maintenance.  Charmed Kubeflow wraps the same applications of Kubeflow in operators that handle a lot of the configuration and maintenance hassles.  From deployment to day-2 operations, these operators make managing Kubeflow easier by automating and handling common situations.  These operators also provide easier integrations with other tools, such as observability, using Grafana or Prometheus, through interoperability with the rest of the charming ecosystem found on CharmHub.

Why is Charmed Kubeflow integrated with an observability stack?

Whenever there is a big deployment,  system administrators are interested in understanding what’s happening with the product. The integration with Grafana and Prometheus gives them further information about the status of the deployment through the collection of metrics and logs. The administrator can see operational details, like how many resources are being used or how many deployments are live.

What’s the difference between the beta and general availability release?

The main difference between Charmed Kubeflow 1.6 Beta and the general availability release consists of small bug fixes, related to various components such as Tensorboard or Notebooks. All known issues in Charmed Kubeflow 1.6 Beta are available here.  They have been addressed and fixed for the general availability version.

Integrations with Charmed Kubeflow

How can you host the dashboard publicly?

Charmed Kubeflow can host the dashboard so that it is accessible from a public domain. To install it, please follow the quick start guide; to access the dashboard, follow our docs. To make your dashboard reachable from a public domain, you would have to expose the istio-ingressgateway service publicly, as you would with any other Kubernetes Service. Be aware of the additional security risks that come with exposing dashboards to the public internet.
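As a sketch of one common approach (the `kubeflow` namespace and the LoadBalancer service type are assumptions here; adapt them to your cluster, and keep the security caveat above in mind):

```shell
# Switch the Istio ingress gateway service to a cloud load balancer
# (assumes the gateway lives in the "kubeflow" namespace; verify with:
#   kubectl get svc -A | grep istio)
kubectl -n kubeflow patch svc istio-ingressgateway \
  -p '{"spec": {"type": "LoadBalancer"}}'

# Then point your public DNS record at the EXTERNAL-IP shown here:
kubectl -n kubeflow get svc istio-ingressgateway
```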

Will the old pipelines work after the update?

Yes. You can learn more about KFP v2 from the upstream documentation.
If you would like to upgrade from Charmed Kubeflow 1.4 to Charmed Kubeflow 1.6, please follow our guide.

Any suggestions for a storage class to use across multiple nodes under MicroK8s 1.22?

The MicroK8s team provides an OpenEBS addon with more advanced storage features than the default MicroK8s storage.
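Enabling the OpenEBS addon is a short sequence (a sketch for MicroK8s 1.22; on newer releases the addon moved to the community repository, and OpenEBS expects iSCSI support on each node):

```shell
# OpenEBS storage engines rely on iSCSI being available on every node
sudo systemctl enable --now iscsid

# Enable the OpenEBS addon in MicroK8s
microk8s enable openebs

# Verify the storage classes it created
microk8s kubectl get storageclass
```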

Charmed Kubeflow: what’s next

What will be the focus for the next release?

In the future, Charmed Kubeflow is going to keep evolving. From a CI/CD angle, there will be more scheduled testing, for both charms and bundles. Canonical will improve the out-of-the-box experience by providing more detailed documentation and projects to help users get started. Moreover, the observability integration is going to grow: there will be a set of Grafana dashboards, such that once deployed, system administrators have all the information handy.

At the moment, the upstream project is building the new release team. Shortly after, the team is going to start working on the roadmap. Join the community meetings if you want to stay up to date on this.

What other apps are you thinking of integrating with?

Charmed Kubeflow is a product that Canonical is looking forward to integrating with other applications such as MLFlow, Spark or Mindspore. Improving model management is one of the challenges that Canonical wants to address by integrating with other applications.

Do you consider ML model monitoring as well?

On our next roadmap, we are looking into this feature and how to enable it for our users. Our engineering team is looking into model monitoring and model drift monitoring, analysing the various options and apps that could be used for these tasks.

Join the community

How can we get involved in the upstream community?

Anyone who is interested in Kubeflow can contribute to the community in different ways. Follow the upstream guidelines and start contributing right away. The release team is a great way to get started because you get an overview of the whole project. You can sign up until 28 September!
If you are a Charmed Kubeflow user, you can share your feedback on Discourse or raise issues on our GitHub repository.

If you have other questions about MLOps or Charmed Kubeflow, please contact us on Discourse.


16 September, 2022 09:06AM

Ubuntu Blog: ASUS IoT and Canonical partner on Ubuntu Certification for IoT Applications

TAIPEI, Taiwan, September 14, 2022: ASUS IoT, a global AIoT solution provider, today announced a partnership agreement with Canonical to certify the device manufacturer’s boards and systems with Ubuntu 20.04 LTS. ASUS IoT devices are used in a wide range of edge computing applications. New devices like the PE100A will be certified for optimised performance with Ubuntu, ensuring faster development times and ease of configuration.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/7a26/image-1.png" width="720" /> </noscript>

The ASUS IoT and Canonical guarantee

This collaboration between ASUS IoT and Canonical ensures that individual hardware I/O functions conform to industrial-grade standards and to the version of Ubuntu running on the device. Moreover, security updates for the Ubuntu base OS, critical software packages and infrastructure components are provided for up to 10 years with Canonical’s Extended Security Maintenance.  The solution is ideal for companies in industrial manufacturing, smart retail, smart transportation and many other sectors.

The first NXP i.MX8 Canonical certification

The PE100A is a hand-sized edge computer optimised for TCO (total cost of ownership) and the first NXP i.MX8 device to be certified by Canonical for use in Ubuntu 20.04 LTS. It is supported by a secondary I/O board for extra functionality; the i.MX 8M system-on-a-chip (SoC) delivers efficient performance with low power consumption.

With Canonical’s certification, the PE100A is preloaded with an Ubuntu image that empowers customers to focus on applications and software development. They can choose between Ubuntu Core 20 and Ubuntu Server 20.04 at the time of purchase. 

Tony Chiang, General Manager for Canonical in Taiwan, said, “For Canonical, working with ASUS IoT to combine our leading Ubuntu operating system with their leading edge computing and IoT hardware is a natural partnership. This is a great opportunity for Canonical to cooperate with ASUS IoT to deliver the best possible Linux experience, providing long term security and reliability support for all its end users. We’re looking forward to working with ASUS IoT to deliver many more high quality, great value products for the IoT market.”

More about Ubuntu Certified Hardware: https://ubuntu.com/certified

###

About ASUS

ASUS is a global technology leader that provides the world’s most innovative and intuitive devices, components and solutions to deliver incredible experiences that enhance the lives of people everywhere. With its team of 5,000 in-house R&D experts, ASUS is world-renowned for continuously reimagining today’s technologies for tomorrow, garners more than 11 awards every day for quality, innovation and design, and is ranked among Fortune’s World’s Most Admired Companies.

About ASUS IoT

ASUS IoT is a sub-brand of ASUS dedicated to the creation of incredible solutions in the fields of AI and IoT. Our mission is to become a trusted provider of embedded systems and partner to the wider AIoT solutions ecosystem.

About Canonical

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars, and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company. 


16 September, 2022 03:43AM

September 15, 2022

Ubuntu Blog: Why Enterprises Choose Canonical Ubuntu on AWS

Canonical is excited to partner with AWS and feature on this week’s episode of AWS on Air. Watch us live on September 16, at 12pm PT.

As the publisher of the Linux distribution Ubuntu, Canonical supports, secures, and manages Ubuntu infrastructure and devices for thousands of businesses. Ubuntu runs from cloud to edge. It is the platform that everybody uses on the public cloud, including AWS, and the preferred workstation experience for builders all over the world!

In this blog we will do a deep dive into key reasons why enterprises choose Ubuntu and how it helps companies run open source securely in the cloud.

1. Ubuntu is the builder’s OS of choice.

From software developers to machine learning engineers and data scientists, from your desktop to the cloud, from those who want freedom and a fast time to market to those who need security and compliance, Ubuntu has you covered!

According to the HackerEarth Developer Survey 2021, Ubuntu is the Linux operating system preferred by developers.

This story applies on the cloud as well. Ubuntu is the most used third-party operating system for running production workloads, powering more than 60% of cloud workloads today.

Why is Ubuntu the builder’s OS of choice?

There are several reasons behind this. While every case has its unique particularities, the reasons builders prefer Ubuntu can be summarized in three points:

  • It gives users the freedom of Linux and a way to consume open source software with no toil on configuration, maintenance, security and so on.
  • The strong community behind Ubuntu makes it the best-supported OS on the market. A simple Google search for how to do something in Ubuntu will return hundreds of relevant results. Its third-party open source repository, “Universe”, is also maintained and supported by both the community and Canonical.
  • Ubuntu has a dedicated version for those that require additional security and compliance. Upgrade to Ubuntu Pro to enjoy extended maintenance support (including more than 30,000 third-party open source packages), Livepatch, FIPS for FedRAMP compliance and CIS/DISA-STIG hardening profiles.

2. Ubuntu is secure by design

Did you know that, according to a study done this year by Synopsys covering over 2,400 commercial codebases, 78% of the code reviewed was open source? Open source is everywhere! The study revealed that 88% of these codebases contained components with outdated versions, and 81% contained at least one known vulnerability. The main reason for these vulnerabilities was that the majority of the reviewed apps were either running components more than four years old or had received no updates from the vendor. If you recall Equifax’s data breach or the Log4j vulnerability episode, you will understand the danger this poses.

This shows that security should be prioritized not only by enterprises, but by everyone. Having a way to get on-call support, committed or SLA-backed maintenance and enterprise compliance is more critical than ever. Nobody should deploy an open source application with no support or further maintenance. Just imagine: what happens if tomorrow someone discovers a security vulnerability in a package in your app?

Security is at the core of Ubuntu. No system is perfectly secure, and vulnerabilities will always arise, which makes the speed and success with which these issues are resolved ever more important. Ubuntu also comes with automatic critical updates and committed security maintenance as part of its LTS model, which covers each Ubuntu LTS release for 5 years from its release date.

For companies and enterprises that need professional solutions, such as SLA’d support, compliance, and other additional features, upgrade to Ubuntu Pro on the AWS Marketplace.

Ubuntu Pro is the same Ubuntu everybody knows and loves, with access to extended maintenance and security support for up to 10 years from the release date. It includes kernel Livepatch, FIPS for FedRAMP compliance, and CIS and DISA-STIG automated hardening profiles. Additionally, it includes security maintenance for more than 30,000 third-party open source packages from the Ubuntu repositories.

Our team is here to ensure our users can consume OSS securely and consistently.

3. Ubuntu is the first choice on public clouds

Ubuntu Server is available as a first-class citizen on all major cloud platforms, running more than 60% of cloud workloads.

We have had an active collaboration with AWS for over a decade. Ubuntu is available as a native and optimized OS on EC2 and other AWS services such as EKS, Lightsail, and even as container images on ECR Public Gallery.

You will always find the latest and greatest Ubuntu on AWS.

4. Ubuntu ensures security even on containers

Containers are great! We believe that they are the natural evolution of cloud services where you can focus on the top layer of the stack, while getting more portability, scalability, easier management, and much more.

But a study done by Unit 42 of Palo Alto Networks in 2021 found that 96% of third-party container applications deployed in cloud infrastructure contained known vulnerabilities.

This means that the security concerns remain the same, if not greater, for containers. Even if you are using managed services to keep the risk low, what goes into the container needs to be secure, from the base layers to the packages you install into it (i.e. provenance and maintenance).

On top of that, containers are more challenging because they are harder to keep up to date than a VM. They are immutable assets: once you publish a container, it runs with a different lifecycle than a VM. In other words, you don’t log in and run an update; you have to rebuild and redeploy.

We provide Ubuntu LTS container images for free, which you can grab from the ECR Public Gallery, so that users can build and launch their own applications on top of a base container supported for 5 years. There you can also find LTS container images for important third-party open source applications such as redis, apache2, nginx, cassandra, mysql and more, ready to use out of the box.
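For example, pulling the Ubuntu base image from the ECR Public Gallery and building on top of it looks like this sketch (the `22.04` tag and the nginx install are illustrative assumptions; check the gallery page for the tags currently published):

```shell
# Pull the Ubuntu LTS base image from the ECR Public Gallery
docker pull public.ecr.aws/ubuntu/ubuntu:22.04

# Use it as the base of your own application image
cat > Dockerfile <<'EOF'
FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
EOF
docker build -t my-app .
```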

Finally, to close off with the latest news: we are currently working on Alpine-sized Ubuntu containers. We call them chiselled images; they are the smallest possible containers, reducing footprint and attack surface while still being the same Ubuntu you know and love: no additional packages, no package manager, no shell, no root, etc. We are confident that this will undoubtedly improve security while creating lightweight containers with lower resource utilization.

Less than a month ago, we launched our first chiselled container with the .NET runtimes, which got astounding support from the community.

How do you get it?

You can find Ubuntu directly on the EC2 console and Ubuntu Pro on AWS Marketplace. 

Our container images are available on ECR Public Gallery and Docker Hub.
Visit Ubuntu Pro on AWS to learn more. If you already have Ubuntu Pro visit our onboarding guide to learn how to get started and get the best value from Ubuntu Pro.


15 September, 2022 11:22PM

hackergotchi for Purism PureOS

Purism PureOS

Auto Contrast on Librem 5 smartphones

Using the Librem 5 outdoors is easier now, because Librem smartphones have more sensors than an ordinary computer, and we can use these extra sensors to improve ease of use and accessibility. The most recent release of Phosh (our user interface) has added a feature to automatically switch to a high contrast theme when in […]

The post Auto Contrast on Librem 5 smartphones appeared first on Purism.

15 September, 2022 08:50PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Full Circle Magazine: Full Circle Weekly News #278

Based on Sway, a port of LXQt is being developed:
https://cartaslinux.wordpress.com/2022/08/28/lxqt-sway-usando-lxqt-para-hacer-que-sway-sea-mas-amigable/

Fedora Linux 39 plans to disable SHA-1-based signatures support by default:
https://www.mail-archive.com/devel-announce@lists.fedoraproject.org/msg02882.html

Apache OpenOffice passed 333 million downloads:
https://blogs.apache.org/OOo/entry/more-than-333-million-downloads

Release of QEMU 7.1:
https://lists.nongnu.org/archive/html/qemu-devel/2022-08/msg04598.html

Armbian 22.08:
https://www.armbian.com/newsflash/armbian-22-08/

Release of Ubuntu 20.04.5 LTS with graphical stack and kernel update:
https://lists.ubuntu.com/archives/ubuntu-announce/2022-September/000283.html

Linux From Scratch 11.2 and Beyond Linux From Scratch 11.2:
https://lists.linuxfromscratch.org/sympa/arc/lfs-announce/2022-09/msg00000.html

Release of OBS Studio 28.0:
https://github.com/obsproject/obs-studio/releases/tag/28.0.0

Release of Nmap 7.93, timed to the 25th anniversary of the project:
https://seclists.org/nmap-announce/2022/1

The webOS Open Source Edition 2.18:
https://www.webosose.org/blog/2022/09/02/webos-ose-2-18-0-release/

Release of Nitrux 2.4:
https://nxos.org/changelog/release-announcement-nitrux-2-4-0/

Google Open Source Software Vulnerability Rewards Program:
https://security.googleblog.com/2023/08/Announcing-Googles-Open-Source-Software-Vulnerability-Rewards-Program%20.html

Peter Eckersley, co-founder of Let's Encrypt, passed away:
https://community.letsencrypt.org/t/peter-eckersley-may-his-memory-be-a-blessing/183854

The platform code for Notesnook has been opened:
https://blog.notesnook.com/notesnook-is-going-open-source/



Credits:
Full Circle Magazine
@fullcirclemag
Host: bardmoss@pm.me, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

15 September, 2022 06:00PM

Ubuntu Blog: Ubuntu Summit — Calling All Proposals

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/4090/Ubuntu-summit-social-call-for-abstracts.jpg" width="720" /> </noscript>

Calling All Talk/Workshop Proposals

In case you haven’t heard the good news, the Ubuntu Summit is a community-focused event taking place in Prague, Czech Republic from November 7–9th!

The Ubuntu Summit Organising Committee would like to bring your attention to our open Call For Proposals. Until September 30th, we are open to your abstract submissions for talks and workshops. (Don’t worry, your entire presentation does not need to be ready just yet!) If your abstract is accepted, we will offer to sponsor your attendance to join us in Prague, Czech Republic!

What kinds of abstracts are sought after?

Yours!

We want abstracts that showcase anything that has been part of your Ubuntu journey. Have you contributed to projects that improve the gaming experience on Ubuntu? Are you a professional designer who uses open source tools to be creative? Are you passionate about making robots that are running Ubuntu Core? Have you done something really neat with WSL? You get the idea.

In our experience, most people that are passionate about these topics tend to drive improvements in their ecosystems for the better. We want to hear about how you’ve done that and what challenges you face. 

Why should you submit an abstract?

The Ubuntu Summit will be a gathering of fabulous community members and companies alike. This is your chance to tell your open source story, in person, to a group of like-minded people! Besides the benefit of interacting with a community near and dear to your heart, there are some other flashy reasons why you should submit your proposal today:

  • It only costs a small amount of your time to submit an abstract in exchange for an exciting opportunity.
  • Ubuntu Summit is a live streamed event, and sessions are recorded. Tell the world about what you’ve been up to lately!
  • Podcasters, YouTubers, and Linux journalists may create content based on your talk.
  • You have the opportunity to present a talk or workshop to a very diverse audience.
  • Giving a public talk looks attractive on your resume!

What happens if your abstract is accepted?

If your abstract is accepted, you will be invited to join us in Prague to deliver your presentation and participate in the entirety of the Ubuntu Summit. Your travel, hotel, and meals are provided, courtesy of Canonical.

Your invitation will connect you with our travel agency to book your travel, and we will reserve a room for you at the event location, the Hilton in Prague.

How do you submit your abstract?

This all sounds so attractive, how can you pass up the opportunity to share and mingle with others?

Sign-up

Your Ubuntu One account is how you log into our events platform to register and submit an abstract. Whether or not you already have an account, clicking “Login” on our events platform will take you to Ubuntu One for sign-up and login.

Conference Registration

If you are ready to travel, regardless of whether you will speak at the event, make sure you register for in-person attendance. As we have limited seats, you will need to “Apply” for registration, but we will generally accept everyone until we reach capacity. If you are not sure yet, go ahead and join us remotely.

Submitting an Abstract

Once you’ve put some thought into your proposal, head over to the call for abstracts page and click on “Submit new abstract”. Here are some tips on putting together a solid proposal:

Title and Content: The Content section is the elevator pitch for us and those potentially interested in joining your talk. The title shows in the timetable that will be published closer to the event, and there will be an event-specific page with more background and information about yourself. Here are a few questions to think about in case you need help putting together your abstract:

  • The session title should get people interested in your topic. If you read it in a full schedule, would you click on it to learn more?
  • Provide background about the project or topic you’ll be presenting. Make sure it also appeals to people who don’t (yet) know much about the technology.
  • What is known and what is unknown about the topic?
  • What will you be talking about during the session? How did you solve the problem at hand?
  • What will participants learn by the end of the session?
  • How do people get involved once they’ve completed your session? 

Tracks: We’ve put together 6 tracks to categorise your session and give participants an idea of what to expect at the conference. Head to our tracks page to learn more and decide how your session fits in.

Authors and Bio: Add yourself, and anyone else presenting with you, as authors to your proposal. Other authors will need to sign in to our events platform at least once for you to find them in the search. The page about your session will show information about the presenters, so make sure you fill in a short bio and attach a picture to tell us who you all are.

[Screenshot: the abstract submission form on the events platform]

And that’s it! Complete the form and wait for a reply. You will hear from us on or before October 14th, 2022. Interested in staying up to date with general news about the Summit? Sign up to the Summit News.

Thank you so much for your submission and we look forward to hearing your amazing talk!

15 September, 2022 03:56PM

Podcast Ubuntu Portugal: E213 Lucas Lasota

While on holiday, we talked with Lucas Lasota, a Brazilian living in Germany and a lawyer by profession, who works on legal matters at the FSFE (Free Software Foundation Europe). We discussed legal questions in general, but also the topic of router freedom. Once again, a very interesting conversation about Ubuntu and other things… You know the drill: listen, subscribe and share!

Support

You can support the podcast using our Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different tiers depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay however much you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal licence. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

15 September, 2022 12:00AM

September 14, 2022

Ubuntu Blog: Should you use open-source databases?

You are not the only one asking this seemingly popular question! Several companies are torn between the rise in appeal of open-source databases and the undeniable challenges inherent to their adoption. Let’s explore the trends, the drivers and the challenges related to open-source database adoption.

The popularity of open-source databases

Several sources confirm the growing popularity of open-source databases. For example, the DB-Engines ranking shows that open-source databases have been overtaking commercial ones since early 2021 in terms of popularity.

[Chart: DB-Engines ranking of open-source vs commercial database popularity]

According to a 2022 StackOverflow survey, 7 out of the top 10 most used databases offer open-source editions of their products:

[Chart: 2022 StackOverflow survey of most used databases]

The top 4 most used databases are, indeed, all open source.

The adoption of open-source databases is set to grow. According to this Percona survey, nearly half of IT companies are planning to increase their adoption of open-source solutions in the upcoming years:

[Chart: Percona survey on plans to increase open-source adoption]

According to the same survey, nearly 90% of the respondents already use 2 open-source databases or more.

So what’s driving the increasing popularity of open-source databases?

Drivers for open-source databases adoption

Cost savings

Moving from a commercial database system to an open-source database can lower your Total Cost of Ownership. The cost savings come from a significant reduction in licence costs. For example, the following diagram (referenced in this article) shows costs 3 to 4 times lower for open-source databases than for closed-source ones.

[Chart: cost comparison of open-source and closed-source databases]

Avoiding vendor lock-in

Many companies consider data among their most valuable assets, so binding their future to the sole will of a single database provider is an uncomfortable situation for many.

Purchasing several closed-source database licences can help mitigate the risk of vendor lock-in. Yet, relying on open-source databases gives you additional guarantees:

  • Open-source databases nurture competition. There are, for example, dozens of providers of PostgreSQL-based solutions. If one of them fails, you will find an alternative quickly with low migration costs.
  • Open-source licences protect your organisation from sudden changes in provider licensing or policy. For example, OpenSearch came into existence after some companies expressed concerns about ElasticSearch’s move to the SSPL licence.

Avoiding vendor lock-in is the second most important driver (after cost savings) for adopting open source, as Percona’s findings show:

[Chart: Percona survey on drivers for open-source adoption]

The growing maturity of open-source databases

Maturity is another important factor driving adoption. Open-source databases have been battle-tested for a few decades already. MySQL, one of the most widely used open-source databases, dates back to 1995. PostgreSQL, another popular choice, dates back to 1987! So open-source databases have nothing to prove regarding successful track records.

Moreover, open-source databases have been successfully playing catch up with closed-source databases on feature sets. Oracle, for example, implemented partitioning in 1997. PostgreSQL and MySQL implemented similar functionalities in 2005 and 2008, respectively. The latter pattern is becoming more frequent as more companies contribute to open-source software.

Talent attraction and retention 

Open source matters to IT professionals for several reasons ranging from better employability to embracing open-source values (e.g., collaboration, freedom).

According to the 2021 open source job report, around 97% of the surveyed hiring managers agree that “hiring open source talent is a high priority”. According to the same report, 50% of surveyed IT professionals ranked the “ability to architect solutions based on open source software … as the most valuable skill”.

Now that we have an overview of the major drivers behind the increasing popularity of open-source databases, we will discuss some of the challenges related to their adoption in the next section.

Challenges to overcome for open-source database adoption

Lack of support

According to the Percona survey (mentioned above), “lack of support” is the first concern related to open-source databases:

[Chart: Percona survey on concerns about open-source databases]

When it comes to supporting open-source databases, you have two main options.

The first option is to build a qualified team of database professionals that can provide support internally to the other teams in the company. The support should not only include usual DBA tasks like performance optimisation and version upgrades. It should also cover other aspects like working with the open source community to roll out/backport fixes, running scans and building releases.

This first option, albeit viable, can be challenging for many as it requires hiring and retaining highly sought-after specialised profiles. According to the 2021 open source job report, 92% of “hiring managers report difficulty finding sufficient talent with open source skills”.

This option can become even more challenging if the company plans to use several open-source databases.

The second option is to buy support services from companies providing database solutions. The purchased services might range from providing security and bug fixes to managing the whole database infrastructure on your behalf.

Lack of integrated solutions

To run a reliable database deployment successfully, you need more than a database engine. You also need to test, use and maintain several additional tools and extensions to provide high availability, monitoring, alerting and backup for your installation.

Closed-source databases, like Oracle Database, provide these functionalities through tools shipped with the database engine (or tightly integrated with it). Most of these tools are covered by the purchased vendor support.

With open-source databases, like PostgreSQL, you need to test a few options (think of repmgr, Patroni, Stolon, PAF) with the rest of your set-up to ensure that they meet your requirements.

Basically, with closed-source databases you have fewer choices, but you get an integrated solution that covers most of your needs. With open-source databases, you need the expertise and time to select, test and maintain the additional tools required to cover your requirements.

Migration complexity

Migrating from an already used database to another solution often needs careful planning and testing. This is true for any migration between different database engines (open-source or not).

Even among the same family of databases (e.g., relational databases, document databases), you might need – for example – to re-write some of your queries to avoid performance degradation. Every engine implementation is unique and, therefore, will excel in some cases and underperform in others.
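As a toy illustration of this kind of rewrite (using in-memory SQLite purely as a stand-in engine; the schema and data are invented), here are two logically equivalent queries that different engines may execute very differently:

```python
import sqlite3

# In-memory SQLite as a stand-in engine; schema and data are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (10, 1), (11, 1);
""")

# Form 1: correlated subquery. Some engines re-evaluate it per row,
# which can degrade badly on large tables.
q1 = """SELECT name FROM customers c
        WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)"""

# Form 2: the same result expressed as a join, which other engines
# optimise far better. Both return customers who have orders.
q2 = """SELECT DISTINCT c.name FROM customers c
        JOIN orders o ON o.customer_id = c.id"""

r1 = sorted(con.execute(q1).fetchall())
r2 = sorted(con.execute(q2).fetchall())
assert r1 == r2 == [("ada",)]
```

Which form performs better depends entirely on the target engine's optimiser, which is why profiling after migration matters.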

Deciding on a migration strategy depends on several factors. Here is a list of some of the important ones:

  • Database size: the larger your databases, the more complex the migration can get
  • The outage constraints related to your business: how long of an outage can you afford per migration?
  • The dependencies between your applications and their databases. Can you migrate every database independently, or do you need to migrate them in batches of related databases?
  • The available infrastructure for the migration: do you need to re-use the existing infrastructure? Are you planning to combine the database migration with a change in the infrastructure?
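A rough sketch of how these factors interact, with entirely hypothetical database sizes, dependency groups and transfer rate: databases in the same dependency group migrate together, and each batch's outage estimate is driven by the amount of data it moves.

```python
from collections import defaultdict

# Hypothetical inventory: name -> (size in GB, dependency group).
databases = {
    "orders":    (500, "shop"),
    "catalog":   (120, "shop"),
    "analytics": (2000, "bi"),
    "hr":        (40, "hr"),
}
TRANSFER_GB_PER_HOUR = 100  # assumed sustained copy rate

# Databases that depend on each other migrate in the same batch.
batches = defaultdict(list)
for name, (size_gb, group) in databases.items():
    batches[group].append((name, size_gb))

# Estimated outage per batch: total data moved divided by throughput.
outage_hours = {
    group: sum(size for _, size in members) / TRANSFER_GB_PER_HOUR
    for group, members in batches.items()
}
print(outage_hours)  # {'shop': 6.2, 'bi': 20.0, 'hr': 0.4}
```

Real migrations add verification, dual-running and rollback time on top of the raw copy, but even a back-of-the-envelope model like this helps decide which batches fit in an affordable outage window.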

To summarise, lack of support, poor integration and migration complexity are some of the challenges organisations face when adopting an open-source database. Canonical offers a variety of solutions for organisations looking to overcome these challenges and get the most out of open-source in this space.

Canonical solutions for open-source databases

Expanded support and regulated environments

Canonical offers, through Ubuntu Pro and Ubuntu Advantage, up to 10 years of fixes for high and critical CVEs not only to Ubuntu itself but also to around 25,000 accompanying deb packages. The covered deb packages include popular open-source databases like MongoDB, Redis and PostgreSQL.

Moreover, Canonical helps organisations comply with a wide range of certifications and standards like DISA-STIG, FedRAMP and ISO27K.

Operators for open-source databases

Canonical provides a growing list of Juju-based operators to manage the lifecycle of your database clusters. 

Canonical’s database solutions are a suite of integrated open-source tools. They provide all the needed functionality to deploy, scale, back up, monitor and maintain your database deployments.

Our Juju operators go through a growing list of unit and integration tests to ensure high quality for the whole solution. 

Moreover, our operators work with bare-metal, virtual machines and Kubernetes. Juju can deploy your databases on the major cloud providers (e.g., AWS, Azure, GCP, Oracle). So, it will be up to you to decide what fits you best!

Canonical support

We are already using open-source databases in hundreds of our product deployments. We have gained extensive knowledge operating them in different environments. Our field engineering teams can help you:

  • Plan the migration of your data to the database of your choice
  • Design a database solution that meets your requirements
  • Manage the infrastructure for you

Contact us to speed up open-source database adoption in a scalable and secure way.

14 September, 2022 09:45AM

hackergotchi for Qubes

Qubes

Qubes Canary 032

We have published Qubes Canary 032. The text of this canary is reproduced below.

This canary and its accompanying signatures will always be available in the Qubes security pack (qubes-secpack).

View Qubes Canary 032 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/canaries/canary-032-2022.txt

Learn how to obtain and authenticate the qubes-secpack and all the signatures it contains:

https://www.qubes-os.org/security/pack/

View all past canaries:

https://www.qubes-os.org/security/canary/


                    ---===[ Qubes Canary 032 ]===---


Statements
-----------

The Qubes security team members who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is September 13, 2022.

2. There have been 84 Qubes security bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

       427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
   Project (e.g. to hand out the private signing keys or to introduce
   backdoors).

5. We plan to publish the next of these canary statements in the first
   fourteen days of December 2022. Special note should be taken if no new
   canary is published by that time or if the list of statements changes
   without plausible explanation.


Special announcements
----------------------

We plan to create a new Release Signing Key (RSK) [3] for Qubes OS 4.2.
Normally, we have only one RSK for each major release. However, for the
4.2 release, we will be using Qubes Builder version 2, which is a
complete rewrite of the Qubes Builder. Out of an abundance of caution,
we would like to isolate the build processes of the current stable 4.1
release and the upcoming 4.2 release from each other at the
cryptographic level in order to minimize the risk of a vulnerability in
one affecting the other. We are including this notice as a canary
special announcement since introducing a new RSK for a minor release is
an exception to our usual RSK management policy.


Disclaimers and notes
----------------------

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.

The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.


Proof of freshness
-------------------

Tue, 13 Sep 2022 02:47:47 +0000

Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
Poland's Prime Minister on Ukraine War and Energy Crisis
Habeck's Meltdown: Nuclear Energy Standby Proposal Has Germany's Greens Seeing Red
European Commissioner Gentiloni: "The Coming Winter Could Be One of the Worst in History"
Russian Meddling in the Balkans: "Over and Over, Putin Says Kosovo, Kosovo, Kosovo!"
Laos and the New Silk Road: The Train to Dependence on China

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Ukraine’s Sudden Gains Prompt New Questions for Commanders
Russian Critics Speak Out, Prompted by Ukraine Losses
King Charles Pays Tribute to Queen Elizabeth on a Day Steeped in Tradition
Oppressive Blackouts Force Lebanese to Change Rhythm of Life
Ukraine Claims More Ground in Northeast and South

Source: BBC News - World (https://feeds.bbci.co.uk/news/world/rss.xml)
Ukraine war: We retook 6,000 sq km from Russia in September, says Zelensky
Ukraine war: What will Russia's losses mean for Putin?
Ukraine war: A successful surprise attack - but danger still looms
Sweden election: Result could take days as vote too close to call
Taoiseach: Queen's death 'reminder to nurture UK-Ireland relations'

Source: Blockchain.info
00000000000000000002fb0e59c723277069b5389aa2df4b8ff6dc8d80da6ad4


Footnotes
----------

[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/

[3] https://www.qubes-os.org/security/verifying-signatures/#how-to-import-and-authenticate-release-signing-keys

--
The Qubes Security Team
https://www.qubes-os.org/security/
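The proof-of-freshness mechanism above can be sketched in a few lines: because the signed text embeds values nobody could have predicted in advance (recent news headlines and the latest Bitcoin block hash), any signature over that text necessarily post-dates those values. A minimal illustration of the idea, not the actual Qubes tooling:

```python
import hashlib

# Stand-ins for the unpredictable inputs quoted in the canary: headlines
# from the issue date and the latest Bitcoin block hash. Nobody can know
# these ahead of time, so a signature over text containing them cannot
# have been produced earlier.
proof_of_freshness = [
    "Headline from a major news feed on the canary date",
    "00000000000000000002fb0e59c723277069b5389aa2df4b8ff6dc8d80da6ad4",
]
canary_text = "statements...\n" + "\n".join(proof_of_freshness)

# The detached PGP signatures cover this exact text; altering or
# backdating it would change what was signed.
digest = hashlib.sha256(canary_text.encode()).hexdigest()
print(digest)
```

Freshness in one direction only: this shows the canary is no older than its quoted inputs, while the promised schedule of future canaries guards against it silently going stale.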

14 September, 2022 12:00AM

September 13, 2022

hackergotchi for VyOS

VyOS

The future of VyOS image signature verification

There's one thing about our releases that we introduced quietly and neglected to explain to those unfamiliar with it: minisign signatures. Let's discuss why we started using them in addition to GPG signatures and what we are going to do next. Read on for details!

13 September, 2022 03:50PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Home Assistant tutorial on Ubuntu Core

Install Home Assistant and create a motion-activated light automation with this tutorial

KeyValue
SummaryInstall Home Assistant and create an automation
CategoriesSmart Home, Ubuntu Core
Difficulty3
Authornathan.hart@canonical.com

Overview

In this tutorial, we’ll learn how to install Home Assistant on Ubuntu Core, then create a motion-activated light automation. This is a great starting point for a secure, open-source smart home. By the end, we’ll have learned the skills we need to create Home Assistant automations of our own with whatever smart home devices we wish.

Materials required:

  • Raspberry Pi 4
  • Power cable
  • Case (recommended)
  • SD card (at least 8GB recommended)
  • SD card reader (if not built into your computer)
  • Aeotec Z-Stick Gen 5+ Z-Wave USB stick
  • Aeotec MultiSensor 6
  • Aeotec LED Bulb 6 Multi-white
  • A monitor with an HDMI interface
  • A mini-HDMI cable
  • A USB keyboard
  • A USB power adapter
[Image: the hardware used in this tutorial]

Installing Ubuntu Core

Duration: 20 minutes

We’ll start by following the instructions here to install Ubuntu Core on our Raspberry Pi, making note of its IP address during the process.

Pairing the Z-Wave devices

Duration: 3 minutes

We can pair the bulb and multi-sensor to the Z-Wave stick without installing any software. The Z-Wave stick has a battery in it, so it doesn’t even need to be plugged in!

First, we’ll install the bulb in a light fixture, switched off. After pressing the button on the Z-Wave stick, we should see a blue light flashing around the button. Now, we’ll switch on the bulb, which will flash a few times to indicate it is searching for a Z-Wave host with which to pair. The two should automatically detect each other and pair.

Now, we’ll plug in the USB power cable included with the multi-sensor into the sensor and our USB power adapter. We’ll press the button on the Z-Wave stick again and confirm that we see the flashing blue light. There is a “tamper button” on a corner of the rear panel of the multi-sensor. We’ll press the tamper button, and the multi-sensor should pair with the Z-Wave stick automatically.

Installing the Home Assistant snap

Duration: 4 minutes

We can connect to the Raspberry Pi via SSH by opening a terminal on our computer and running the command:

ssh <username>@<ip-address>

Now, we’ll run the command:

snap install home-assistant-snap

This will install the latest version of the Home Assistant snap from the stable channel.

[Screenshot: installing the home-assistant-snap]

To support the Z-Wave stick, we will need to install a Z-Wave server as well. Fortunately, there is a snap for it!

We’ll run the command snap install zwavejs2mqtt, followed by

  • snap connect zwavejs2mqtt:raw-usb
  • snap connect zwavejs2mqtt:hardware-observe

These commands will give the Z-Wave server access to the USB port so it can talk to the Z-Wave stick. For further information, refer to the documentation here.

[Screenshot: installing and connecting the zwavejs2mqtt snap]

Linking Home Assistant and the Z-Wave server

Duration: 3 minutes

We’ll now start the Z-Wave service/daemon by running the command sudo zwavejs2mqtt.enable

Now, in a browser window, we can type in the IP address of the Pi, followed by the Z-Wave server port number, like this: http://<ip-address>:8091

[Screenshot: the zwavejs2mqtt web interface]

Now, we’ll click on settings and scroll to the Home Assistant section where we’ll enable WS server.

[Screenshot: enabling the WS server in the Home Assistant section]

Press SAVE, and we’ll then move to the Z-Wave section of the Settings page. There, we’ll disable the soft reset setting. This is necessary for the particular type of Z-Wave dongle we are using. Ensure as well that the serial port is listed as shown in the screenshot below.

[Screenshot: the Z-Wave settings, showing the serial port]

Now, let’s configure Home Assistant. To do this, we’ll open a new browser tab and type the IP address of the Pi and port 8123, like this: http://<ip-address>:8123

[Screenshot: the Home Assistant onboarding page]

The UI will prompt us to set our home location, units of choice, and usage metrics preferences. It may also discover some services on our local network to connect to.

[Screenshot: Home Assistant setup preferences]

We’ll skip these for now, and click FINISH.

We should now be able to see the Home Assistant home page. From here, we’ll click Configuration in the left menu, then Devices & Services.

[Screenshot: the Devices & Services page]

In the Integrations tab, we’ll now click on ADD INTEGRATION.

[Screenshot: the Integrations tab with the ADD INTEGRATION button]

In the New Integration interface, search for and click on Z-Wave JS, taking care to not use the deprecated Z-Wave integration.

[Screenshot: selecting the Z-Wave JS integration]

The URL field should be populated automatically, but if not, type in ws://localhost:3000. This comes from the previous setup we did in the Z-Wave server.

[Screenshot: the Z-Wave JS URL field]

We should now see the devices we previously paired to our Z-Wave stick! Set the area fields appropriately. In my case, I put Office in the Area field, since I am creating a motion-activated light in my office.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh4.googleusercontent.com/9Qj7GezRaVIbixcVDnQFqklyWPdCLhfZIJS2WWARg0aBBszlPozxdYvS69OWqmQ0gMKj0kDJwY1jqhg7ziAgXme-0hLVBBpyj0BUUaC3AMLR6oS4qM1yIR-rsb4DzGte4AjJKqzS" width="720" /> </noscript>

Creating the automation in Home Assistant

Duration: 4 minutes

From the Home Assistant homepage, we’ll now go to Configuration -> Automations & Scenes -> Add Automation. In the pop-up window, choose Select a Blueprint and pick Motion-activated Light from the dropdown.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/94Ihw0bEbDIDty0u-AtoVF1bsrLQJj7pN6Z2aBBNFrrpTIngqm-0JBoHIXuWah9Q4nt2lZVT7Y-5PI8u8nFD4dfjq8B6IZVah5e0G8OD4CHh0pYtWIyMCv8_DeqNxioEAaTc5Duc" width="720" /> </noscript>

We need to choose a name for the automation, something like Office Light should suffice. In the Motion Sensor dropdown, we should select our multi-sensor, which should appear automatically.

In the Light section, we’ll select Choose device and select our light bulb from the dropdown.

We’ll now set the wait time to how long the light should remain on with no motion detected, 120 seconds in my case.
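
The blueprint's core rule is simple: the light stays on as long as motion was seen within the last `wait` seconds. A minimal plain-Python sketch of that logic (not Home Assistant code, just the idea):

```python
def light_should_be_on(motion_times, now, wait=120):
    """Mimic the Motion-activated Light blueprint's rule:
    the light is on iff motion occurred within the last `wait` seconds.

    motion_times: timestamps (in seconds) of past motion events.
    """
    return any(now - t <= wait for t in motion_times if t <= now)

# Motion at t=0 and t=100, with a 120-second wait:
print(light_should_be_on([0, 100], now=150))  # True: last motion 50 s ago
print(light_should_be_on([0, 100], now=300))  # False: last motion 200 s ago
```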

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh3.googleusercontent.com/OGul_CR9Opxq1xy2-3ApgTENCD6Ki4mnqtEdyzrbegZ36QYtOmc3_gZM7Vv9UKMb-RppmalnULD55Wi3ZfXUh9faeT6vWyd85P1f4DLVsmLzXc0n6AKcdF7T6z7JPSstrTyEo6R7" width="720" /> </noscript>

Now, we’ll click SAVE and we’re done!

For ideas on what to try next, check out the community forums at Home Assistant, and let us know what you were able to create on Ubuntu Core in our discourse community.

13 September, 2022 03:36PM

Ubuntu Blog: IBM LinuxONE Emperor 4: Maximise the value of next generation IBM LinuxONE servers with Ubuntu 22.04.1

IBM has today announced the next generation of its enterprise-grade Linux server family, IBM LinuxONE. The LinuxONE Emperor 4 is IBM’s most performant, secure, sustainable and open Linux server to date.

As the world’s most powerful Linux-based server, LinuxONE Emperor 4 is a perfect match for Ubuntu, the leading Linux operating system. The latest Ubuntu Server release combines Canonical’s signature stability and price-performance with purpose-built LinuxONE hardware support, providing users with an unparalleled path to value on the new IBM server.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/9598/linuxone-emperor4-image-1.jpg" width="720" /> </noscript>

Powering sustainable Linux data centres

In the face of the global climate crisis, minimising data centre carbon footprint is a key priority for all responsible businesses. At the same time, with the COVID-19 pandemic leading to fewer people in offices and data centres, servers must be more robust than ever to ensure high availability.

IBM LinuxONE systems have always excelled in sustainability and reliability, and the new generation builds on these strengths even further by delivering significant performance improvements without increasing energy consumption.

Compared to the previous generation LinuxONE III LT1, the new LinuxONE Emperor 4 offers:

  • 50% more cache
  • 9% increased per-core performance
  • 17% higher total system capacity
  • 25% more processor capacity per drawer

For businesses not already using IBM LinuxONE servers, consolidating common server workloads onto LinuxONE Emperor 4 can reduce energy consumption by 75% and data centre floor space by 50%. 
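
To put those percentages in concrete terms, here is a back-of-envelope calculation. The baseline figures (100 MWh/year, 200 m²) are hypothetical; only the 75% and 50% reductions come from IBM's claim above.

```python
def consolidated(baseline_kwh, baseline_m2, energy_cut=0.75, space_cut=0.50):
    """Apply the quoted reductions (75% energy, 50% floor space) to a baseline."""
    return baseline_kwh * (1 - energy_cut), baseline_m2 * (1 - space_cut)

# Hypothetical existing estate: 100,000 kWh/year and 200 m² of floor space
kwh, m2 = consolidated(100_000, 200)
print(kwh, m2)  # 25000.0 100.0
```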

These performance and sustainability improvements have been achieved without compromising LinuxONE’s industry-leading security, scalability, and ease of maintenance, making LinuxONE Emperor 4 an ideal choice for mission-critical Linux workloads of all sizes.

Scalability is especially crucial for optimising resource utilisation and energy consumption. With LinuxONE Emperor 4, users can seamlessly scale up, scale out, flex capacity on demand, and reallocate resources to align with evolving priorities. Capacity can be increased with minimal energy impact by simply turning on unused cores.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/9eff/linuxone-emperor4-image-3.jpg" width="720" /> </noscript>

Utilising new capabilities

Earlier this year, IBM z16 launched with an array of powerful new features, including quantum cryptography and AI inferencing. LinuxONE Emperor 4 brings these capabilities to the LinuxONE server family, enabling users to secure their infrastructure against potential future threats, benefit from real-time AI-driven insights, and more.

Taking advantage of these new hardware features requires operating system support, which is where Ubuntu comes in. Building on the long-standing partnership between IBM and Canonical, Ubuntu Server 22.04 LTS was designed in parallel with IBM z16 to support the new features, and the new LinuxONE Emperor 4 system can make immediate use of this support too.

Since then, Canonical has worked continuously to improve IBM hardware support across cryptography, compression, networking, AI, and machine, system, and container management. As a result, the recently released Ubuntu 22.04.1 update provides out-of-the-box support for the full scope of LinuxONE Emperor 4 capabilities, ensuring that users can make the most of the next-generation IBM platform from day one.

Ubuntu Server 22.04.1 LTS can be flexibly installed in a LinuxONE Emperor 4 LPAR (classic or DPM systems), as an IBM z/VM guest, as a KVM virtual machine, and in different container environments such as LXD, Docker or Kubernetes.

Ubuntu further complements LinuxONE by giving users easy access to a vast library of open source applications and solutions. What’s more, server functionality can be augmented with products and services from the IBM Hyper Protect family of offerings in the IBM Cloud.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/a689/linuxone-emperor4-image-4.jpg" width="720" /> </noscript>

Cost-effective support

Ubuntu Advantage for Infrastructure is Canonical’s comprehensive enterprise support subscription. Whereas most vendors price support entirely on a per-core basis, Canonical offers a per-drawer model. Given that LinuxONE Emperor 4 includes up to 200 customer cores, the per-drawer pricing approach with Ubuntu Advantage for Infrastructure represents a uniquely cost-effective support option.
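
A quick illustration of why per-drawer pricing wins at this core density. The prices below ($500/core/year, $20,000/drawer/year) are invented for the example; only the 200-cores-per-drawer figure comes from the text above.

```python
import math

def yearly_support_cost(cores, per_core_price=None, per_drawer_price=None,
                        cores_per_drawer=200):
    """Compare per-core vs per-drawer support pricing (prices hypothetical)."""
    if per_core_price is not None:
        return cores * per_core_price
    drawers = math.ceil(cores / cores_per_drawer)
    return drawers * per_drawer_price

# 200 cores at a hypothetical $500/core vs one drawer at a hypothetical $20,000
print(yearly_support_cost(200, per_core_price=500))       # 100000
print(yearly_support_cost(200, per_drawer_price=20_000))  # 20000
```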

For existing LinuxONE and Ubuntu Advantage for Infrastructure customers, the support subscription remains the same when adopting LinuxONE Emperor 4, so joint customers can keep their data centres secure and highly available without extra hassle or excessive licence fees.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/8808/linuxone-emperor4-image-5.jpg" width="720" /> </noscript>

Ubuntu Server 22.04.1 LTS and LinuxONE Emperor 4

Pairing Ubuntu with the new generation of IBM LinuxONE servers creates an ideal system not only for today’s IT, but also for the future.

To learn more about LinuxONE Emperor 4 visit ibm.com/products/linuxone-emperor-4

Ubuntu 22.04.1 LTS is available now. Download it here.

13 September, 2022 03:29PM

Ubuntu Blog: Ubuntu at SIGGRAPH 2022: What’s new in the world of Linux and VFX

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/8b26/Untitled-design-19.png" width="720" /> </noscript>

Canonical attended its first SIGGRAPH event, a significant step towards working more closely with the VFX and Media & Entertainment community. We have built a dedicated team to support Linux in this industry, and SIGGRAPH gave us the chance to have one-on-one conversations with ISVs, studios, and artists to learn what we need to do next.

This year Canonical became a proud member of the Academy Software Foundation. We believe this is a great first step towards equipping the creative community to easily do what they do best while running on Ubuntu. Check out the ASWF’s post announcing the exciting news here!

Highlights from the Linux in VFX Report

When CentOS reached end of life, the community needed a new reference platform. Ubuntu is a close contender, but the Linux distro of choice in the VFX report is RHEL. But do not be dismayed!

“Ubuntu is an excellent Linux distribution with many compelling benefits, so a secondary recommendation is that in the longer term all software vendors should consider providing equal support for both RHEL and Ubuntu,” says the report. Rocky Linux and AlmaLinux were also recommended because of the speed of migration, but Ubuntu is a strong contender, particularly because we offer 10 years of support, which surpasses all of the recommended community distros.

Check out the 2021 usage and 2022 plans from the community from the Studio Platform Survey Report:

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/BoUFz5FH040yGV1EAOopz97jRcklvynM-X1bRqx-WvlmhfmbHhplehmOdTQg5JdKLFo7vlA8CAicyeIn6mC4JE87ow15WzbhNxh5As4Ux0nBcRSm0WUfNRew7XElJ1lLPjj_GWpv_AT5hU_TH8Idf1qa2OXBByDNn5XzTkno-sXzUIcFcyK8KaoqDg" width="720" /> </noscript>
<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh6.googleusercontent.com/hRZ8FTwi9bNmyzhLvF4Zj1R_9O70FdO0EW3H_PYYYET1JFZ7np6PQd5dyDmcZetLIvp7zs0Qu1s65jHrujAW_f0L1OZA8kIeym8y4ltH6iadBnX5Jy1gYSc1VPnz_XctQB95W0JVUYkxVNnCaOX-YQPWhtTPlAUcyBBmKeYKnv7xrRW-O2dD_LOUXw" width="720" /> </noscript>

Take a look at the full report here

Learn more about Canonical’s Desktop offering on our website!

Open Source Initiatives

VFX is going more open source with 4 exciting new projects from the ASWF:

  • ASWF formed a working group to investigate the state of playback and review systems, and to explore the opportunity to create a common set of media tools to drive the creative review process throughout production. This working group went on to form the Open Review Initiative project, which is currently in the sandbox stage.
  • Foundry’s OpenAssetIO is the newest ASWF-hosted project. Developed by Foundry, OpenAssetIO aims to give artists the ability to find and share deeply connected assets, regardless of how or where they’re stored. It is an open-source interoperability standard between tools and asset management systems that reduces the integration effort and maintenance overhead of content creation pipelines. According to the company, OpenAssetIO standardizes scalable asset-centric workflows by enabling tools and asset management systems to freely communicate with one another. 
  • Initially created by Pixar Animation Studios, OpenTimelineIO (OTIO) is an Open Source API and interchange format that facilitates collaboration and communication of editorial data and timeline information between a studio’s Story, Editorial, and Production departments all the way through Post-Production. OTIO makes it easier to build tools that use editorial timeline information, filling a gap in film production pipelines that were previously underserved by similar, proprietary technologies. It supports clips, timing, tracks, transitions, markers, and metadata in an API that is easy for studios to integrate with their tools and for vendors to integrate with their software. Use cases include tracking shot length changes, providing context shots in dailies, communicating shots added or removed, conforming new renders into a cut, and dealing with picture-in-picture shots.
  • DPEL is a library of digital assets – 3D scenes, digital cinema footage, etc. – that demonstrate the scale and complexity of modern feature film production, including computer graphics, visual effects, and animation. Curated by the Academy Software Foundation, these assets are available free of charge to researchers and developers of both open source and commercial projects, to test, demonstrate, and inspire their ideas.

VFX on Ubuntu

Whilst some key apps remain a challenge, we met plenty of big studios already using Ubuntu, and Ubuntu was demoed at a number of booths.

VFX tools on Ubuntu:

  • Blender – with support
  • Unity
  • Unreal
  • DaVinci Resolve

We have a task force working on bringing more tools to Ubuntu soon so watch this space. Look out for case studies soon…

What can you do?

Connect with our team and tell us what you need and what your company’s priorities are. Talk to your ISVs about those priorities as well, so that Ubuntu can be your platform of choice.

Get in touch with us to find out more about how we can enable your VFX studio with Ubuntu and all its great enterprise management features.


13 September, 2022 01:59PM

hackergotchi for Pardus

Pardus

New Updates Released for Pardus 21 and 19

New updates have been released for Pardus 19 and 21. To receive the changes, simply keep your Pardus installation up to date. The updates will arrive as a notification on your existing Pardus 19 or 21 system. You can update your system from the Updates menu of the Pardus Software Center (Pardus Yazılım Merkezi) application.

Major Changes in Pardus 19

  • The default web browser, Firefox, was upgraded to version 91.13.
  • The default e-mail client, Thunderbird, was upgraded to version 91.13.
  • The kernel was upgraded to version 4.19.0-21.
  • Security updates were released.
  • Some third-party and corporate applications were updated to their latest versions.
  • Updates comprising more than 50 packages and patches were delivered to installed systems.
  • More than 500 packages were updated in the repository.

Major Changes in Pardus 21

  • The default web browser, Firefox, was upgraded to version 91.13.
  • The default office suite, LibreOffice, was updated.
  • The kernel was upgraded to version 5.10.0-18.
  • Security updates were released.
  • Some third-party and corporate applications were updated to their latest versions.
  • Updates comprising more than 100 packages and patches were delivered to installed systems.
  • More than 1000 packages were updated in the repository.

Updated ISOs are now being published monthly for Pardus 19 and weekly for Pardus 21.
Users who want to download a current ISO can find the latest Pardus releases here.

Pardus 19 updated packages:
Package Name New Version Old Version
apache2-bin 2.4.38-3+deb10u8 2.4.38-3+deb10u7
base-files 11pardus19.5.4 11pardus19.5.3
cups 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-client 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-common 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-core-drivers 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-daemon 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-ipp-utils 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-ppdc 2.2.10-6+deb10u6 2.2.10-6+deb10u5
cups-server-common 2.2.10-6+deb10u6 2.2.10-6+deb10u5
distro-info-data 0.41+deb10u5 0.41+deb10u4
firefox-esr 91.13.0esr-1~deb10u1 91.8.0esr-1~deb10u1
firefox-esr-l10n-tr 91.13.0esr-1~deb10u1 91.8.0esr-1~deb10u1
gir1.2-rsvg-2.0 2.44.10-2.1+deb10u3 2.44.10-2.1
gnome-orca 3.30.1-2 3.30.1-1
grub2-common 2.06-3~deb10u1pardus1 2.02+dfsg1-20+deb10u4pardus1
grub-common 2.06-3~deb10u1pardus1 2.02+dfsg1-20+deb10u4pardus1
grub-pc 2.06-3~deb10u1pardus1 2.02+dfsg1-20+deb10u4pardus1
grub-pc-bin 2.06-3~deb10u1pardus1 2.02+dfsg1-20+deb10u4pardus1
krb5-locales 1.17-3+deb10u4 1.17-3+deb10u3
libfreetype6 2.9.1-3+deb10u3 2.9.1-3+deb10u2
libfribidi0 1.0.5-3.1+deb10u2 1.0.5-3.1+deb10u1
libgssapi-krb5-2 1.17-3+deb10u4 1.17-3+deb10u3
libk5crypto3 1.17-3+deb10u4 1.17-3+deb10u3
libkrb5-3 1.17-3+deb10u4 1.17-3+deb10u3
libkrb5support0 1.17-3+deb10u4 1.17-3+deb10u3
libnet-ssleay-perl 1.85-2+deb10u1 1.85-2+b1
libqt5core5a 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
libqt5dbus5 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
libqt5gui5 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
libqt5network5 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
libqt5widgets5 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
librsvg2-2 2.44.10-2.1+deb10u3 2.44.10-2.1
librsvg2-common 2.44.10-2.1+deb10u3 2.44.10-2.1
libxslt1.1 1.1.32-2.2~deb10u2 1.1.32-2.2~deb10u1
linux-cpupower 4.19.249-2 4.19.235-1
linux-image-amd64 4.19+105+deb10u16 4.19+105+deb10u15
orca 3.30.1-2 3.30.1-1
pardus-package-installer 0.5.0~Beta2 0.5.0~Beta1
pardus-software 0.3.0 0.1.0
publicsuffix 20220811.1734-0+deb10u1 20211109.1735-0+deb10u1
qt5-gtk-platformtheme 5.11.3+dfsg1-1+deb10u5 5.11.3+dfsg1-1+deb10u4
rsyslog 8.1901.0-1+deb10u2 8.1901.0-1+deb10u1
tzdata 2021a-0+deb10u6 2021a-0+deb10u3
zlib1g 1:1.2.11.dfsg-1+deb10u2 1:1.2.11.dfsg-1+deb10u1
Pardus 21 updated packages:
Package Name New Version Old Version
avahi-daemon 0.8-5+deb11u1 0.8-5
base-files 12pardus21.3.1 12pardus21.3
dpkg 1.20.12 1.20.11
firefox-esr-l10n-tr 91.13.0esr-1~deb11u1 91.11.0esr-1~deb11u1
firefox-esr 91.13.0esr-1~deb11u1 91.11.0esr-1~deb11u1
fonts-opensymbol 2:102.11+LibO7.0.4-4+deb11u3 2:102.11+LibO7.0.4-4+deb11u1
gir1.2-ayatanaappindicator3-0.1 0.5.5-2+deb11u2 0.5.5-2
gir1.2-gdkpixbuf-2.0 2.42.2+dfsg-1+deb11u1 2.42.2+dfsg-1
gir1.2-javascriptcoregtk-4.0 2.36.7-1~deb11u1 2.36.4-1~deb11u1
gir1.2-webkit2-4.0 2.36.7-1~deb11u1 2.36.4-1~deb11u1
grub2-common 2.06-3~deb11u1pardus1 2.04-20pardus1
grub-common 2.06-3~deb11u1pardus1 2.04-20pardus1
grub-pc-bin 2.06-3~deb11u1pardus1 2.04-20pardus1
grub-pc 2.06-3~deb11u1pardus1 2.04-20pardus1
gstreamer1.0-gtk3 1.18.4-2+deb11u1 1.18.4-2
gstreamer1.0-plugins-good 1.18.4-2+deb11u1 1.18.4-2
gstreamer1.0-pulseaudio 1.18.4-2+deb11u1 1.18.4-2
krb5-locales 1.18.3-6+deb11u2 1.18.3-6+deb11u1
libavahi-client3 0.8-5+deb11u1 0.8-5
libavahi-common3 0.8-5+deb11u1 0.8-5
libavahi-common-data 0.8-5+deb11u1 0.8-5
libavahi-core7 0.8-5+deb11u1 0.8-5
libavahi-glib1 0.8-5+deb11u1 0.8-5
libavahi-gobject0 0.8-5+deb11u1 0.8-5
libayatana-appindicator3-1 0.5.5-2+deb11u2 0.5.5-2
libc6 2.31-13+deb11u4 2.31-13+deb11u3
libc-bin 2.31-13+deb11u4 2.31-13+deb11u3
libc-l10n 2.31-13+deb11u4 2.31-13+deb11u3
libcurl3-gnutls 7.74.0-1.3+deb11u3 7.74.0-1.3+deb11u1
libcurl4 7.74.0-1.3+deb11u3 7.74.0-1.3+deb11u1
libdpkg-perl 1.20.12 1.20.11
libgdk-pixbuf-2.0-0 2.42.2+dfsg-1+deb11u1 2.42.2+dfsg-1
libgdk-pixbuf2.0-bin 2.42.2+dfsg-1+deb11u1 2.42.2+dfsg-1
libgdk-pixbuf2.0-common 2.42.2+dfsg-1+deb11u1 2.42.2+dfsg-1
libgnutls30 3.7.1-5+deb11u2 3.7.1-5+deb11u1
libgssapi-krb5-2 1.18.3-6+deb11u2 1.18.3-6+deb11u1
libhttp-daemon-perl 6.12-1+deb11u1 6.12-1
libjavascriptcoregtk-4.0-18 2.36.7-1~deb11u1 2.36.4-1~deb11u1
libjuh-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libjurt-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libk5crypto3 1.18.3-6+deb11u2 1.18.3-6+deb11u1
libkrb5-3 1.18.3-6+deb11u2 1.18.3-6+deb11u1
libkrb5support0 1.18.3-6+deb11u2 1.18.3-6+deb11u1
libldb2 2:2.2.3-2~deb11u2 2:2.2.3-2~deb11u1
liblibreoffice-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libnss-myhostname 247.3-7+deb11u1 247.3-7
libnss-systemd 247.3-7+deb11u1 247.3-7
libpam-systemd 247.3-7+deb11u1 247.3-7
libpcre2-16-0 10.36-2+deb11u1 10.36-2
libpcre2-8-0 10.36-2+deb11u1 10.36-2
libpoppler102 20.09.0-3.1+deb11u1 20.09.0-3.1
libpoppler-cpp0v5 20.09.0-3.1+deb11u1 20.09.0-3.1
libpoppler-glib8 20.09.0-3.1+deb11u1 20.09.0-3.1
libpq5 13.8-0+deb11u1 13.7-0+deb11u1
libreoffice-base-core 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-base-drivers 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-base 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-calc 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-common 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-core 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-draw 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-gnome 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-gtk3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-impress 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-java-common 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-l10n-tr 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-math 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-nlpsolver 0.9+LibO7.0.4-4+deb11u3 0.9+LibO7.0.4-4+deb11u1
libreoffice-report-builder-bin 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-report-builder 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-script-provider-bsh 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-script-provider-js 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-script-provider-python 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-sdbc-firebird 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-sdbc-hsqldb 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-sdbc-mysql 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-sdbc-postgresql 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-style-colibre 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-style-elementary 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice-wiki-publisher 1.2.0+LibO7.0.4-4+deb11u3 1.2.0+LibO7.0.4-4+deb11u1
libreoffice-writer 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libreoffice 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libridl-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libsmbclient 2:4.13.13+dfsg-1~deb11u5 2:4.13.13+dfsg-1~deb11u4
libsnmp40 5.9+dfsg-4+deb11u1 5.9+dfsg-3+b1
libsnmp-base 5.9+dfsg-4+deb11u1 5.9+dfsg-3
libsystemd0 247.3-7+deb11u1 247.3-7
libtirpc3 1.3.1-1+deb11u1 1.3.1-1
libtirpc-common 1.3.1-1+deb11u1 1.3.1-1
libudev1 247.3-7+deb11u1 247.3-7
libuno-cppu3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libuno-cppuhelpergcc3-3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libunoil-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libunoloader-java 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libuno-purpenvhelpergcc3-3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libuno-sal3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libuno-salhelpergcc3-3 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
libwbclient0 2:4.13.13+dfsg-1~deb11u5 2:4.13.13+dfsg-1~deb11u4
libwebkit2gtk-4.0-37 2.36.7-1~deb11u1 2.36.4-1~deb11u1
libxnvctrl0 470.141.03-1~deb11u1 470.103.01-1~deb11u1
libxslt1.1 1.1.34-4+deb11u1 1.1.34-4
linux-image-5.10.0-16-amd64 5.10.127-2 5.10.127-1
linux-image-amd64 5.10.140-1 5.10.127-1
locales 2.31-13+deb11u4 2.31-13+deb11u3
nvidia-tesla-470-alternative 470.141.03-1~deb11u1 470.129.06-6~deb11u1
nvidia-tesla-470-kernel-support 470.141.03-1~deb11u1 470.129.06-6~deb11u1
openjdk-11-jre-headless 11.0.16+8-1~deb11u1 11.0.15+10-1~deb11u1
openjdk-11-jre 11.0.16+8-1~deb11u1 11.0.15+10-1~deb11u1
pardus-locales 0.3.1 0.2.0
pardus-welcome 0.3.0 0.2.2
poppler-utils 20.09.0-3.1+deb11u1 20.09.0-3.1
publicsuffix 20220811.1734-0+deb11u1 20211207.1025-0+deb11u1
python3-ldb 2:2.2.3-2~deb11u2 2:2.2.3-2~deb11u1
python3-uno 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
samba-libs 2:4.13.13+dfsg-1~deb11u5 2:4.13.13+dfsg-1~deb11u4
systemd-sysv 247.3-7+deb11u1 247.3-7
systemd-timesyncd 247.3-7+deb11u1 247.3-7
systemd 247.3-7+deb11u1 247.3-7
tzdata 2021a-1+deb11u5 2021a-1+deb11u4
udev 247.3-7+deb11u1 247.3-7
uno-libs-private 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
unzip 6.0-26+deb11u1 6.0-26
ure 1:7.0.4-4+deb11u3 1:7.0.4-4+deb11u1
webkit2gtk-driver 2.36.7-1~deb11u1 2.36.4-1~deb11u1
xserver-common 2:1.20.11-1+deb11u2 2:1.20.11-1+deb11u1
xserver-xephyr 2:1.20.11-1+deb11u2 2:1.20.11-1+deb11u1
xserver-xorg-core 2:1.20.11-1+deb11u2 2:1.20.11-1+deb11u1
xserver-xorg-legacy 2:1.20.11-1+deb11u2 2:1.20.11-1+deb11u1
xwayland 2:1.20.11-1+deb11u2 2:1.20.11-1+deb11u1
zlib1g 1:1.2.11.dfsg-2+deb11u2 1:1.2.11.dfsg-2+deb11u1

13 September, 2022 11:49AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Charmed Kubeflow 1.6: what’s new?

Kubeflow 1.6 was released on September 7, and Charmed Kubeflow 1.6 (Canonical’s distribution) followed shortly after, as it tracks the same roadmap. Charmed Kubeflow 1.6 introduces a new version of Kubeflow Pipelines as well as model training enhancements. Read our official press release.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/153d/1_Capture.jpg" width="720" /> </noscript>

Kubeflow pipelines: a better user experience

Kubeflow Pipelines is an end-to-end orchestration platform that helps users build and deploy reusable multi-step ML workflows. The biggest improvement in this release is the alpha of KFP v2, which brings a better user experience and new features that save time and improve efficiency.

Metadata is a project used to better track and manage machine learning workflows. It provides information about runs, models, datasets and data artefacts, enabling users to monitor and understand their artificial intelligence projects. In previous versions of Kubeflow, however, machine learning engineers had to configure it manually to benefit from this feature, which was often challenging. Moreover, they could not log additional metadata or use any metadata in downstream components. Kubeflow 1.6 replaces the metadata project’s asynchronous process implementation, offering more assurance that metadata is captured and recorded regardless of the deployment step. The metadata is now sourced from the pipeline execution cache, and KFP concepts are used to capture it instead of the Pod spec.
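
To make the idea concrete, here is a toy sketch (this is not the KFP API, and the names are invented for illustration) of an execution cache that doubles as a metadata store: a cache hit returns both the recorded output and its metadata, so lineage survives even when a step is not re-executed.

```python
import hashlib
import json

class PipelineCache:
    """Toy model of a pipeline execution cache that also records metadata.

    Entries are keyed by (step name, inputs); a hit returns the stored
    output *and* metadata without re-running the step.
    """
    def __init__(self):
        self.store = {}

    def _key(self, step, inputs):
        payload = json.dumps([step, inputs], sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def run(self, step_name, fn, inputs):
        key = self._key(step_name, inputs)
        if key not in self.store:
            output = fn(**inputs)  # cache miss: actually execute the step
            self.store[key] = {
                "output": output,
                "metadata": {"step": step_name, "inputs": inputs, "cached": False},
            }
            return self.store[key]
        entry = dict(self.store[key])  # cache hit: serve output + metadata
        entry["metadata"] = {**entry["metadata"], "cached": True}
        return entry

cache = PipelineCache()
first = cache.run("normalize", lambda xs: [x / 10 for x in xs], {"xs": [10, 20]})
second = cache.run("normalize", lambda xs: [x / 10 for x in xs], {"xs": [10, 20]})
print(first["output"], second["metadata"]["cached"])  # [1.0, 2.0] True
```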


The latest version of Kubeflow also improves the correlation between inputs and outputs, making pipelines more intuitive for those unfamiliar with the rules they need to follow when writing their own. Lastly, changes have been made to component authoring, allowing engineers and data scientists to develop faster. YAML components will continue to be supported in the future, but some parts of the code, such as ContainerOp, will need to change.

Watch our livestream and learn more about Kubeflow Pipelines!

Hyperparameter support in Katib

Katib is a Kubernetes-native project dedicated to automated machine learning (AutoML). Katib supports hyperparameter tuning, an important feature for data scientists who want to control parameters in the learning process. Katib is agnostic of the AI framework and lets developers write in the programming language of their choice. Population-based training (PBT), which provides optimised modelling and a production-ready fit for models, is available in the latest Kubeflow version. Kubeflow’s distributed training operator combines PBT with various frameworks such as TensorFlow, PyTorch or the MPI operator. Model serving was also part of Kubeflow’s roadmap: in the latest version, a new Model Spec was introduced to the inference service, aiming to specify new models.
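
To illustrate what hyperparameter tuning does, here is a framework-agnostic random search sketch. This is not Katib’s API (Katib runs such searches as Kubernetes Experiments); the objective and search space below are toy examples.

```python
import random

def random_search(objective, space, trials=20, seed=42):
    """Sample configurations from `space` and keep the one
    that minimises `objective` — the simplest AutoML search strategy."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective: loss is minimised at lr=0.1, momentum=0.9
loss = lambda c: (c["lr"] - 0.1) ** 2 + (c["momentum"] - 0.9) ** 2
cfg, score = random_search(loss, {"lr": (0.0, 1.0), "momentum": (0.0, 1.0)})
print(round(score, 3))
```

In Katib the same idea is expressed declaratively: you describe the search space and objective in an Experiment spec, and Katib schedules the trials on Kubernetes.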

CI/CD for Charmed Kubeflow

Charmed Kubeflow is Canonical’s enterprise-ready distribution of Kubeflow, an open-source machine learning toolkit designed for use with Kubernetes. Charmed Kubeflow is composed of a bundle of charms, which are Kubernetes operators that automate maintenance and security operations. 

One of Canonical’s objectives was to align Charmed Kubeflow releases with the upstream release. Thus, the engineering team invested time in automating the CI/CD pipelines to enable faster operations. This gives users the choice of the latest stable release or the latest edge channel, the latter offering early access to the team’s most recent technical updates.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://ubuntu.com/wp-content/uploads/3c95/3_Capture.jpg" width="720" /> </noscript>

Learn more about Charmed Kubeflow

Upgrade to the latest version of Charmed Kubeflow by following our guide and contact us if you have any questions.

Follow our tutorials and have fun with Charmed Kubeflow.

Read our whitepaper and get started with AI.


13 September, 2022 09:40AM