January 27, 2023

hackergotchi for Univention Corporate Server

Univention Corporate Server

Looking back at 16th Univention Summit 2023: Open New Deal as the Basis for Sustainable and Open Digitalization

More than 630 IT experts and decision makers from politics, administration, education and the software industry met last week at our Univention Summit 2023 to “rock IT”. The musically inspired motto “Rock’n’Roles’n’Rights – WE ROCK IT!” referred to our focus this year on the further development and expansion of the Univention roles and rights model, but also to the digital turnaround that we want to implement this year together with our partners and customers.

Those were the Topics

Accordingly, the numerous presentations, workshops and discussions at the Summit repeatedly focused on IT trends, successful open source projects and the many possibilities offered by digitally sovereign IT infrastructures, such as the sovereign administration workstation and the dPhoenixSuite, which we are developing together with Dataport and a number of open source partners.

Both the presentations on January 17 at the Metropol Theater and the workshops and technology barcamp on the following day were very well attended, not least thanks to the exciting topics our guest speakers and partners brought to the agenda – and, most importantly, thanks to the 31 partners who once again made the Summit the platform for networking and knowledge transfer on the topics of Open Software and Digital Sovereignty. Well-known IT providers and long-time partners of ours, such as agorum, Open-Xchange, ownCloud, Bechtle, Dataport, OpenTalk and Apple, aroused great interest among the attendees with their topics and encouraged personal exchanges of experience following the exciting workshops and presentations.


Univention Summit 2023: Foyer

Sustainable Digitalization Rocks!

In his opening keynote, Peter Ganten (Founder and CEO of Univention) introduced the idea of an “Open New Deal” and explained the significance of a digital turnaround that must secure freedom, the ability to shape the digital world, and the provision of vital basic IT infrastructure. Particularly in a tense global political situation, in which politics and the globalized economy are increasingly driven by the interests of states, corporations, or even individuals, it is important to form an ecosystem of stakeholders that creates an IT infrastructure that is available and reliable for all, based on clearly defined rules.

During industrialization 150 years ago, such reliable infrastructure was already the basis and engine of global prosperity and success. At the same time, Peter Ganten rejected protectionist approaches: global challenges can only be solved together, on the basis of openness, digital sovereignty and standards. An important component of an open and at the same time integrated IT infrastructure is central identity management, which enables secure access to a wide range of IT services and resources – a topic on which he and the entire Univention team have been working intensively for years and will continue to do so.

Unlike the current energy crisis, which was at least cushioned by gas storage facilities, stockpiling is not feasible in the case of cloud services. We cannot stockpile IT; it must be constantly produced anew, as our CEO pointed out once again.


Univention Summit 2023: Keynote Peter Ganten

Our technology partners were also optimistic about the development of an open and integrated open source solution stack as an alternative to the closed platforms that dominate the market today, and presented various initiatives of their own:

  • Andreas Gauger, CEO of our long-time partner Open-Xchange, presented the new META API “OSATRA” (“One Single API To Read (them) All”) in his keynote.
  • Heinlein presented its open source video conferencing solution OpenTalk, a smart alternative to proprietary applications from U.S. vendors, and announced deeper integration with UCS at the Summit.
  • At its booth, Dataport demonstrated the latest version of dPhoenixSuite, which is currently being piloted as an open source alternative to Microsoft365 for schools in Baden-Württemberg.

Keynote Opentalk

Positive Response to Presentations & Workshops on the 1st Summit Day

This year’s Summit program featured a wide variety of topics and speakers. In the Education Track, school boards presented their digitalization projects. For example, Oliver Bouwer and Meik Hansen, IT managers at the Senate of Education in Bremen, shared the challenges they have encountered with the digitalization of Bremen’s schools. Malte Matthiesen reported on the progress made in the central administration of school servers in the city of Flensburg, the district of Hameln-Pyrmont presented its concept for an IT infrastructure that integrates the city and the district, and the city of Lübeck presented the successful use of Open-Xchange as a unified mail solution for teachers and students.


Univention Summit 2023: Education Track, Bildungssenat Bremen

In the parallel technology track, our colleagues from development gave a review and outlook on the technical development of UCS, highlighting the plans for the new UCS roles and rights concept, the integration of Keycloak and the measures we are taking to make our products fit for use in clouds based on Kubernetes. Another very interesting presentation was given by Matthias Waack from Schwäbisch Hall, a pioneer in the use of open source solutions in public administration, who made it very clear that the only sensible answer to the growing shortage of IT specialists must be the introduction of centrally manageable IT infrastructures.


Univention Summit 2023: Matthias Waack

Following in the afternoon was the Public Sector Track. Dr. Julia Pohle of the Social Science Research Center Berlin explored the myth of Internet fragmentation. The panelists tackled an equally important issue. Andreas Reckert-Lodde from the Center for Digital Sovereignty of the Federal Ministry of the Interior, Sirko Scheffler from DATABUND, Dr. Ralf Resch, Chairman of VITAKO, Silke Tessmann-Storch, new member of the Dataport Board of Directors, and our CEO Peter Ganten discussed the need for open and standardized interfaces to connect specialized processes to the new administrative cloud. All panelists agreed that the multi-cloud strategy envisioned by the German government can only become a reality if interfaces are mandated that allow all vendors to connect solutions. Otherwise, a quasi-standard based on proprietary platforms would lead to hidden vendor lock-in.


Univention Summit Panel

The presentation program concluded with a keynote by Matthias Spielkamp of AlgorithmWatch, who called for more transparency and traceability in the decision-making of AI systems, and the presentation of Infinite Scale, the new version of ownCloud, which is said to bring a revolutionary concept for a new data strategy.

The evening program continued with swing classics, evergreens, and pop culture from the Bremen-based swing band Trio Royal, and attendees took the opportunity to make new contacts and deepen old ones.

And Matthias Kirschner took a playful look at the topic of independence in IT when he read from his book “Ada & Zangemann – A Fairy Tale about Software, Skateboards and Raspberry Ice Cream”, and it quickly became clear to the audience that the inventor Zangemann should no longer be the only one who controls all the devices. It was a nice way to end a very eventful day.


Univention Summit 2023: Band Trio Royal
Univention Summit 2023: Buchlesung

Constructive exchange in workshops & at the Technology Barcamp on January 18

The 2nd day of the Summit was also a day of “rocking”. During the workshops at the Haus der Wissenschaft and the Technology Barcamp at the Konsul-Hackfeld-Haus, customers and partners met in small groups to exchange ideas, learn about new solutions and discuss actively with Univention software developers. Of particular interest were the buzzword “Kubernetes” and the question of how UCS can run in Kubernetes in the future, or even be a Kubernetes node itself.

Docker Compose vs. Kubernetes on a single UCS was also discussed. Of course, we listened carefully to the concerns expressed by some attendees about increased product complexity due to Kubernetes, and to their desire for transparent communication and thorough education through training and UCS documentation.


Technik-Barcamp
Workshops

Univention Continues to Grow

The growth figures announced by our CEO Peter Ganten show how successfully we have developed. In contrast to many international IT companies, we at Univention have recorded an average revenue growth of more than 35 percent over the past three years. The number of users of the Univention platform has also grown by more than 30 percent, and our team at Univention continues to grow too – we are always looking for open source enthusiasts who want to break new ground together for sustainable digitalization.

To continue this growth, we are consistently expanding our Univention Corporate Server (UCS) platform and UCS@school, which is optimized for the education sector, into an open platform for broad deployment and the integration of other solutions. The focus here is on the migration to Keycloak as identity service provider, the operation of cloud infrastructures based on Kubernetes and the expansion of the roles and rights model for differentiated access to resources and services.

Watch the full keynote, panel discussion and recordings of user presentations from the Technology, Public Sector and Education tracks on our YouTube channel (only available in German).

For more impressions of Univention Summit 2023, as well as information on speakers, partners and the complete agenda, please visit our Summit website.


27 January, 2023 02:44PM by Bianca Scheidereiter

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: ROS 2 Foxy and ROS Melodic EOL – Keep your robots up and running

ROS Melodic EOL is around the corner. With more than 1,000 repositories in rosdistro, Melodic is among the top 3 ROS distributions (alongside Indigo and Kinetic). Paired with Ubuntu 18.04 LTS, Melodic was widely used by many pioneering companies that deployed the first ROS devices to market. By the end of April, both distributions will reach EOL.

ROS 2 Foxy is also approaching EOL this year. While the number of devices deployed with Foxy is low, Foxy’s EOL will also impact some of us. In this blog, we will cover the implications for robot makers and the different options you have to keep your machines running smoothly.


Implications of ROS Melodic EOL and ROS 2 Foxy EOL 

From April 2023, ROS Melodic and ROS 2 Foxy will stop getting any maintenance. For companies with deployed devices, this could become a major obstacle. For those using ROS Melodic, Ubuntu 18.04 will also reach the end of standard support.

Most organisations using robots need to comply with cybersecurity requirements. Using the company network for any device running software that is no longer supported – from a simple laptop to a robot – is a breach of these requirements. As such, robotics users will reasonably demand that their suppliers update their devices. The consequences of failing to do so vary, but they cannot be taken lightly.


Take for instance ROS Kinetic, which entered EOL in April 2020. Canonical has released more than 1,400 CVE patches for our ESM customers using ROS Kinetic and Ubuntu 16.04 ESM. Companies that have not updated their fleet of devices are missing patches for critical, high and medium Common Vulnerabilities and Exposures (CVEs). This makes their devices and users a target.

What can you do if you’re impacted by ROS 2 Foxy and ROS Melodic EOL?

Companies with deployed devices in the market should migrate to a supported distribution. 

If you have major dependencies that rely on ROS 1, then the most reasonable step is to stay on ROS 1. The latest LTS distribution for ROS 1 is Noetic. But please keep in mind:

  • Some of the ROS packages you rely on may not yet be supported in newer distributions of ROS.
  • Some APIs in your current configuration might depend on specific versions of the applications and libraries of Ubuntu Xenial. For example, Python 2.7 is no longer supported by ROS 1 Noetic or ROS 2 (for more information, please read about transitioning to Python 3).

You can also move to ROS 2. If you are already using Foxy, you know that ROS 2 provides several benefits over ROS 1. However, migrating is not a straightforward process. ROS 2 comes with a learning curve, a different build environment, more C++11, a higher number of built-in functions and support for Python 3 only. Here you can find a complete migration guide for ROS 2 Humble.


Keep in mind that you will also need to address your migration path for Ubuntu. We advise you to have a look at Ubuntu Core. While Ubuntu Desktop and Server can be used for robotics, Ubuntu Core is optimised for these kinds of devices. With out-of-the-box features such as OTA update control, low-touch device recovery, strict confinement, secure boot, and more, Ubuntu Core makes it easier to deploy and manage devices. It also comes with a longer window of standard support: 10 years. That’s ideal for robots that have to be out there in the field for a while. 

The migration shouldn’t be painful. By bundling all your dependencies using snaps, you can move from your Desktop or Server system to Core.

Can’t migrate? Get Canonical’s ROS ESM

Sometimes migration is not straightforward. Migrating a vast code base of robots takes time and resources. Dealing with dependency changes can be troublesome. It also implies redoing the whole test phase and re-validating stability. Besides, simply recalling devices from the field can be a major undertaking for your organisation. Robots might also operate in mission-critical systems where downtime creates major losses.

While the aim is to migrate eventually, you might need some time. ESM gives you 5 extra years beyond ROS Melodic EOL and ROS 2 Foxy EOL.

Canonical’s ROS ESM (short for Extended Security Maintenance) addresses these issues. As part of the Ubuntu Pro subscription, ROS ESM gives you up to 5 more years of security maintenance for ROS 1 Kinetic and Melodic, and ROS 2 Foxy. 

ROS ESM covers REP-142 ‘ros_base’ for ROS 1 Kinetic and Melodic and its equivalent ‘ros core’ for ROS 2 Foxy. This includes packages such as python-catkin, python-rosdep, ros-${ROS_DISTRO}-ros-core…, ros-${ROS_DISTRO}-genmsg/rosbag…, per supported ROS distribution.

Get ROS ESM


Beyond ROS 2 Foxy and ROS Melodic EOL

While ROS 2 Foxy is not a widely adopted ROS distribution, ROS Melodic is. Paired with Ubuntu 18.04, Melodic is built on top of more than 2,300 packages from the Ubuntu Main repository. You will find packages such as Python, OpenSSL, OpenVPN, network-manager, sed, curl, systemd, udev, bash, OpenSSH, login, libc… – packages that will stop getting standard support together with Ubuntu 18.04.

Beyond Ubuntu Main, companies also leverage packages from Ubuntu Universe. More than 23,000 packages are in this category – for example, Boost, Qt, OpenCV, PCL, python-(argcomplete, opencv, pybind11, png…), cython, eigen, GTK, FFMPEG… Some of these packages will also reach EOL. For the whole list of what’s included in each repository, you can visit the Ubuntu Packages Search tool.

Ubuntu Pro now covers the whole stack – ROS, Main and Universe – to provide companies with a definitive solution, all under a single subscription for up to 5 extra years.


ROS ESM is part of Ubuntu Pro

ESM is part of the Ubuntu Pro subscription. Besides ESM, customers can also enjoy the other services included in the subscription, such as Livepatch and Landscape.

For more information about the Ubuntu Pro subscription, visit the webpage and the service description.

Consuming ROS ESM updates 

You can consume only security-related updates, or both security updates and bug fixes when you purchase Ubuntu Pro. This user introduction document has all you need to get started. In essence, you do not have to make changes to your current ROS application. ROS ESM simply enables a new PPA for you to consume updates. This reduces the downtime or resources needed to migrate to ROS ESM. 
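For illustration, this is roughly what enabling ROS ESM looks like with the Ubuntu Pro client (a sketch; the token placeholder is hypothetical and comes from your Ubuntu Pro account):

sudo pro attach <your-token>     # attach the machine to your subscription
sudo pro enable ros              # security-related updates only
sudo pro enable ros-updates      # optionally, bug fixes as well
pro status                       # verify which services are active

After enabling the service, updates flow in through the usual apt upgrade workflow.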

For more information, read our FAQ blog about ROS ESM.

Moving forward

ROS ESM extends the support window of ROS and Ubuntu by an extra 5 years. While ROS ESM allows you to stay compliant for the time being, you can also explore your migration path.

For instance, you can use tools like the ROS 1 – ROS 2 bridge. This can help you develop new features on ROS 2 while keeping your current ROS 1 code base with the security offered by ESM. It gives you some latitude to plan your next move.
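As a hedged sketch, running the dynamic bridge typically looks like the following (assuming the ros-foxy-ros1-bridge package and default install paths; adjust the ROS 1 distribution to your setup):

source /opt/ros/noetic/setup.bash       # ROS 1 environment
source /opt/ros/foxy/setup.bash         # ROS 2 environment
roscore &                               # a ROS 1 master must be running
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics

Topics and services published on either side are then visible to the other, which lets new ROS 2 nodes talk to the legacy ROS 1 stack during the transition.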


Summary: ROS 2 Foxy and ROS Melodic EOL implications

As ROS Melodic, ROS 2 Foxy and Ubuntu 18.04 reach EOL in April 2023, companies that have deployed devices with these releases need to take action. Staying on an EOL distribution is a security risk that device manufacturers can’t afford. While migration to a supported LTS is the main recommendation, we understand this is not possible for everyone. If that’s the case for you, you can rely on ESM to get extra time.
Get in touch if you need advice on the best path for your company.

27 January, 2023 02:04PM

Ubuntu Blog: Jammin’ with Jami – Freedom, privacy, snaps

About a year ago, the Advocacy team established first contact with Savoir-Faire Linux, the free software consultancy behind Jami, a privacy-oriented VoIP and conference platform. The Jami developers were interested in some sort of collaboration with us, and in shedding fresh light on their product. Intrigued by their technology and business model, we featured Jami in the Snap Store. Since then, Jami has seen a steady 3x growth in its active user base. Last week, we met again to talk about Jami, their experience with snaps, future plans, and more.

My interview peer was Sébastien Blin, a free software consultant and a core Jami developer working with the company since 2017.

Tell us a bit more about your workplace …

Savoir-Faire Linux is a free software consultancy working in several areas of expertise (embedded systems, infrastructure, software development, R&D), based in Canada and France.

What is Jami?

Jami is a free (GPLv3) communication platform available on all major platforms (GNU/Linux, Windows, macOS, iOS, Android, Android TV). What makes it unique and interesting is that it uses distributed P2P technology, based on OpenDHT. It provides the standard set of telephony and VoIP features like SIP video calls and video conferencing, screen sharing, text messaging and groups for up to 8 people, and more. You also get unlimited file transfers and plugin customization. The connections between peers are encrypted.

Jami’s broader message is to enable communities around the world to communicate, work, educate, and share their knowledge, or simply catch up with friends and loved ones privately, securely, and in freedom by building upon the latest relevant free protocols and standards.


How do you see Jami faring against well-known and established VoIP tools?

As highlighted above, Jami is mainly based on standard protocols (SIP, ICE, TLS-encrypted sockets, X.509 certificate chains) and its goal is to be resilient against censorship by design. This is why Jami uses P2P technologies and can provide off-grid communication. With all the current world events, it’s also important for us that Jami can be used during critical events (Internet shutdowns, massive censorship, natural disasters) or even in disconnected areas (on a boat, smart agriculture, educational classes, etc.). It’s possible to run Jami without using any infrastructure provided by Savoir-Faire Linux.

Moreover, the use of P2P sockets essentially means that the company does not limit capabilities like size for file transfers or bitrate for audio and video quality. Users can rely on whatever their connections can support.

In 2022, the company focused a lot on enabling group discussion with a technology we call Swarm. This allows us to synchronize messages across all devices by syncing a tree of messages.

Can you tell us more about what a day-in-life for Jami people (developers) looks like?

Jami is a small team. Because Savoir-Faire Linux is a free software consultancy, the size of the active team can and will vary a lot from time to time. Overall, we have a small core team, with several other developers being “loaned” to the project from time to time. The core team works on big features, while the other developers work on smaller tasks. We try to follow a roadmap, and usually do standup meetings at 10am every morning to check who is blocked and how we can help move the work along. The rest of the day is usually dedicated to developing, reviewing and testing code. Every night, two big test suites run to check various scenarios – and also make sure nothing is broken.


For my part, I generally review the results of the tests every morning, and take time to check feedback from the community. As we’re a small team, we rely on the community, and thanks to their help and ideas, it is possible to improve the user experience. We are also grateful for all the people helping us with translations, and for the community members contributing patches, documentation and bug reports that help us analyze and check new usage scenarios.

How do snaps feature in your business?

From our perspective, for the last several versions of Ubuntu, we find the snaps quite pervasive and sort of mandatory to the user experience. Because we want our users’ journey to be as easy as possible, it made perfect sense to publish Jami through the Snap Store, and make it possible for our users to discover a sandboxed version of the application.

On the upside, this also allows us to provide an up-to-date version for all GNU/Linux distributions supported by snap, reducing the cost of packaging and maintenance. Jami uses a lot of up-to-date libraries, with custom patches in a number of them, so it may not be easy for our team to maintain all packages using the traditional distribution methods. Even if the community does a lot of great stuff regarding this point, the overhead is too big.

Are you happy with snaps?

The snap technology has its advantages and drawbacks. Overall, we’re pretty happy because it gives us a convenient way to provide our users with an up-to-date version on a lot of distributions, even though we maintain our repositories for Ubuntu, Debian, Fedora, and openSUSE.

This is highly important for us, because, as communications protocols are evolving, users require an up-to-date version of software at all times, in order not to downgrade the quality of the communication on both sides. That’s generally why other communication platforms may not connect if the user isn’t running the latest version.

For developers, it is quite easy to package applications as snaps, using the snapcraft.yaml file. The final artifact is sandboxed, and the users have the ability to grant and remove permissions as they want.
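For readers unfamiliar with the format, a minimal snapcraft.yaml looks roughly like this (an illustrative sketch, not Jami’s actual packaging recipe; the app name, part and interface list are hypothetical):

name: hello-voip
base: core22
version: '1.0'
summary: Minimal example snap
description: |
  Illustrative recipe showing the overall structure of a snapcraft.yaml.
grade: stable
confinement: strict

apps:
  hello-voip:
    command: bin/hello-voip
    plugs: [network, camera, audio-playback]   # permissions the user can toggle

parts:
  hello-voip:
    plugin: cmake
    source: .

The plugs line is where the sandbox permissions are declared; users can later connect or disconnect them with snap connect and snap disconnect.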

In the future, we really hope the snap ecosystem will have a better, more visible permission mechanism. For instance, Android is pretty great on this point, where applications can ask the user to access the camera or filesystem via a popup, instead of failing sometimes in a weird way.

For people who experience problems during packaging, the snapcraft.io forum has a great community that will provide answers rather quickly, most of the time. However, the snap packaging process may sometimes be quite complex, with a lot of layers that can cause bugs. With Jami, we encountered two main problems during development.

  • The first one was that, for a while, snap users would not get video playback – only a static blue screen. This was due to the use of the GNOME platform snap, with a rather specific bug. We resolved the issue by switching to the core22 base in our product.
  • The second problem was that, since we use Docker to build snaps, and due to the fast pace of the snap ecosystem’s evolution, we encountered connectivity issues with the Dockerfile we use to build snap packages. The Python module inside the container was broken for weeks.

Those two bugs are now fixed, and apart from a small maintenance overhead, everything is working fine!

What does the future hold for Jami?

Regarding snap, we would like to use the Snap Store build infrastructure, as this will allow us to publish the application for more platforms and architectures. This work has started but not yet been completed.

For Jami itself, we have a lot of new things coming. We want to increase the limit of members in a swarm, we are working on a Web client compatible with WebRTC, we’re improving the audio and video quality in calls, adding new plugins (we’re working on an offline Transcript Plugin, and much more). Finally, we want to increase our presence out there.

Thank you for featuring us! If there’s something we can help with, feel free to reach out to us.

=====

Editor’s note:

I would like to thank Sébastien for his time, and invite developers out there to get in touch regarding their cool software experiences. If you’re interested in sharing your story about Linux and snaps with us, please email snap-advocacy at canonical dot com, and we will work out together what can be done.

Main page image credits: Photo by Duy Pham on Unsplash.

27 January, 2023 12:51PM

hackergotchi for Qubes

Qubes

XSAs released on 2023-01-25

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is not affected. Therefore, no user action is required.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • (none)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-425 (Qubes 4.1 does not use the affected Xen version; denial-of-service only)

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

27 January, 2023 12:00AM

January 26, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How Ubuntu Pro delivers enhanced security and manageability for Linux Desktop users


At the end of last year, Canonical announced that Ubuntu Pro, our expanded security maintenance and compliance subscription, was available for data centers and desktops as a public beta. This week, Ubuntu Pro entered general availability, giving Ubuntu users access to extra hardening and security patching.

If you’re a developer using Ansible, Apache Tomcat, Apache Zookeeper, Docker, Nagios, Node.js, phpMyAdmin, Puppet or Python 2, you’ll want to read on. The subscription expands security coverage for critical, high and medium Common Vulnerabilities and Exposures (CVEs) to these and thousands of other applications and toolchains in Ubuntu’s repositories.

Ubuntu Pro Desktop replaces Ubuntu Advantage Desktop to provide a comprehensive single subscription for enterprise users. It is available free for up to five machines, extending to 50 for official Ubuntu community members.

Power to developers, peace of mind for IT

Ubuntu Desktop is the preferred operating system for experienced developers and the most popular Linux OS in the enterprise. Our certified hardware program also means that it’s easy to procure workstations from your existing OEM vendor with Ubuntu pre-installed. With expanded security patching, integrated management tooling and support for additional hardening and certification, Ubuntu Pro Desktop is designed to give IT professionals peace of mind. Enterprises can drive secure Linux adoption, while developers can use their preferred open-source operating system.

Read our new guide for best practices:

Adopting a secure enterprise Linux desktop

Ubuntu Pro Desktop feature highlights

Expanded Security Maintenance

Every Ubuntu LTS release comes with 5 years of free security patching for Ubuntu main, the primary repository for the Ubuntu OS. With Ubuntu Pro that support is expanded to 10 years and now covers Ubuntu Universe which includes over 23,000 software packages.

Canonical Livepatch, which allows users to apply kernel patches while the system runs, is also included in Ubuntu Pro. 

Enterprise-grade management with Landscape and Active Directory


Ubuntu Pro subscriptions include Landscape, Canonical’s monitoring and management tool for all versions of Ubuntu that offers software updates, configuration management, policy compliance and permission control for your entire physical and virtual fleet.

Integrating Ubuntu Desktop into an existing Active Directory architecture can be an automated and effortless process when using System Security Services Daemon (SSSD). Ubuntu Pro users receive access to additional Active Directory policies like native Group Policy Object support, custom script execution and privilege management.

These features align the Active Directory management experience of Ubuntu as closely as possible to the one available in Windows, flattening the learning curve for system administrators who securely manage a fleet of Ubuntu desktops at scale.

Read more about our Active Directory integration

Compliance, certification and hardening

Ubuntu Pro delivers security and compliance for even the most sensitive workloads with FIPS 140-2 certified modules, CIS hardening and Common Criteria EAL2 certification available.

Learn more about Ubuntu’s security practices

Optional Weekday or 24/7 support tiers

With Ubuntu Pro Desktop, users can choose from two support tiers: weekday or 24×7 coverage. These deliver direct access to our world-class enterprise open-source support team through our web portal, knowledge base or via phone.

Free for small scale use and no price change for existing Ubuntu Advantage Desktop users

Ubuntu Pro is free for personal and small-scale commercial use on up to five machines. If you are already taking advantage of your free personal token, then your entitlements will be upgraded automatically. See below for how to activate expanded security maintenance on your desktop.

Existing Ubuntu Advantage Desktop customers will also be entitled to this upgrade at no extra cost. Going forward, Ubuntu Pro Desktop will retain the same price as the current Ubuntu Advantage Desktop subscription.

For Ubuntu Pro pricing on other platforms, please refer to the store page.

Activate your Ubuntu Pro benefits

Ubuntu Pro can be attached and enabled via the command line or the Software & Updates application. Follow our new tutorial for more details.
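On the command line, the flow looks roughly like this (a sketch using the Ubuntu Pro client shipped with recent releases; the token placeholder is hypothetical):

sudo pro attach <your-token>     # link the machine to your subscription
sudo pro enable esm-apps         # expanded security for Universe packages
sudo pro enable livepatch        # kernel patching without reboots
pro status                       # confirm which services are enabled

The same services can be toggled graphically in Software & Updates, as described below.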

New users can register for their free token or sign up from this screen in the Software & Updates application.

Note: Please ensure your system is up to date to access this new menu. It may take a few days for this change to be rolled out to everyone.


Existing users can also use this interface to enable their new Expanded Security Maintenance coverage by toggling the ESM Apps option.


When ESM updates are available, they will be visible in the update manager notification.

Learn more about Ubuntu Desktop for organisations

To hear more about how to onboard Ubuntu Desktop in a Windows centric organisation, check out our LinkedIn webinar from November 2022.

Adopting Linux securely in a Windows-centric enterprise

Additional resources

26 January, 2023 01:35PM

Ubuntu Blog: Ubuntu Pro enters general availability

Ubuntu Pro, Canonical’s comprehensive subscription for secure open source and compliance, is now generally available. Ubuntu Pro, released in beta in October last year, helps teams get timely CVE patches, harden their systems at scale and remain compliant with regimes such as FedRAMP, HIPAA and PCI-DSS.

The subscription expands Canonical’s ten-year security coverage and optional technical support to an additional 23,000 packages beyond the main operating system. It is ideal for organisations looking to improve their security posture, not just for the Main repository of Ubuntu, but for thousands of open-source packages and toolchains.

Timely patching for your favourite open-source toolchains

Canonical has an 18-year track record of timely security updates for the main Ubuntu OS, with critical CVEs patched in less than 24 hours on average. Ubuntu Pro’s coverage spans critical, high and selected medium CVEs for thousands of applications and toolchains, including Ansible, Apache Tomcat, Apache Zookeeper, Docker, Nagios, Node.js, phpMyAdmin, Puppet, PowerDNS, Python, Redis, Rust, WordPress, and more.

Ubuntu Pro is available for every Ubuntu LTS from 16.04 LTS onwards. It is already in production for large-scale customers offering global services. The beta release was welcomed by the likes of NVIDIA, Google, Acquia, VMware and LaunchDarkly. Since the beta announcement in October 2022, tens of thousands of Ubuntu users have signed up for the service.

“I manage my own compute cluster leveraging MAAS and other Canonical tools to support my research. The open source security patches delivered through Ubuntu Pro give my team peace of mind, and ensure my servers are secure. Canonical is continuously delivering timely CVE patches covering a broad portfolio of open source applications for the entire ten-year lifetime of an Ubuntu LTS. This brings much needed stability and compliance”, said David A Gutman, MD PhD, Associate Professor of Pathology, Emory University School of Medicine.

A single subscription for security and compliance 

Besides providing timely security patches, Ubuntu Pro includes tools for compliance management in regulated and audited environments. Ubuntu Security Guide (USG) enables best-in-class hardening and compliance standards such as CIS benchmarks and DISA-STIG profiles.

Ubuntu Pro users can access FIPS-certified cryptographic packages necessary for all Federal Government agencies as well as organisations operating under compliance regimes like FedRAMP, HIPAA, and PCI-DSS.

System management and automated patching at scale are facilitated through Landscape. Ubuntu Pro also includes Livepatch, which patches critical and high-severity kernel vulnerabilities at runtime to minimise the need for unplanned reboots of your Ubuntu estate.

Subscription types and pricing

The standard Ubuntu Pro subscription covers the full set of security updates for all packages in the Ubuntu Main and Universe repositories – this is the most suitable choice in most cases. Ubuntu Pro costs $25 per year per workstation or $500 per year per server and is available directly from ubuntu.com/pro/subscribe with a 30-day free trial.

Ubuntu Pro is also available through our public cloud partners’ marketplaces – AWS, Azure and Google Cloud. It is offered on a per-hour basis, billed directly by the cloud, and priced at approximately 3.5% of the average underlying compute cost.

An Ubuntu Pro (Infra-only) subscription (formerly offered as Ubuntu Advantage for Infrastructure) covers the base OS and the private cloud components needed for large-scale bare-metal deployments, but excludes the new broader application coverage. It is useful for organisations building private clouds that use other guest operating systems for applications.

A free tier is available for personal and small-scale commercial use on up to 5 machines. Official Ubuntu community members can benefit from Ubuntu Pro on up to 50 machines. To get a token, log in with your existing Ubuntu One account or create a free account.

Book a call to find the right subscription for your use case

Optional support

Ubuntu Pro can be combined with up to 24×7 enterprise-grade support coverage for the Ubuntu operating system, MAAS, LXD, Kubernetes, OpenStack or Ceph storage, and now also a range of open source applications.

Get started today

To explore the benefits of Ubuntu Pro, discover the scope of coverage and additional security updates available, follow these steps.

Additional resources

Open source security: best practices for early detection and risk mitigation | webinar

Ubuntu Pro | product page

Ubuntu Pro | plans and pricing

Buy Ubuntu Pro

26 January, 2023 12:50PM

Balint Reczey: How to speed up your next build with Firebuild?

Firebuild logo

TL;DR: Just prefix your build command (or any command) with firebuild:

firebuild <build command>

OK, but how does it work?

Firebuild intercepts all processes started by the command to cache their outputs. Next time when the command or any of its descendant commands is executed with the same parameters, inputs and environment, the outputs are replayed (the command is shortcut) from the cache instead of running the command again.

This is similar to how ccache and other compiler-specific caches work, but firebuild can shortcut any deterministic command, not only a specific list of compilers. Since the inputs of each command are determined at run time, firebuild does not need a complete, maintained dependency graph in the source tree like Bazel does. It can work with any build system that does not implement its own caching mechanism.

Determinism of commands is detected at run time by preloading libfirebuild.so and interposing standard library calls and syscalls. If the command’s and all its descendants’ inputs are available when the command starts, and all outputs can be calculated from the inputs, then the command can be shortcut; otherwise it will be executed again. The interception comes with a 5-10% overhead, but rebuilds can be 5-20 times faster, or even more, depending on the changes between the builds.

Can I try it?

It is already available in Debian Unstable and Testing and in Ubuntu’s development release, and the latest stable version is back-ported to supported Ubuntu releases via a PPA.
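A quick sketch of the installation (the PPA name is an assumption – check the project page for the exact source on stable Ubuntu releases):

# Debian testing/unstable or the Ubuntu development release:
sudo apt install firebuild

# Supported Ubuntu releases, via the back-port PPA (name assumed):
sudo add-apt-repository ppa:firebuild/stable
sudo apt update && sudo apt install firebuild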

How can I analyze my builds with firebuild?

Firebuild can generate an HTML report showing each command’s contribution to the build time. Below are the “before” and “after” reports of json4s, a Scala project. The command call graphs (lower ones) show that java (scalac) took 99% of the original build. Since the scalac invocations are shortcut (cutting the second build’s time to less than 2% of the first one) they don’t even show up in the accelerated second build’s call graph. What’s left to be executed again in the second run are env, perl, make and a few simple commands.

The upper graphs are the process trees, with expandable nodes (in blue) also showing which command invocations were shortcut (green). Clicking on a node shows details of the command and the reason if it was not shortcut.
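In practice, producing such a report can look like this (a sketch; check firebuild --help for the exact report option and output file name):

firebuild make        # first build, populates the cache
firebuild make        # repeated build, deterministic steps are replayed
firebuild -r make     # build again and also write an HTML report

The report file can then be opened in any browser to explore the process tree and per-command timings.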

Could I accelerate my project more?

Firebuild works best for builds with CPU-intensive processes and comes with defaults to not cache very quick commands, such as sh, grep, sed, etc., because caching those would take cache space and shortcutting them may not speed up the build that much. They can still be shortcut with their parent command. Firebuild’s strength is that it can find shortcutting points in the process tree automatically, e.g. from sh -c 'bash -c "sh -c echo Hello World!"' bash would be shortcut, but none of the sh commands would be cached. In typical builds there are many such commands from the skip_cache list. Caching those commands with firebuild -o 'processes.skip_cache = []' can improve acceleration and make the reports smaller.

Firebuild also supports several debug flags and -d proc helps finding reasons for not shortcutting some commands:

...
FIREBUILD: Command "/usr/bin/make" can't be short-cut due to: Executable set to be not shortcut, {ExecedProcess 1329.2, running, "make -f debian/rules build", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/sort" can't be short-cut due to: Process read from inherited fd , {ExecedProcess 4161.1, running, "sort", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/find" can't be short-cut due to: fstatfs() family operating on fds is not supported, {ExecedProcess 1360.1, running, "find -mindepth 1 ...
...

make, ninja and other incremental build tool binaries are not shortcut because they compare the timestamps of files, but at least they are fast, and every build step they perform can still be shortcut. Ideally, the slower build steps that could not be shortcut can be re-implemented in ways that can be shortcut, by avoiding tools performing unsupported operations.

I hope those tools help speed up your build with very little effort, but if not, and you find something to fix or improve in firebuild itself, please report it or just leave feedback!

Happy speeding, but not on public roads! 😉

26 January, 2023 09:06AM

Podcast Ubuntu Portugal: E231 Agenda Cheia!

It’s confirmed! For the Ubuntu Community in Portugal, January is a very lively month. In this episode we dedicated most of our time to a recap of the Aveiro meetup. Next week perhaps Sintra, and after that Lisbon… there is no shortage of parties and fun. Looking at the beginning of February, it seems history is repeating itself, but we shall see! You know the drill: listen, subscribe and share!

Support

You can support the podcast using our Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and its open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

26 January, 2023 12:00AM

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Grub disables os-prober by default from Debian Bookworm

Users upgrading to BunsenLabs Boron (experimentally available soon) will discover that os-prober is no longer run by default to discover other operating systems available on the machine.

Reported here:
https://bugs.debian.org/cgi-bin/bugrepo … ug=1013797
https://bugs.debian.org/cgi-bin/bugrepo … ug=1009336

This change was made by grub upstream due to security considerations:
https://git.savannah.gnu.org/cgit/grub. … c66f517666

Users who need to make other operating systems available can re-enable os-prober by adding the following line to /etc/default/grub:

GRUB_DISABLE_OS_PROBER=false

From grub 2.06-4 the line is already there, commented out, so it just needs to be uncommented.
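Either way, the change only takes effect after regenerating the grub configuration:

sudo update-grub

On the next boot, the grub menu should list the other operating systems found by os-prober.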

26 January, 2023 12:00AM

January 25, 2023

hackergotchi for Volumio

Volumio

The Volumio Primo is now Roon Ready

We are pleased to announce that the Volumio Primo is now Roon Ready

 


25 January, 2023 04:12PM by Graham

hackergotchi for Mobian

Mobian

Off-topic: The importance of efficient tooling

You’re probably aware of the existence of the RISC-V architecture and the hopes tied to its rise, which means we should prepare for future (yet hypothetical) RISC-V Linux-based mobile devices.

The first step in this journey being the ability to build binary packages for this architecture, we purchased a StarFive VisionFive v2 SBC with the goal to turn it into a (Debian-powered, of course!) GitLab CI runner. And that’s exactly where the tools developed to build Mobian came in really handy!

Debian on RISC-V

64-bit RISC-V (or riscv64 in Debian terms) is not an architecture officially supported by upstream Debian. However, the vast majority of packages are built and available to riscv64 devices thanks to the Debian Ports initiative.

As an unofficial port, the riscv64 packages live in a separate archive, providing only the unstable and experimental suites. Those details aside, all the usual Debian tools work in the exact same way they do on other architectures.

With this in mind, why not use the tools we’ve been developing for Mobian and create an “almost-pure” Debian image for the VisionFive v2?

The board

The VisionFive 2 is a RISC-V SBC based on StarFive’s JH7110 SoC. This chip includes 4x SiFive U74 cores, Gigabit Ethernet, PCIe and many other interfaces.

The board itself includes a choice of 2, 4 or 8GB RAM, a now-standard 40-pin GPIO header, a microSD card slot, an eMMC socket as well as an M.2 slot for an NVMe SSD, and of course several USB ports and 2 Gigabit Ethernet ports.

OK, but… can it run Debian?

In order to run any kind of Linux system (including a Debian-based one), we need the following elements:

  1. a bootloader
  2. a kernel which includes support for this specific device
  3. a root filesystem containing userspace software, compiled for the device’s CPU architecture

Das U-boot Abenteuer

The first item is already present on the board, in the form of a SPI NOR flash chip, factory-flashed with u-boot. However, the one present on our (early bird) board lacked support for standard “distroboot”, therefore we had to build a more recent version from StarFive’s u-boot repository and flash it using the documented recovery procedure.

It required backporting an upstream patch to be able to build using the compiler and binutils from current Debian testing. However, for some reason, using the latest commit (at the time) of the JH7110_VisionFive2_devel branch led to a non-functional binary, unable to detect the RAM size of our board. One more patch later, however, we got a working bootloader!
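For reference, cross-building u-boot from such a tree typically goes along these lines (a sketch; the defconfig name is an assumption – check the configs/ directory of the tree you are building):

sudo apt install gcc-riscv64-linux-gnu    # cross-toolchain from Debian
make CROSS_COMPILE=riscv64-linux-gnu- starfive_visionfive2_defconfig
make CROSS_COMPILE=riscv64-linux-gnu- -j$(nproc)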

It wasn’t capable of using “distroboot” due to wrong and/or missing environment variables, which we later added. Feel free to refer to our patched source tree, or simply download the binary files from our artifacts branch.

A working kernel

Similar to u-boot, StarFive provides a kernel repository, including all needed patches to get it running on the VisionFive 2 board… and just like u-boot, it doesn’t build on a current Debian testing…

This is easily solved by, once again, backporting an upstream patch. Once built with the usual make bindeb-pkg command (and, of course, the proper values for the ARCH and CROSS_COMPILE environment variables), we get a .deb containing the kernel and its modules, which boots just fine. However, the default kernel config is somewhat limited and doesn’t allow us to run Docker (remember, we want to make this board a GitLab CI runner!). With some additional tweaking, we finally got to a point where this kernel is fully usable for our purpose.
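Concretely, the cross-build boils down to something like this (a sketch; substitute the board-specific defconfig shipped in the vendor tree, whose exact name we treat as an assumption here):

make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- defconfig   # or the board's defconfig
make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- -j$(nproc) bindeb-pkg

The resulting linux-image-*.deb can then be installed into the target rootfs like any other Debian package.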

Our patched kernel is of course also available on GitLab.

Tying it all together

Putting the architecture differences aside, this device is quite similar to the PinePhone Pro from a low-level perspective: the bootloader is directly flashed to the board’s SPI flash, and we need to create a bootable image to be written on a microSD card.

We already know how to do this on the PPP, so why not re-use this knowledge for the VisionFive 2? Not wanting to mess with Mobian’s “production” codebase, we imported this repo to gitlab.com and made the necessary changes there. As you’ll notice from the single commit needed to generate the VisionFive 2 image, the changes are very minimal, demonstrating the flexibility and power of the tools we’ve been developing over the past 3 (!) years.

Let’s walk through those changes:

  • the build.sh script is modified to support a new riscv64 device (it could/should probably have been named vf2, but remember this is only a quick and dirty experiment), using the riscv64 architecture and fetching its device-specific recipes from devices/riscv64
  • the devices/riscv64 folder is mostly a copy from devices/rockchip with only a few small adjustments:
    • as the kernel package is locally-built and not available from the Mobian repository, it is copied under a subfolder and imported to the rootfs using the overlay action, then manually installed using a run action (see the sketch after this list)
    • no package is installed for phosh as we want to create a minimal system.
  • the global packages-base.yaml sub-recipe is modified to always include openssh-server, which is pretty much a requirement for a headless system.
  • the most important changes lie in the rootfs.yaml recipe:
    • use the debian-ports archive when building for the riscv64 architecture; as this archive uses different GPG keys than the main Debian archive, we use the relevant key file from the host system (this requires first installing the debian-ports-archive-keyring package on the host system).
    • as debootstrap won’t install it, we install the debian-ports-archive-keyring package to the rootfs so it can be updated over time.
    • we drop the packages-$environment sub-recipe (minimal system, remember?)
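For illustration, the overlay/run combination mentioned above could look roughly like this in a debos-style recipe excerpt (paths and file names are hypothetical):

  - action: overlay
    description: Copy the locally-built kernel package into the rootfs
    source: overlays/kernel
    destination: /opt/kernel

  - action: run
    description: Install the kernel package inside the rootfs
    chroot: true
    command: apt-get -y install /opt/kernel/linux-image-*.deb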

With those changes, building the image is as simple as running ./build.sh -t riscv64. It can then be flashed to a microSD and should bring you a fully functional Debian/Mobian system on your VisionFive 2 :)

Note: we could have easily made it a “pure” Debian image, however we carry a patched version of u-boot-menu which simplifies the image generation process a lot.

Final words

This process can probably easily be replicated for PINE64’s Star64 once it becomes available, as both boards use the same SoC. Likewise, this experiment can and will be re-used as the first step towards proper riscv64 support in Mobian, hopefully in a not-so-distant future ;)

We hope this article also highlights how building and/or using flexible and powerful tools can greatly help expand a project’s feature set, and can even be used for tasks that are only remotely related. It also shows how “easy” it can be to create Debian-based images for embedded systems (as opposed to, for example, having to re-compile the whole world twice using Yocto ;) ).

Finally, we want to point out how such things can only happen in the FLOSS world:

  • being able to build each package from source is what makes the Debian ports initiative possible at all
  • the vendor publishing the source code to the bootloader and kernel allowed us to build those with the needed options to fulfill our needs
  • anyone (with decent knowledge of Linux system internals) can build their own image easily, adding or removing packages and tweaks as they see fit, rather than being stuck with a generic (and probably bloated) third-party image.

25 January, 2023 12:00AM

January 24, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How digital twins enable data-driven automotive supply chains

The automotive industry is facing one of its biggest revolutions since the advent of automation. In this post, we will go through aspects of Industry 4.0 and how OEMs can turn its challenges into opportunities.

To put it simply, the first Industrial Revolution relied on steam power, the second one on electricity and the third one on computers. What about the fourth Industrial Revolution everyone is talking about? I would describe it as a data-driven revolution. 


Data is a broad theme, as it enables a multitude of production processes that were not possible before. In order to leverage this data, the industry will use internet of things (IoT) technologies allowing the upload, use and sharing of said data. Data will enable the automotive industry’s digital transformation: in the vehicle, in the factory and beyond. 

In a previous post, we discussed digital twins. While the most obvious use case for digital twins in automotive is optimising vehicle development, there’s a lot of value in using digital twins for supply chain and factory modelling and optimisation.

Supply chain use cases

Building a vehicle today requires integrating more than 3,000 parts! These parts come from hundreds of suppliers around the world that need to work hand in hand with the OEM.

From designing a part, to sourcing, producing and delivering it, the entire supply chain needs to behave like clockwork. On top of that, there are environmental and social commitments to consider, such as worker well-being.

Logistics and inventory management

Building and using a digital supply chain twin helps monitor the status of each part in the supply chain, while ensuring the previously stated constraints are respected.

For instance, data sharing can enable both suppliers and OEMs to have a clear and up-to-date view of where parts are coming from and where they are. This implies connectivity either on the shipment itself or with scanning solutions each step of the way.

An analysis of these elements can help with unexpected events that could occur during the delivery – re-routing specific parts, for example. This knowledge also enables cost savings and a reduction in environmental impact, as the routing of each part can be improved.

A digital twin can also help companies avoid inventory scrapping and better balance part purchases based on the stock on hand. Digital twins can likewise be a powerful tool against inventory shortages, as they provide visibility into stock levels.

Supply chain automation 

Digital twins are useful on their own but the end goal is supply chain automation. Having a digital supply chain twin enables the use of AI/ML solutions for most tasks. The rest can be handled based on event and analytics-driven alerts. Most issues can be anticipated with preventive feedback based on your supply chain’s data. Your experts can focus on making sure these events are treated correctly.


In our upcoming 2023 Automotive Trends white paper, we describe the semiconductor shortage. With the right technology and data, OEMs can have better visibility into their inventory levels, as well as forecasted inventory information based on suppliers’ inputs. While this wouldn’t have avoided the chip shortage altogether, it would have reduced the resulting supply chain disruptions.

We have explored supply chain use cases. But what about applying digital twins in the factory?

Factory use cases

The digital twin approach that we mentioned for vehicles and for the supply chain has a lot of potential in the factory itself. Imagine all the robots on the factory floor sharing their data in real time and adapting their actions based on the context of the factory and of the supply chain.

Downtime prevention

Digital twins make it possible to anticipate failures and schedule planned maintenance rather than react to unexpected downtime. Some tasks can be kept at the edge for better response time and security, while more compute-intensive processes like factory floor optimisation can be pushed to the cloud and handled by HPC clusters.

Factory design

Having a factory digital twin also allows for an optimised view of factory scalability: for example, finding the best position for additional robots in case the manufacturing chain needs to expand. Advanced simulations can contribute greatly to improving your manufacturing process and assembly lines.


Quality assurance use cases

Automations based on aggregated data can provide accurate information on potential quality issues too. Some defects aren’t immediately visible to the human eye. Catching quality issues early avoids warranty costs related to maintenance and vehicle recalls.

Digital twins allow the entire manufacturing process to be monitored by teams all around the world, allowing OEMs to leverage their talent wherever they may be. Most, if not all, automotive Industry 4.0 innovations rely on data. But data alone is not sufficient to transform the industry; certain enabling technologies are becoming essential.

Addressing related challenges with digital twins

The right data makes it possible to build and effectively run a factory, but there are always challenges to address. Worker enablement is one of them. Training factory workers is difficult: it takes time and commitment from both sides, and retention tends to be low. Ensuring that workers are trained on new machines, tools and software comes at a high cost.

Combined with digital twins, virtual reality (VR) and augmented reality (AR) make it possible to train factory workers outside the factory with very realistic VR 3D environments in which specific scenarios are played out. With AR, the worker can see indications related to specific scenario steps being displayed while they are interacting with machines and parts.

We talked about the importance of transparent and collaborative data so that OEMs can leverage its traceability. This is all the more true when feeding said data into AI/ML solutions. From design to servicing, AI solutions are a key enabler of supply chain automation.

One of the issues facing factory digital transformation is that factories still run legacy systems that cannot be integrated with new ways of working, and therefore cannot share data in the expected manner. The transition from these non-connected machines to sensor-packed, smart, collaborative robots might take a while due to cost and profitability constraints. We anticipate that the investments will be worth the cost when taking into account the potential CO2 reductions that a smart supply chain and factory can provide.


Conclusion

In conclusion, Industry 4.0 is ushering in a new era of automation and digitalisation that brings with it a host of challenges for manufacturers. However, by leveraging the power of digital twins, companies can effectively navigate this evolving landscape and stay competitive in the marketplace.

Ubuntu Core provides a secure and reliable foundation for industry applications, which enables companies to easily connect, manage and update their devices and machines remotely. This means that manufacturers can easily monitor and control the devices that power up their production lines, ensuring that they are running smoothly and efficiently at all times.

Additionally, Canonical enables companies to leverage the power of artificial intelligence and machine learning to optimise their production processes, leading to improved efficiency, cost savings and more advanced factory and supply chain related digital twins.

In short, Canonical empowers OEMs and suppliers in their Industry 4.0 transition. By embracing software, OEMs can successfully face new industry challenges with secure and reliable connected factory and supply chain devices. Whether it is for your connected OT, your factory robots, or harnessing the value of your data using HPC clusters, we have solutions ready to help you improve your supply chain and factory efficiency.

Contact Us

Curious about automotive at Canonical? Check out our webpage.


24 January, 2023 12:00PM

January 23, 2023

The Fridge: Ubuntu Weekly Newsletter Issue 771

Welcome to the Ubuntu Weekly Newsletter, Issue 771 for the week of January 15 – 21, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

23 January, 2023 10:00PM

Andrea Corbellini: What time is it? A simple question with a complex answer. How computers synchronize time

Ever wondered how your computer or your phone displays the current date and time accurately? What keeps all the devices in the world (and in space) in agreement on what time it is? What makes applications that require precise timing possible?

In this article, I will explain some of the challenges with time synchronization and explore two of the most popular protocols that devices use to keep their time in sync: the Network Time Protocol (NTP) and the Precision Time Protocol (PTP).

What is time?

It wouldn’t be a good article about time synchronization without spending a few words about time. We all have an intuitive concept of time since childhood, but stating precisely what ‘time’ is can be quite a challenge. I’m going to give you my idea of it.

Here is a simple definition to start with: time is how we measure changes. If the objects in the universe didn’t change and appeared to be fixed, without ever moving or mutating, I think we could all agree that time wouldn’t be flowing. Here by ‘change’ I mean any kind of change: from objects falling or changing shape, to light diffusing through space, or our memories building up in our mind.

This definition may be a starting point but does not capture all we know about time. Something that it does not capture is our concept of past, present, and future. From our day-to-day experience, we know in fact that an apple would fall off the tree due to gravity, under the normal flow of time. If we observed an apple rising from the ground, attaching itself to the tree (without the action of external forces), we could perhaps agree that what we’re observing is time flowing backward. And yet, both the apple falling off the tree and the apple rising from the ground are two valid changes from an initial state. This is where causality comes into play: time flows in such a way that the cause must precede the effect.

We can now refine our definition of time as an ordered sequence of changes, where each change is linked to the previous one by causality.

How do we measure time?

Now we have a more precise definition of time, but we still don’t have enough tools to define what is a second, an hour, or a day. This is where things get more complicated.

If we look at the definition of ‘second’ from the international standard, we can see that it is currently defined from the emission frequency of caesium-133 (133Cs) atoms. If you irradiate caesium-133 atoms with light of sufficient energy, the atoms will absorb it, get excited, and release the energy back in the form of light at a specific frequency. That emission frequency is defined to be exactly 9192631770 Hz, and the second is defined as the duration of 9192631770 periods of that radiation. This definition is known as the caesium standard.

Here’s a problem to think about: how do we know that a caesium-133 atom, after getting excited, really emits light at a fixed frequency? The definition of second is implying that the frequency is constant and the same all over the world, but how do we know it’s really the case? This assumption is supported by quantum physics, according to which atoms can only transition between discrete (quantified) energy states. When an atom gets excited, it transitions from an energy state $E_1$ to an energy state $E_2$. Atoms like to be in the lowest energy state, so the atom will not stay in the state $E_2$ for long, and will want to go back to $E_1$. When doing that, it will release an amount of energy of exactly $E_2 - E_1$ in the form of a photon. According to the Planck formula, the photon will have frequency $f = (E_2 - E_1) / h$ where $h$ is the Planck constant. Because the energy levels are fixed, the resulting emission frequency is fixed as well.
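
For example, plugging in the numbers (with $h \approx 6.626 \times 10^{-34}$ J·s), the energy gap behind the caesium-133 transition comes out to roughly

$$E_2 - E_1 = h f \approx 6.626 \times 10^{-34} \times 9.193 \times 10^{9} \approx 6.09 \times 10^{-24} \text{ J}$$

a minuscule amount of energy, which gives a sense of how delicate these measurements are.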

By the way, this process of absorption and emission of photons is the same process that causes fluorescence.

Visualization of the absorption and emission process for an atom transitioning from a ground state $E_1$ to an excited state $E_2$.

Assuming that caesium-133 atoms emit light at a single, fixed frequency, we can now build extremely accurate caesium atomic clocks and measure spans of time with them. Existing caesium atomic clocks are estimated to be so precise that they may lose one second every 100 million years.

The same approach can be applied to other substances as well: atomic clocks have been built using rubidium (Rb), strontium (Sr), hydrogen (H), krypton (Kr), ammonia (NH3) and ytterbium (Yb), each with its own emission frequency and its own accuracy. The most accurate clock ever built is a strontium clock which may lose one second every 15 billion years.

Time dilation

If we have two atomic clocks and we let them run for a while, will they show the same time? This might sound like a rhetorical question: we just established that the emission frequencies of atoms are fixed, so why would two identical atomic clocks ever get out of sync? Well, as a matter of fact, two identical atomic clocks may get out of sync, and the problem lies not with the clocks, but with time itself: it appears that time does not always flow in the same way everywhere.

Many experiments have shown this effect on our planet, the most famous one probably being the Hafele-Keating experiment. In this experiment, a set of caesium clocks was placed on an airplane flying around the world west-to-east, another set was placed on an airplane flying east-to-west, and another set remained on the ground. The 3 sets of clocks, which were initially in sync before the planes took off, were showing different times once reunited after the trip. This experiment and similar ones have been repeated and refined multiple times, and they all showed consistent results.

These effects were due to time dilation, and the results were consistent with the predictions of special relativity and general relativity.

Time dilation due to special relativity

Special relativity predicts that if two clocks are moving with two different velocities, they are going to measure different spans of time.

Special relativity is based on two principles:

  • the speed of light is constant;
  • there are no privileged reference frames.

To understand how these principles affect the flow of time, it’s best to look at an example: imagine that a passenger is sitting on a train with a laser and a mirror in front of them. Another person is standing on the ground next to the railroad and observing the train passing. The passenger points the laser perpendicular to the mirror and turns it on.

What the passenger will observe is the beam of light from the laser to hit the mirror and come back in a straight line:

Portion of the beam of light in the train reference frame, emitted from the laser (bottom) and bouncing off the mirror (top). Note how it follows a vertical path.

From the observer perspective, however, things are quite different. Because the train is moving relative to the observer, the beam looks like it’s taking a different, slightly longer path:

The same portion of the light beam as before, but this time in the observer reference frame. Note how it follows a diagonal path, longer than the vertical path in the train reference frame.

If both the passenger and the observer measure how long it takes for the light beam to bounce off the mirror and return to its source, and if the principles of special relativity hold, then the two persons will record different measurements. If the speed of light is constant, and there is no privileged reference frame, then the speed of light $c$ must be the same in both reference frames. From the passenger’s perspective, the beam has traveled a distance of $2 L$, taking a time $2 L / c$. From the observer’s perspective, the beam has traveled a longer distance $2 M$, with $M > L$, taking a longer time $2 M / c$.

Comparison of the light beams as seen from the two reference frames. In the train reference frame, the light beam is a vertical line of length $L$ (therefore traveling a path of length $2 L$ after bouncing off the mirror). In the observer reference frame, the light beam is distorted due to the velocity of the train. If the train moves at speed $v$, then the light beam travels a total length of $2 M = 2 L c / \sqrt{c^2 - v^2}$.
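
As a quick sanity check of that last formula: in the observer reference frame the beam takes a time $M / c$ to reach the mirror, during which the train advances horizontally by $v M / c$. The Pythagorean theorem then gives

$$M^2 = L^2 + \left( \frac{v M}{c} \right)^2 \quad \Longrightarrow \quad M = \frac{L c}{\sqrt{c^2 - v^2}}$$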

How can we reconcile these counterintuitive measurements? Special relativity does so by stating that time flows differently in the two reference frames. Time runs “slower” inside the train and “faster” for the observer. One consequence is that the passenger ages less than the observer.

Time dilation due to special relativity is not easily detectable in our day-to-day life, but it can still cause problems with high-precision clocks. This time dilation may in fact cause clock drifts in the order of hundreds of nanoseconds per day.

Time dilation due to general relativity

Experimental data shows that clocks in a gravitational field do not follow (solely) the rules of special relativity. This does not mean that special relativity is wrong, but it’s a sign that it is incomplete. This is where general relativity comes into play. In general relativity, gravity is not seen as a force, like in classical physics, but rather as a deformation of spacetime. All objects that have mass bend spacetime, and the path of objects traveling through spacetime is affected by its curvature.

An apple falling from a tree is not going towards the ground because there’s a force “pushing” it down, but rather because that’s the shortest path in spacetime (a straight line in bent spacetime).

Apple falling according to classical physics, following a parabolic motion.
Apple falling according to general relativity, following a straight path in distorted spacetime.

The larger the mass of an object, the larger the curvature of spacetime it produces. Time flows “slower” near large masses, and “faster” away from them. Interesting facts: people on a mountain age faster than people at sea level, and it has been calculated that the core of the Earth is 2.5 years younger than the crust.

The time dilation caused by gravity on the surface of the Earth may amount to clock drifts in the order of hundreds of nanoseconds per day, just like special relativity.

Can we actually synchronize clocks?

Given what we have seen about time dilation, and that we may experience time differently, does it even make sense to talk about time synchronization? Can we agree on time if time flows differently for us?

The short answer is yes: the trick is to restrict our view to a closed system, like the surface of our planet. If we place some clocks scattered across the system, they will almost certainly experience different flows of time, due to different velocities, different altitudes, and other time dilation phenomena. We cannot make those clocks agree on how much time has passed since a specific event; what we can do is aggregate all the time measurements from the clocks and average them out. This way we end up with a value that is representative of how much time has passed on the entire system—in other words, we get an “overall time” for the system.

Very often, the system that we consider is not restricted to just the surface of our planet, but involves the Sun, and sometimes the moon as well. In fact, what we call one year is roughly the time it takes for the Earth to complete an orbit around the Sun; one day is roughly the time it takes for the Earth to spin around itself once and face the Sun in the same position again. Including the Sun (or the moon) in our time measurements is complicated: in part this complexity comes from the fact that precise measurements of the Earth’s position are difficult, and in part from the fact that the Earth’s rotation is not regular, not fully predictable, and it’s slowing down. It’s worth noting that climate and geological events affect the Earth’s rotation in a measurable way, and such events are very hard to model accurately.

What is important to understand here is that the word ‘time’ is often used to mean different things. Depending on how we measure it, we can end up with different definitions of time. To avoid ambiguity, I will classify ‘time’ into two big categories:

  • Elapsed time: this is the time measured directly by a clock, without using any extra information about the system where the clock lies into or about other clocks.

    We can use elapsed time to measure durations, latencies, frequencies, as well as lengths.

  • Coordinated time: this is the time measured by using a clock, paired with information about the system where it’s located (like position, velocity, and gravity), and/or information from other clocks.

    This notion of time is mostly useful for coordinating events across the system. Some practical examples: scheduling the execution of tasks in the future, checking the expiration of certificates, real-time communication.

Time standards

Over the centuries several time standards have been introduced to measure coordinated time. Nowadays there are three major standards in use: TAI, UTC, and GNSS. Let’s take a brief look at them.

TAI

International Atomic Time (TAI) is based on the weighted average of the elapsed time measured by several atomic clocks spread across the world. The more precise a clock is, the more it contributes to the weighted average. The fact that the clocks are spread over multiple locations, and the use of an average, mitigates relativistic effects and yields a value that we can think of as the overall time flow experienced by the surface of the Earth.
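
In symbols, a simplified sketch of the idea (the actual BIPM algorithm involves additional corrections and steering): if clock $i$ reports an elapsed time $T_i$ and is assigned a weight $w_i$ reflecting its precision, the TAI estimate is

$$T_{\mathrm{TAI}} = \frac{\sum_i w_i T_i}{\sum_i w_i}$$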

Note that the calculations for TAI do not include the Earth’s position with respect to the Sun.

Distribution of the laboratories that contribute to International Atomic Time (TAI) all over the world as of 2020. Map taken from the BIPM Annual Report on Time Activities.

UTC

Coordinated Universal Time (UTC) is built upon TAI. UTC, unlike TAI, is periodically adjusted to synchronize it with the Earth’s rotation around itself and the Sun. The goal is to make sure that 24 UTC hours are equivalent to a solar day (within a certain degree of precision). Because, as explained earlier, the Earth’s rotation is irregular, not fully predictable, and slowing down, periodic adjustments have to be made to UTC at irregular intervals.

The adjustments are performed by inserting leap seconds: these are extra seconds that are added to the UTC time to “slow down” the UTC time flow and keep it in sync with Earth’s rotation. On days when a leap second is inserted, UTC clocks go from 23:59:59 to 23:59:60.

A visualization of leap seconds inserted into UTC until the end of 2022. Each orange dot represents a leap second (not to scale). When UTC was introduced in 1972, it started with 10 seconds of offset from TAI. As you can see, the insertion of leap seconds is very irregular: some decades have seen many leap seconds, others far fewer.

It’s worth noting that the practice of inserting leap seconds is most likely going to be discontinued in the future. The main reason is that leap seconds have been a source of complexity and bugs in computer systems, and the benefit-to-pain ratio of leap seconds is not considered high enough to keep adding them. If leap seconds are discontinued, UTC will become effectively equivalent to TAI, with a constant offset (currently 37 seconds).

GNSS

Global Navigation Satellite System (GNSS) is based on a mix of accurate atomic clocks on the ground and less accurate atomic clocks on artificial satellites orbiting the Earth. The clocks on the satellites, being less accurate and subject to a variety of relativistic effects, are updated about twice a day from ground stations to correct clock drifts. Nowadays there are several implementations of GNSS around the world, including:

  • GPS (United States)
  • GLONASS (Russia)
  • Galileo (European Union)
  • BeiDou (China)

When GPS was launched, it was synchronized with UTC; however GPS, unlike UTC, is not adjusted to follow the Earth’s rotation, and because of that, GPS today differs from UTC by 18 seconds (18 leap seconds have been inserted since GPS was launched in 1980). BeiDou also does not implement leap seconds. GPS and BeiDou are therefore compatible with TAI, from which they differ only by constant offsets.

Other GNSS systems like Galileo and GLONASS do implement leap seconds and are therefore compatible with UTC.

Time synchronization protocols

Dealing with coordinated time is not trivial. Different ways to deal with relativistic effects and Earth’s irregular rotation result in different time standards that are not always immediately compatible with each other. Nonetheless, once we agree on a well-defined time standard, we have a way to ask the question “what time is it?” and receive an accurate answer all around the world (within a certain degree of precision).

Let’s now take a look at how computers on a network can obtain an accurate value for the coordinated time given by a time standard. I will describe two popular protocols: NTP and PTP. The two use similar algorithms but offer different levels of precision: milliseconds (NTP) and nanoseconds (PTP). Both use UDP/IP as the transport protocol.

Network Time Protocol (NTP)

The way time synchronization works with NTP is the following: a computer that wants to synchronize its time periodically queries an NTP server (or multiple servers) to get the current coordinated time. The server that provides the current coordinated time may have obtained the time from an accurate source clock connected to the server (like an atomic clock synchronized with TAI or UTC, or a GNSS receiver), or from a previous synchronization from another NTP server.

To record how “fresh” the coordinated time from an NTP server is (how distant the NTP server is from the source clock), NTP has a concept of stratum: this is a number that indicates the number of ‘hops’ from the accurate clock source:

  • stratum 0 is used to indicate an accurate clock;
  • stratum 1 is a server that is directly connected to a stratum 0 clock;
  • stratum 2 is a server that is synchronized from a stratum 1 server;
  • stratum 3 is a server that is synchronized from a stratum 2 server;
  • and so on...

The maximum stratum allowed is 15. There’s also a special stratum 16: this is not a real stratum, but a special value used by clients to indicate that time synchronization is not happening (most likely because the NTP servers are unreachable).

Examples of different NTP strata in a distributed network. A stratum $n$ server obtains its time from stratum $n - 1$ servers.

The major problem with synchronizing time over a network is latency. Networks can be composed of multiple links, some of which may be slow or overloaded. Simply requesting the current time from an NTP server without taking latency into account would lead to an imprecise response. Here is how NTP deals with this problem:

  1. The NTP client sends a request via a UDP packet to an NTP server. The packet includes an originate timestamp $t_0$ that indicates the local time of the client when the packet was sent.
  2. The NTP server receives the request and records the receive timestamp $t_1$, which indicates the local time of the server when the request was received.
  3. The NTP server processes the request, prepares a response, and records the transmit timestamp $t_2$, which indicates the local time of the server when the response was sent. The timestamps $t_0$, $t_1$ and $t_2$ are all included in the response.
  4. The NTP client receives the response and records the timestamp $t_3$, which indicates the local time of the client when the response was received.

The NTP time synchronization algorithm.

Our goal is now to calculate an estimate for the network latency and processing delay and use that information to calculate, in the most accurate way possible, the offset between the NTP client clock and the NTP server clock.

The difference $t_3 - t_0$ is the duration of the overall exchange. The difference $t_2 - t_1$ is the duration of the NTP server processing delay. If we subtract these two durations, we get the total network latency experienced, also known as round-trip delay:

$$\delta = (t_3 - t_0) - (t_2 - t_1)$$

If we assume that the transmit delay and the receive delay are the same, then $\delta / 2$ is the average network latency (this assumption may not be true in a general network, but that’s the assumption that NTP makes).

Under this assumption, the time $t_0 + \delta/2$ is the time on the client’s clock that corresponds to $t_1$ on the server’s clock. Similarly, $t_3 - \delta/2$ on the client’s clock corresponds to $t_2$ on the server’s clock. These correspondences let us calculate two estimates for the offset between the client’s clock and the server’s clock:

$$\begin{align*} \theta_1 & = t_1 - (t_0 + \delta/2) \\ \theta_2 & = t_2 - (t_3 - \delta/2) \end{align*}$$

We can now calculate the client-server offset $\theta$ as an average of those two estimates:

$$\begin{align*} \theta & = \frac{\theta_1 + \theta_2}2 \\ & = \frac{t_1 - (t_0 + \delta/2) + t_2 - (t_3 - \delta/2)}2 \\ & = \frac{t_1 - t_0 - \delta/2 + t_2 - t_3 + \delta/2}2 \\ & = \frac{(t_1 - t_0) + (t_2 - t_3)}2 \\ \end{align*}$$

Note that the offset $\theta$ may be a positive duration (meaning that the client clock is in the past), a negative duration (meaning that the client clock is in the future) or zero (meaning that the client clock agrees with the server clock, which is unlikely).

After calculating the offset $\theta$, the client can update its local clock by shifting it by $\theta$ and from that point the client will be in sync with the server (within a certain degree of precision).
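
To make this concrete, here is a worked example with hypothetical timestamps, expressed in seconds: the client sends its request at $t_0 = 0.000$ (client clock), the server receives it at $t_1 = 10.002$ and replies at $t_2 = 10.003$ (server clock), and the client receives the response at $t_3 = 0.005$ (client clock). Then:

$$\delta = (0.005 - 0.000) - (10.003 - 10.002) = 0.004$$

$$\theta = \frac{(10.002 - 0.000) + (10.003 - 0.005)}{2} = \frac{10.002 + 9.998}{2} = 10.000$$

The round-trip delay is 4 milliseconds, and the client concludes that its clock is 10 seconds behind the server’s, so it shifts it forward by 10 seconds.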

Once the synchronization is done, it is expected that the client’s clock will start drifting away from the server’s clock. This may happen due to relativistic effects and more importantly because often clients do not use high-precision clocks. For this reason, it is important that NTP clients synchronize their time periodically. Usually NTP clients start by synchronizing time every minute or so when they are started, and then progressively slow down until they synchronize time once every half an hour or every hour.

There are some drawbacks with this synchronization method:

  • The request and response delays may not be perfectly symmetric, resulting in inaccuracies in the calculations of the offset $\theta$. Network instabilities, packet retransmissions, change of routes, queuing may all cause unpredictable and inconsistent delays.
  • The timestamps $t_1$ and $t_3$ must be set as soon as possible (as soon as the packets are received), and similarly $t_0$ and $t_2$ must be set as late as possible. Because NTP is implemented at the software level, there may be non-negligible delays in acquiring and recording these timestamps. These delays may be exacerbated if the NTP implementation is not very performant, or if the client or server are under high load.
  • Errors propagate and add up when increasing the number of strata.

For all these reasons, NTP clients do not synchronize time from just a single NTP server, but from multiple ones. NTP clients take into account the round-trip delays, stratum, and jitter (the variance in round-trip delays) to decide the best NTP server to get their time from. Under ideal network conditions, an NTP client will prefer a server with a low stratum. However, an NTP client may prefer an NTP server with a higher stratum and more reliable connectivity over an NTP server with a low stratum but a very unstable network connection.

The precision offered by NTP is in the order of a few milliseconds.

Precision Time Protocol (PTP)

PTP is a time synchronization protocol for applications that require more accuracy than the one provided by NTP. The main differences between PTP and NTP are:

  • Precision: NTP offers millisecond precision, while PTP offers nanosecond precision.
  • Time standard: NTP transmits UTC time, while PTP transmits TAI time and the difference between TAI and UTC.
  • Scope: NTP is designed to be used over large networks, including the internet, while PTP is designed to be used in local area networks.
  • Implementation: NTP is mainly software based, while PTP can be implemented both via software and on specialized hardware. The use of specialized hardware considerably reduces delays and jitter introduced by software.
Time Card: an open-source hardware card with a PCIe interface that can be plugged into a computer to serve as a PTP master. It can optionally be connected to a GNSS receiver and contains a rubidium (Rb) clock.
  • Hierarchy: NTP can support a complex hierarchy of NTP servers, organized via strata. While PTP does not put a limitation on the number of nodes involved, the hierarchy is usually only composed of master clocks (the source of time information) and slave clocks (the receivers of time information). Sometimes boundary clocks are used to relay time information to network segments that are unreachable by the master clocks.
  • Clock selection: in NTP, clients select the best NTP server to use based on the NTP server clock quality and the network connection quality. In PTP, slaves do not select the best master clock to use. Instead, master clocks perform a selection between themselves using a method called best master clock algorithm. This algorithm takes into account the clock’s quality and input from system administrators, and does not factor network quality at all. The master clock selected by the algorithm is called grandmaster clock.
  • Algorithm: in NTP, clients poll the time information from servers periodically and calculate the clock offset using the algorithm described above (based on the timestamps $t_0$, $t_1$, $t_2$ and $t_3$). With PTP, the algorithm used by slaves to calculate the offset from the grandmaster clock is somewhat similar to the one used in NTP, but the order of operations is different:

    1. the grandmaster periodically broadcasts its time information $T_0$ over the network;
    2. each slave records the time $T_1$ when the broadcasted time was received;
    3. each slave sends a packet to the grandmaster at time $T_2$;
    4. the grandmaster receives the packet at time $T_3$ and sends that value back to the slave.

    The average network delay can be calculated as $\delta = ((T_3 - T_0) - (T_2 - T_1)) / 2$. The clock offset can be calculated as $\theta = ((T_1 - T_0) + (T_2 - T_3)) / 2$.
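
    Using hypothetical numbers again: if the grandmaster broadcasts $T_0 = 0.000$ (grandmaster clock), a slave whose clock runs ahead receives it at $T_1 = 10.002$ and replies at $T_2 = 10.010$ (slave clock), and the grandmaster receives the reply at $T_3 = 0.012$, then $\delta = ((0.012 - 0.000) - (10.010 - 10.002)) / 2 = 0.002$ and $\theta = ((10.002 - 0.000) + (10.010 - 0.012)) / 2 = 10.000$: a 2 millisecond one-way delay, with the slave clock 10 seconds ahead of the grandmaster.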

The PTP time synchronization algorithm.

Summary

  • Synchronizing time across a computer network is not an easy task, and first of all requires agreeing on a definition of ‘time’ and on a time standard.
  • Relativistic effects make it so that time may not flow at the same speed all over the globe, and this means that time has to be measured and aggregated across the planet in order to get a suitable value that can be agreed on.
  • Atomic clocks and GNSS are the clock sources used for most applications nowadays.
  • NTP is a time synchronization protocol that can be used on large and distributed networks like the internet and provides millisecond precision.
  • PTP is a time synchronization protocol for local area networks and provides nanosecond precision.

23 January, 2023 07:15PM

hackergotchi for OSMC

OSMC

Happy New Year! OSMC's January update is here

We hope that you had a good Christmas and New Year.

Our first update of the year brings Kodi v19.5, the final stable version of Kodi 19.x (Matrix), along with a few improvements to smooth the upgrade process. We are now working on preparing Kodi v20 (Nexus) for OSMC users.

Bug fixes

  • Fixed an issue where the macOS installer did not run as expected unless it was in a specific directory

Improving the user experience

  • Moonlight is now supported on Vero 4K / 4K + after a number of years of requests. Sadly, the NVIDIA GameStream project has been discontinued; however, it looks like it will live on in the open source community. OSMC supplied Moonlight developers with hardware and decoding knowledge to help get things finally supported on Vero 4K / 4K +. You can learn more about our collaboration here.
  • Added composite video output support for Raspberry Pi
  • Label partitions for new installations so we can support Pi booting exclusively from USB in the future without the need for an SD card
  • Updated Vero 4K / 4K + whitelisting logic for a better experience
  • Support EDID decoding in Vero 4K / 4K + logs using the edid-decode utility
  • Reinstated Linux installer support for OSMC using AppImage
  • Updated the macOS installer to support newer versions of macOS as well as M1/M2 hardware
  • Vero 4K / 4K +: added 3D SBS/TAB flip eye support
  • APT updates are now served exclusively over HTTPS
  • Added support for newer Hauppauge DualHD tuners on Vero 4K / 4K +

Miscellaneous

  • Updated Raspberry Pi firmware
  • Added USB_IP modules for Vero 4K / 4K + customers
  • Readied other infrastructure for HTTPS-based updates

Wrap up

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC set up. Of course, if you have updates scheduled automatically you should receive an update notification shortly.

If you enjoy OSMC, please follow us on Twitter, like us on Facebook and consider making a donation if you would like to support further development.

You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the best of OSMC.

Happy New Year!

23 January, 2023 04:21PM by Sam Nazarko

hackergotchi for Tails

Tails

Tails 5.9 is out

We are sorry that Tails 5.8 affected many of you so badly.

Thanks to your patience and feedback, we were able to solve most of these new issues.

Changes and updates

  • Update Tor Browser to 102.7.

  • Update the Tor client to 0.4.7.13.

  • Simplify the error screen of the Tor Connection assistant when connecting automatically.

  • Improve the wording of the backup utility for the Persistent Storage.

  • Remove the confirmation dialog when starting the Unsafe Browser.

Fixed problems

  • Fix support for some graphics cards:

    • Update the Linux kernel to 6.0.12. This improves the support for newer hardware in general: graphics, Wi-Fi, and so on. (#18467)

    • Remove 2 boot options from the Troubleshooting Mode that break support for some graphics cards: nomodeset and vga=normal. (#19321)

    Please let us know if the support for your graphics cards has been fixed or is still broken.

  • Fix starting AppImages that use the Qt toolkit like Feather and Bitcoin-Qt. (#19326)

  • Fix clipboard encryption and decryption in Kleopatra. (#19329)

  • Fix at least 2 cases of Persistent Storage not activating:

    • When activation takes longer (#19347)

    • When the Dotfiles feature includes symbolic links (#19346)

    Please keep reporting issues with the new Persistent Storage. We give them top priority!

  • Fix 3 clipboard operations with KeePassXC:

    • Copying a passphrase to unlock a database (#19237)

    • Using the auto-type feature (#19339)

    • Clearing passwords automatically from the clipboard after 10 seconds

  • Fix the display of the applications menu that was broken in some GTK3 applications installed as Additional Software. (#19371)

  • Localize the homepage of Tor Browser when started from the Tor Connection assistant. (#19369)

For more details, read our changelog.

Known issues

Please keep reporting issues with the new Persistent Storage and when starting on graphics cards that used to work with Tails.

Tor Browser has no minimize and maximize buttons (#19328)

To work around this:

  1. Right-click on the Tor Browser tab in the window list at the bottom of the screen.

  2. Choose Minimize or Maximize.

Welcome Screen and Tor Connection don't fit on 800×600 (#19324)

The top of the Welcome Screen and some buttons of the Tor Connection assistant are cut off on small displays (800×600), like virtual machines.

You can press Alt+S to start Tails.

Progress bar of Tor Connection gets stuck around 50% (#19173)

When using a custom Tor obfs4 bridge, the progress bar of Tor Connection sometimes gets stuck halfway through and becomes extremely slow.

To fix this, you can either:

  • Close and reopen Tor Connection to speed up the initial connection.

  • Try a different obfs4 bridge.

    This issue only affects outdated obfs4 bridges and does not happen with obfs4 bridges that run version 0.0.12 or later.

See the list of long-standing issues.

Get Tails 5.9

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 5.0 or later to 5.9.

    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.9 directly:

What's coming up?

Tails 5.10 is scheduled for February 21.

23 January, 2023 12:34PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What is MLOps going to look like in 2023?


While AI seems to be the topic of the moment, especially in the tech industry, the need to make it happen in a reliable way is becoming more obvious. MLOps, as a practice, finds itself in a place where it needs to keep growing and remain relevant in view of the latest trends. Solutions like ChatGPT or MidJourney dominated internet chatter last year, but the main question is: what do we foresee in the MLOps space this year, and where is the community of MLOps practitioners focusing its energy? Let’s first look back at 2022 and then explore expectations for 2023.

Help us improve Canonical’s MLOps offering and define the next steps for Charmed Kubeflow

Give us your feedback

Rewind of MLOps in 2022

2022 was the year of AI. It went from a tool used mainly for experimentation to a tool that promises successful outcomes. The Global AI Adoption Index reported an increasing percentage of enterprises that are likely to try AI, as well as more leaders that have already deployed it, especially in China, India, the UAE and Italy. Yet the challenges are far from solved, as 25% of companies still report a skills shortage or prohibitively high prices.

Advisory services and products that help your team deploy AI models and deliver value

Read more

Everyone can try AI/ML these days…

ChatGPT took a victory lap in December 2022 – almost everyone talked about it. It is an advanced chatbot, introduced by OpenAI, that is able to answer complex questions in a conversational manner. It is trained to learn what people mean from what they ask. This means it can give human-quality responses, raising various questions about how disruptive it can actually be. You should try it yourself.

Although it didn’t quite make the headlines like ChatGPT, MidJourney is a research lab whose tool generates visuals based on a description that the user provides, almost in real time. Their vision is to expand the world’s imaginative powers, and their main focus is on design, human infrastructure and AI. The image below shows what it generated based on this description:

funny, landscape, realistic, 4k, cinematic lighting, 4k post processing, futuristic, symmetrical, cinematic color grading, micro details, reddit, funny, make it less serious and funny

(Image: the landscape MidJourney generated from the prompt above.)

Kubeflow in 2022

Looking back, 2022 was a great year for MLOps. Canonical offers one of the official distributions of Kubeflow, so we naturally kept a close eye on the project. Kubeflow had two new releases, 1.5 and 1.6. The community got back together as well and, towards the end of the year, the Kubeflow Summit took place. With great sessions and working groups, use cases from companies such as Aurora, and challenges brought to the table, the event energised the community. The Kubeflow Survey 2022 also went out and shed light on the need for more documentation and tutorials, as well as improved installation and upgrades.


The year came with other big surprises for the project. It applied to the CNCF in November last year and is aiming to become an incubating project.

What does it mean for the Kubeflow community and users?

Read more

Canonical MLOps in 2022

With Charmed Kubeflow, our official distribution, Canonical was closely involved in community initiatives last year. Our MLOps experts discussed the new releases and published a new tutorial and end-to-end guides. At Canonical, we strongly believe in growing the MLOps ecosystem, so during the last year we made new integrations available, such as MindSpore.


We also launched our new AI/ML advisory lane, which offers companies the chance to develop their MLOps skills and kickstart their AI initiatives, with Canonical’s support.

What’s in store for 2023?

With a clear rise in popularity, MLOps is here to stay and will likely advance to address more complex use cases and compliance needs this year.

Start the year with an introduction to machine learning operations. Join our webinar on 15 February!

Register now

Data is at the core of any machine learning project, and protecting it should be this year’s big topic. Various communities have already brought this up, and solutions will indeed pop up as well. Protecting the pipelines where the data runs, addressing CVEs in a timely manner, offering safe data management, and monitoring models for model or data drift are just some of the topics that the industry is very likely to talk about.

On the other hand, in 2023, MLOps projects should focus on adoption and how to grow it. With documentation that is often incomplete, tutorials that are hard to follow and installation processes that take much longer than expected, there is a lot to be done, but one step at a time is enough.

For example, the Kubeflow community has already started an onboarding process for new joiners, which aims to help them get up to speed much faster (such that they know where, how and when they can contribute). While this seems small, it answers a big question that the community raised last year during the summit: “How do I get started?” Initiatives like this, as well as more focus on supporting collateral, will be welcomed, so early adopters can finally get used to MLOps.

The first step is gathering feedback!  Fill out our form below:

Your responses will be used for an industry report that will contribute to advancing the community’s knowledge. But don’t worry: your info will be kept anonymous. You can also book a meeting with Charmed Kubeflow’s Product Manager. This is your chance to learn about our roadmap,  ask questions and provide your feedback directly.

It’s a wrap…or just a beginning for MLOps in 2023

With 12 months ahead of us, MLOps has plenty of time to surprise everyone. It is, at the end of the day,  a collaborative function that comprises data scientists, DevOps engineers and IT. As the market is going to evolve, new roles are going to be added to the list. However, everyone should be focused on an ongoing goal: improving code quality, increasing the rhythm of model development and automating tasks. 

New solutions will likely appear on the market, similar to the ones we mentioned above. Enterprises will probably have higher expectations of machine learning projects, and thus of the tooling behind them. Cost efficiency and time-effectiveness will become increasingly important discussions, influencing business decisions related to MLOps.



23 January, 2023 09:43AM

January 21, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.27 Beta available for testing


Are you using Kubuntu 22.10 Kinetic Kudu, our current stable release? Or are you already running our development builds of the upcoming 23.04 (Lunar Lobster)?

We currently have Plasma 5.25.90 (Plasma 5.27 Beta) available in our Beta PPA for Kubuntu 22.10 and for the 23.04 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:



DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.



5.27 Beta packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
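
For example, assuming the beta PPA was added as above, something like:

sudo apt install ppa-purge && sudo ppa-purge ppa:kubuntu-ppa/beta

should disable the PPA and downgrade its packages to the versions available from your remaining sources.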

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.26?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical set up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel IRC channel on libera.chat if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on libera.chat
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel


21 January, 2023 11:24AM

January 20, 2023

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2023.01 + Persistence

There is a new, extra release of the (semi-)rolling line out there: Sparky 2023.01.

The next regular rolling release is planned for March, as I do one every 3 months, but I decided to publish 2023.01 MinimalGUI to let you test a new feature implemented, so far, only in Sparky rolling.

The Sparky tool that creates Live USB disks (sparky-live-usb-creator) has gotten a new feature which lets you make a live USB disk with persistence. This means you can boot the Sparky Live system from a USB disk and save your work, newly installed applications, etc. to the same USB disk.

The ‘sparky-live-usb-creator’ 0.2.1 is currently available to Sparky rolling (7) users, and for now it works only with the Sparky 2023.01 MinimalGUI iso image.

Installation/upgrade on Sparky rolling:
sudo apt update
sudo apt install sparky-live-usb-creator

Then launch from menu-> System-> Live USB Creator Persistence and follow the app.
This operation requires an extra 2 GB of hard disk space to re-create the Sparky iso image in your /tmp directory.

Note that the Live USB Creator menu entry still lets you create a Live USB disk without persistence, as before.

Also note (2) that this is an experimental feature, improved and tested over the whole of last week, but it may not be perfect yet.
Test it and report at our forums whatever you find, please.

Note (3) that only live boot menu entries with “Persistence” in their name boot with the ‘persistence’ kernel option.

Sparky live

The ‘sparky-usb-formatter’ has also been updated, so it can erase a USB disk after a persistent partition has been created for Sparky live as well.

The live system runs on Linux kernel 6.1.4 and includes updates from the Debian and Sparky testing repos as of January 20, 2023.

The Sparky 2023.01 iso image can be downloaded from the download/development page.

20 January, 2023 11:03PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Ubuntu 18.04 LTS End Of Life – keep your fleet of devices up and running


Ubuntu 18.04 ‘Bionic Beaver’ is reaching End of Standard Support this April, sometimes also known as End Of Life (EOL). This distribution of Ubuntu was installed by millions of users and powers thousands of devices. From kiosks and appliances to IoT devices and robots, 18.04 helped many companies deploy innovations to the world. As with all other Ubuntu LTS releases that reach their end of standard support, Bionic Beaver will transition to Extended Security Maintenance (ESM). This blog post will help developers and companies evaluate their options for devices currently running Ubuntu 18.04 LTS. It will also cover how you can enable ESM in case you choose to extend the support window with this service. Before we jump in, let’s cover a burning question: why do Ubuntu releases reach EOL?

Why do Ubuntu releases reach the End of Standard Support?

Every Ubuntu LTS comes with 5 years of standard support. During those five years, we provide bug fixes and security patches to more than 2,300 packages. This requires a great engineering effort from Canonical, even more so when you consider all the critical infrastructure where Ubuntu is being used today.

But our users also look forward to new releases of our operating system with the latest and greatest. So, as we release new versions of Ubuntu, we also need to reallocate our resources, and with this, we need to move older releases into the ESM period.


ESM enables continuous vulnerability management for critical, high and medium Common Vulnerabilities and Exposures (CVEs). During this period, we no longer improve the distribution, but we keep it secure. We offer ESM for the benefit of our users. Some cannot migrate and need to keep their infrastructure running reliably and securely, so we provide 5 more years of critical security coverage to those organisations. This is a paid service, as engineering time and resources are still needed to provide these updates, but it is still cheaper than the actual cost for organisations to do all of this maintenance in-house.

Now that we covered the relationship between End of Standard Support and ESM, let’s explore what comes next.

Migrate to a supported LTS distribution 

It’s never too late to start thinking about migration. Soon your 18.04 fleet will stop receiving updates, including security patches. That will put you and your end users at a security risk. So if you want to keep your devices compliant with security maintenance and running the latest and greatest software, migration is one way to go.

For device manufacturers, we advise you to have a look at Ubuntu Core. While Ubuntu Desktop and Server will fulfil their purpose for edge and IoT devices, Ubuntu Core was developed and has been optimised for these use cases. With out-of-the-box features such as OTA update control, low-touch device recovery, strict confinement, secure boot, and more, it makes it easier to deploy and manage devices. It also comes with a longer window of standard support: 10 years. This will help you avoid reading another blog about this for quite a while.

And the migration shouldn’t be painful. If you are short on resources, you can always package your application and bundle all your dependencies. We recommend using snaps as a container solution to bundle all your dependencies. Snaps won’t create another abstraction layer; they allow you to access the system’s resources through dedicated interfaces. Once your application is snapped, you can easily run it on any Ubuntu: LTS, Core, Desktop or Server. You name it.
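As a rough illustration of that workflow, here is a minimal sketch of building and testing a snap locally (the resulting file name myapp_1.0_amd64.snap is hypothetical; it depends on what you declare in snapcraft.yaml):

sudo snap install snapcraft --classic
# create a template snap/snapcraft.yaml describing your app and its dependencies
snapcraft init
# build the snap package
snapcraft
# install the locally built, unsigned snap for testing
sudo snap install ./myapp_1.0_amd64.snap --dangerous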

Can’t migrate? Get 18.04 ESM

Sometimes migration is not straightforward. Dealing with dependency changes or simply recalling devices from the field can be troublesome. While the aim will be to migrate, you might need some time. So if you need more time and want to keep devices compliant, ESM gives you 5 extra years. 

ESM is part of the Ubuntu Pro subscription. ESM provides continuous vulnerability management and patching for critical, high and medium Common Vulnerabilities and Exposures (CVEs). This means that you will keep receiving security updates for more than 2,300 packages in Ubuntu Main. Here you will find packages such as Python, OpenSSL, OpenVPN, network-manager, sed, curl, systemd, udev, bash, OpenSSH, login, libc… For the whole list of what’s included in Main, you can visit the Ubuntu Packages Search tool.

But there is more. With the release of Ubuntu Pro, you can also get security coverage for an additional 23,000 packages beyond the main operating system. These are packages in Ubuntu Universe. For example, Boost, Qt, OpenCV, PCL, python-(argcomplete, opencv, pybind11, png…), cython, eigen, GTK and FFMPEG are some of the many packages covered in Universe that are now getting security maintenance from Canonical.

Ubuntu Universe also includes well-used applications such as the Robot Operating System (ROS), where Canonical provides services such as ROS ESM.


Option 1: Purchase ESM through the Ubuntu Pro store

If you have a few units to cover with ESM, we recommend purchasing it directly from our store. ESM is part of the Ubuntu Pro subscription. Pricing for Ubuntu Pro depends on the number of devices you want to cover and the years of subscription to the service. To calculate pricing and make a purchase:

  1. Go to the Ubuntu Pro Store
  2. Select IoT and Devices Category 
  3. Add the number of devices that you want to cover
  4. Select 18.04 LTS
  5. Pick whether you want only security updates for Main, or Main and Universe.
  6. Select if you want Enterprise Support
  7. Click Buy Now

Go to the Store

Option 2: Purchase ESM through Canonical’s Embedding Programme

If you have a large fleet of devices, or you need to add support to estates that grow over time, joining Canonical’s Embedding Programme might be a better option. It will not only grant you access to the Ubuntu Pro subscription, and so to ESM, but it will also apply a beneficial discount-based model.

To join the Embedding Programme you need to get in touch with a sales representative.

Get in touch

What is included in the Ubuntu Pro subscription 

ESM is part of the Ubuntu Pro subscription. You get access to this subscription through the Embedding Programme or the Pro store. Besides getting ESM, customers can also enjoy a number of other services.

For more information about Ubuntu Pro, visit our webpage or the service description, or get in touch with one of our sales representatives.

How to enable ESM 

Security updates provided during the ESM period are accessed through a dedicated PPA. To access this PPA you need a token. Tokens will be available in your Ubuntu Pro subscription portal once you have completed the purchase of the service. Remember, the Ubuntu Pro subscription can be purchased through the Embedding Programme or directly through the Ubuntu Pro store.  

To enable ESM, you just need to follow the instructions in your welcome email: 

  1. Install the Ubuntu Pro client
  2. Attach your token to an Ubuntu machine
  3. Activate ESM 
  4. Run apt upgrade to install the available updates
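For reference, those steps map roughly onto the following commands (a sketch based on the Ubuntu Pro client; YOUR_TOKEN is a placeholder for the token from your subscription portal, and your welcome email remains the authoritative guide):

# install the Ubuntu Pro client
sudo apt update && sudo apt install ubuntu-advantage-tools
# attach this machine to your subscription
sudo pro attach YOUR_TOKEN
# activate the ESM repositories
sudo pro enable esm-infra
# install the now-available security updates
sudo apt upgrade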

For more detailed instructions please visit the Ubuntu Pro client discourse.


Enabling ESM on fleets of devices

Depending on your management infrastructure, there are different ways to enable ESM across your fleet of machines. An Ubuntu Pro subscription also gives you access to Landscape, which facilitates this process.

Landscape is a management and administration tool for Ubuntu. It allows you to monitor your systems through a management agent installed on each machine. The agent communicates with the Landscape server to update an automatically selected set of essential health metrics. It also allows you to remotely update and upgrade machines and manage users and permissions.

Using remote script execution, Landscape can interact with the Ubuntu Pro client. It can also distribute tokens in air-gapped environments. 

Learn more about Landscape with our documentation.   

Summary

As 18.04 reaches the End of Standard Support in April of 2023, companies that have deployed devices with this LTS need to take action. Staying on an EOL distribution is a security risk that companies can’t afford. While migrating to a supported LTS is our main recommendation, we understand this is not always possible. If that’s the case for your organisation, ESM gives you more time.

Get in touch if you need advice on the best path for your company. 

20 January, 2023 04:37PM

January 19, 2023

Ubuntu Blog: Cloud storage pricing – how to optimise TCO


The flexibility of public cloud infrastructure allows for little to no upfront expense, and is great when starting a venture or testing an idea.  But once a dataset grows and becomes predictable, it can become a significant base cost, compounded further by additional costs depending on how you are consuming that data.

Public clouds were initially popularised under the premise that workloads are dynamic, and that you could easily match available compute resources to the peaks and troughs in your consumption, rather than having to maintain mostly idle buffer capacity to meet peak user demands, essentially shifting sunk capital into variable operational expense.

However, it has become apparent that this isn’t necessarily true when it comes to public cloud storage. What is typically observed in a production environment is continual growth of all data sets. Data actively used for decision making or transactional processing in databases tends to age out, but needs to be retained for audit and accountability purposes. Training data for AI/ML workloads grows, allowing models to become more refined and accurate over time. Content and media repositories grow daily, and exponentially with the use of higher-quality recording equipment.

How is public cloud storage priced?


Typically there are three areas where costs are incurred.

  • Capacity ($/GB): the amount of space you use for storing your data, or the amount of space you allocate/provision for a block volume.
  • Transactional charges when you interact with the dataset. In an object storage context, these can be PUT/GET/DEL operations. In a block storage context, this can be allocated IOPS or throughput (MB/s).
  • Object storage can also incur additional bandwidth charges (egress) when you access your data from outside of a cloud provider’s infrastructure, or from a VM or container in a different compute region. These charges can even apply when you have deployed your own private network links to a cloud provider!

If in the future you decide to move your data to another public cloud provider, you would incur these costs during migration too!

Calculating cloud storage TCO

Imagine you have a dataset that’s 5 PB and you want to understand its total cost of ownership (TCO) over 5 years. First we need to make some assumptions about the dataset and how frequently it will be accessed.

Over the lifetime of the dataset we will assume that it will be written twice, so 10 PB of written data. We will also assume that it will be read 10 times, and that each object is 10 MB on average.

In a popular public cloud, object storage capacity starts at $0.023/GB, and as usage increases the price decreases to $0.021/GB. You are also charged for the transactions to store and retrieve the data. These costs sound low, but as you start to scale up and then consider the multi-year cost, they can quickly rise to significant numbers.

For the 5 PB example, the TCO over 5 years is over $7,000,000, and that’s before you even consider any charges for compute to interact with the data, or egress charges to access the dataset from outside the cloud provider’s infrastructure.
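For a rough sense of where that number comes from, here is a back-of-the-envelope sketch using the illustrative list prices above (exact tier boundaries, request prices and GB vs GiB accounting vary by provider, so treat this as an approximation rather than a bill):

# capacity: 5 PB ≈ 5,000,000 GB
# 5,000,000 GB x $0.021/GB/month x 60 months ≈ $6,300,000 for capacity alone
# writes: 10 PB / 10 MB per object = 1 billion PUT requests
# reads: 50 PB / 10 MB per object = 5 billion GET requests
# at typical per-1,000-request list prices, requests add thousands rather than millions of dollars

Capacity, not transactions, dominates the bill; once the higher first-tier capacity pricing and GiB-based billing are factored in, the five-year total passes the $7,000,000 mark quoted above.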

Balancing costs with flexibility

Is there another way to tackle these mounting storage costs, yet also retain the flexibility of deploying workloads in the cloud?

IT infrastructure is increasingly flexible, so with some planning it is possible to operate an open-source storage infrastructure based on Charmed Ceph that is fully managed by experts, adjacent to a public cloud region and connected to the public cloud via private links to ensure the highest availability and reliability. Using the same assumptions around usage as before, a private storage solution can reduce your storage costs by a factor of two to three over a 3-5 year period.


Having your data stored using open-source Charmed Ceph in a neutral location, yet near to multiple public cloud providers, unlocks a new level of multi-cloud flexibility. For example, should one provider start offering a specific compute service that is not available elsewhere, you can make your data accessible to that provider without incurring the significant access or migration costs you would face when accessing one provider’s storage from another provider’s compute offering.

Additionally, you can securely expose your storage system to your users via your own internet connectivity, without incurring public cloud bandwidth fees.

Later this quarter we will publish a detailed whitepaper with a breakdown of all the costs of both of these solutions, alongside a blueprint of the hardware and software used. Make sure to sign up for our newsletter using the form on the right-hand side of this page (cloud and server category) to be notified when it is released.

Learn more

19 January, 2023 04:10PM

Podcast Ubuntu Portugal: E230 Tu Tens Um Evento!

They say January is the saddest month of the year! That may be true for many people, but it is certainly not for the Ubuntu Portugal community: just look at this month’s agenda to see that, in Aveiro, then in Sintra and continuing in Lisbon, there is no shortage of festivities and activity. Looking at the beginning of February, it seems that history repeats itself, but we shall see! You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

19 January, 2023 12:00AM

January 18, 2023

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian 2023 KDE is available

A new version of the Serbian GNU/Linux 2023 operating system, with the KDE desktop environment, is available for download. The visual design of this year’s edition is dedicated to the National Theatre in Belgrade. Serbian comes set to the Cyrillic option, while Latin script can also be selected through the system settings, as can the Ijekavian variant for both scripts. The distribution is based on Debian (Bookworm) in its testing version, with the KDE desktop environment.

Serbian 2023, like the previous nine editions, is intended for all users who want an operating system in the Serbian language. It is also intended as a possible choice for current users of proprietary operating systems, as well as for users who cannot configure everything on their own and who have so far used Linux distributions regarded as more user-friendly. Additional screenshots can be viewed here.

In the new edition, alongside the usual programs that come with the KDE desktop environment, there is a collection of programs that will let users carry out their tasks well. All preinstalled programs are translated into Serbian. Kernel 6.0.12 is used and, compared to the previous version, support for external devices has been improved and a few new applications have been added, so the current selection looks like this:

If you are new to Linux, the installation process is simple and takes less than 10 minutes, and here you can read how to prepare the installation media. The graphical interface of the installer is set to the Cyrillic option by default, while keyboard input will be in Latin script. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. Once installed, Serbian will take up a little over 6 GB, so when partitioning it would be advisable to allocate more than 20 GB for comfortable operation.

When the freshly installed system boots, read the included text document, which contains several tips. Most importantly, the keyboard layout is switched with the Ctrl+Shift shortcut, and the configured options are: la, ћи, en. In our software repository you can find and install packages such as: teamviewer, viber, veracrypt, dropbox, yandex-disk, greatlittleradioplayer, anydesk, megasync, etc.

Finally, thanks go to all readers of these lines, to the users who have or will have Serbian on their computers, and to all the media outlets and individuals who have contributed to popularising an operating system in the Serbian language. For anyone interested in helping with promotion, banners are available for that purpose.



Alternative link

18 January, 2023 06:19PM by DebianSrbija (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Containerization vs. Virtualization : understand the differences

Over the last couple of decades, a lot has changed in terms of how companies are running their infrastructure. The days of dedicated physical servers are long gone, and there are a variety of options for making the most out of your hosts, regardless of whether you’re running them on-prem or in the cloud. Virtualization paved the way for scalability, standardization and cost optimisation. Containerization brought new efficiencies. In this blog, we’ll talk about the difference between the two, and how each is beneficial.

The difference between traditional and virtualized environment

What is virtualization?

Back in the old days, physical servers functioned much like a regular computer would. You had the physical box, you would install an operating system, and then you would install applications on top. These types of servers are often referred to as ‘bare metal servers’, as there’s nothing in between the actual physical (metal) machine and the operating system. Usually, these servers were dedicated to one specific purpose, such as running one designated system. Management was simple, and issues were easier to troubleshoot because admins could focus their attention on that one specific server. The costs, however, were very high. Not only did you need more and more servers as your business grew, but you also needed enough space to host them.

While virtualization technology has existed since the 1960s, server virtualization started to take off in the early 2000s. Rather than having the operating system run directly on top of the physical hardware, an additional virtualization layer is added in between, enabling users to deploy multiple virtual servers, each with their own operating system, on one physical machine. This enabled significant savings and optimization for companies, and eventually led to the existence of cloud computing.

The role of a hypervisor

Virtualization wouldn’t be possible without a hypervisor (also known as a virtual machine monitor) – a software layer enabling multiple operating systems to co-exist while sharing the resources of a single hardware host. The hypervisor acts as an intermediary between virtual machines and the underlying hardware, allocating host resources such as memory, CPU, and storage.

Type 1 and type 2 hypervisor comparison

There are two main types of hypervisors: Type 1 and Type 2. 

  • Type 1 hypervisors, also known as bare-metal hypervisors, run directly on the host’s hardware and are responsible for managing the hardware resources and running the virtual machines. Because they run directly on the hardware, they are often more efficient and have a smaller overhead than Type 2 hypervisors. Examples of Type 1 hypervisors include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer. 
  • Type 2 hypervisors, also known as hosted hypervisors, run on top of a host operating system and rely on it to provide the necessary hardware resources and support. Because they run on top of an operating system, they are often easier to install and use than Type 1 hypervisors, but they might be less efficient. Examples of Type 2 hypervisors include VMware Workstation and Oracle VirtualBox.
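As a quick practical aside, on a Linux host you can check whether the CPU exposes the hardware virtualization extensions that hypervisors such as KVM rely on (a simplified sketch; interpreting the output fully takes more care):

# a count above 0 means Intel VT-x or AMD-V is present
egrep -c '(vmx|svm)' /proc/cpuinfo
# shows whether the KVM hypervisor modules are loaded
lsmod | grep kvm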

Benefits and disadvantages

The biggest benefit of virtualization is, of course, the resulting cost savings. By running multiple virtual machines on a single physical server, you save money and space and are able to do more with less. Virtualization also allows for both better resource utilization as well as greater flexibility. Being able to run multiple VMs on a single server prevents some of your servers from standing idle. You can also easily create, destroy and migrate VMs between different hosts, making it easier to scale and manage your computing resources, as well as implement disaster recovery plans. 

In terms of disadvantages, virtualization does increase the performance overhead, given that it introduces an additional layer between the host and the operating system. Depending on the workload, the decrease in performance can be noticeable, unless significant RAM and CPU resources are allocated. In addition, while there is cost saving in the long run, the upfront investment can be burdensome. Virtualization also adds some level of complexity to running your infrastructure, as you do need to manage and maintain both physical and virtual instances.

What is containerization?

Containerization also allows users to run many instances on a single physical host, but it does so without needing the hypervisor to act as an intermediary. Instead, the functionality of the host system kernel is used to isolate multiple independent instances (containers). By sharing the host kernel and operating system, containers avoid the overhead of virtualization, as there’s no need to provide a separate virtual kernel and OS for each instance. This is why containers are considered a more lightweight solution – they require fewer resources without compromising on performance.

Application vs. system containers

It is worth noting that there are different types of containers: application, system and embedded. They all rely on the host kernel, offering bare metal performance and no virtualization overhead, but they do so in slightly different ways.

Application and system containers comparison
  • Application containers (such as Docker), also known as process containers, package and run a single process or a service per container. They are packaged along with all the libraries, dependencies, and configuration files they need, allowing them to be run consistently across different environments. 
  • System containers (as run by LXD), on the other hand, are in a way similar to a physical or a virtual machine. They run a full operating system and have the same behaviour and manageability as VMs, without the usual overhead, and with the density and efficiency of containers. Read our blog on Linux containers if you are curious about how they work.
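To make the distinction concrete, here is a minimal sketch contrasting the two approaches (assuming Docker and LXD are installed; the image and container names are illustrative):

# application container: one process/service per container
docker run --rm -d --name web -p 8080:80 nginx
# system container: a full Ubuntu userspace managed like a lightweight machine
lxc launch ubuntu:22.04 mysystem
# open a shell inside the running system container
lxc exec mysystem -- bash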

Benefits and disadvantages

Containers are great due to their density and efficiency – you can run many container instances while still having the benefits of bare metal performance. They also allow you to deploy your applications quickly, consistently, and with greater portability given that all dependencies are already packaged within the container. Users have only one operating system to maintain, and they can get the most out of their infrastructure resources without compromising on scalability. 

While resource efficiency and scalability are really important, running numerous containers can significantly increase the complexity of your environment. Monitoring, observability and operations of thousands of containers can be a daunting task if not set up properly. In addition, any kernel vulnerabilities in the host will compromise everything that is running in your containers.

How does this relate to the cloud?

Virtualization and containerization are both essential for any cloud. Regardless of the type of cloud (public, private, hybrid), the essential mechanism is that the underlying hardware, wherever it may be located, is used to provide virtual environments to users. Without virtualization technologies, there would be no cloud computing. When it comes to running containers in the cloud, you can typically run them directly on bare metal (container instances) as well as on regular compute instances which are technically virtualized.

What is the right choice for you?

Whether you should go for virtualization, containerization or a mix of the two really depends on your use case and needs. Do you need to run multiple applications with different OS requirements? Virtualization is the way to go. Are you building something new from the ground up and want to optimize it for the cloud? Containerization is the choice for you. A lot of the same considerations are needed when choosing your cloud migration strategy, and we delve deeper into that in our recent whitepaper.

Further reading:

https://ubuntu.com/blog/lxd-virtual-machines-an-overview

https://ubuntu.com/blog/chiselled-containers-perfect-gift-cloud-applications

https://ubuntu.com/blog/open-source-for-beginners-dev-environment-with-lxd

https://ubuntu.com/blog/open-source-cloud

18 January, 2023 09:00AM

January 17, 2023

hackergotchi for Purism PureOS

Purism PureOS

What We All Want

What we all want is pretty simple: We want individual freedom. We want peace of mind that we are safe. We want to control our own digital lives. We want to be free. The video references clips from “Why US trails China in Phone Manufacturing”, CNBC. What we don’t want is equally simple: We don’t want to be […]

The post What We All Want appeared first on Purism.

17 January, 2023 09:48PM by Todd Weaver

January 16, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 770

Welcome to the Ubuntu Weekly Newsletter, Issue 770 for the week of January 8 – 14, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

16 January, 2023 10:24PM

Ubuntu Blog: Getting started with ROS security scanning

It’s a new year, and an especially great time to reflect on the security of your robots. After all, those interested in breaching it are probably doing the same. In previous articles, we discussed ROS security by detailing practical steps for securing your robots on Ubuntu and the challenges the community faces. In this blog post, I’ll give you some strategies, tips and open-source tools you can integrate into your development process today to ramp up the security of your project.

Increase ROS security with SAST scans

Static Application Security Testing (SAST), or static analysis, is a testing method that analyses your source code to find, track and fix security issues that make your application vulnerable, before they become a real problem. It is a low-cost way to dramatically increase the quality and security of your application, without needing to compile or run it.

The main advantage of SAST is that it examines all possible execution paths and variable values, not just those invoked during normal execution. This way, it can reveal errors that may not manifest themselves for weeks, months or even years after the release of your application. In other words, you’ll catch these bugs long before someone else does.
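As a contrived illustration of the kind of issue a static analyser can flag without ever executing the code, consider this shell fragment (a toy example; a pattern-matching rule simply looks for dangerous constructs such as eval on untrusted input):

read -r user_input
# arbitrary command execution: a SAST rule can flag this line by pattern alone
eval "$user_input"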

The cycle of Static Application Security Testing (SAST) (Source)

You may wonder, though: what are the best free SAST tools for your ROS project? And how can you make them an integral part of your development pipeline? We’ll take a deeper look at these questions next.

Select the best tools for your application

As you might know, there are many tools out there, tailored to different languages and frameworks. Because each one has unique features, strengths and weaknesses, you want to combine different tools.

To get started picking the most effective one(s) for your ROS application, consider these important criteria:

  • As a prerequisite: they must support your programming language. Consider open-source tools that support at least Python and C++ in your ROS project.
  • Their effectiveness to detect vulnerabilities. For example, based on the Open Web Application Security Project (OWASP) Top Ten, or the Open Source Security Testing Methodology Manual (OSSTMM).
  • Their accuracy. Consider the False Positive/False Negative rates they have. False Positives are results that are erroneously identified in the code, while False Negatives are real issues that the tool does not pick up on. The OWASP Benchmark score measures how good these tools are at discovering and properly diagnosing security problems in applications. If the False Positive rate is too high, your team will spend too much time triaging results, while a high False Negative rate might leave you with more potentially critical undetected issues.
How we evaluate the accuracy of SAST tools (Source)

  • License cost. Some licenses are proprietary and require paying a fee. Some are fully open source, while others are free for open-source projects.
  • Integration with CI/CD tools. You want to be able to integrate scans into your development environment in an automated way, to keep track of any issues seamlessly. 
  • Interoperability of the output formats they support. How will you integrate the output of different tools for analysis? Consider those that support SARIF (Static Analysis Results Interchange Format) output natively, or for which conversion tools are available. This standard makes it straightforward to combine the output of different tools, which usually complement each other.

Which brings us to…

Consider output formats of your ROS security scans

Since you will be using multiple tools concurrently, you might want to think about how you will integrate their results to run a comprehensive analysis. Every tool offers one or more output formats, commonly JSON and XML, and their file structures can vary widely from one tool to the next. This can be an issue when the time comes to remove duplicates, triage and analyse unique results, or generate metrics (for example, how many bugs exist in each program component).

What is the alternative?

The Static Analysis Results Interchange Format (SARIF) was created to solve this problem. This JSON-based standard makes sure all relevant details for each result are presented in a predictable format. You can upload SARIF logs to DevOps tools, such as GitHub, to show issues directly in their web UI and convert the tool outputs into issues for tracking. Alternatively, you can visualise them locally in IDEs like Visual Studio Code using a specialised plugin.

The good news is, many SAST tools now support SARIF, and there are toolkits available to transform results from other formats into SARIF. For example, Coverity Scan (v.2021.03 or later), a well-known tool used with C/C++, supports the SARIF format natively. Others like Cppcheck support XML output, and provide a number of templates to reformat it for environments like Visual Studio – but not SARIF. In this case, however, you can use the SARIF Multitool SDK to convert Cppcheck output into SARIF format.

Other functionality in this SDK is worth exploring too. For example, you can merge multiple SARIF files into one, rewrite from an older SARIF version to the most recent one, extract or suppress particular results, or compare new results to a previous baseline to detect new findings. These will be handy at different stages of your security scans, and can be easily integrated into your CI/CD pipeline. Next, we’ll look at how specific tools can be integrated.

Some tools we have tried for ROS security scanning

Here are some examples of open source tools we have worked with. You can integrate these easily into your CI/CD pipeline, to automatically keep your code secure.

Semgrep

In order to run many checks at once, we are using Semgrep. It is an open-source static analysis engine that can be customised to apply groups of rules, called ‘rulesets’. For example, the `p/gitlab-bandit` ruleset, maintained by GitLab, has very similar coverage to Bandit. Bandit is a widely used open source tool that finds security bugs in Python code:

# scan with hand-picked rulesets and output sarif format natively
semgrep --config p/gitlab-bandit --config p/security-audit --config p/secrets --config p/xss --metrics=off --output semgrep_output.sarif --sarif [<path>] || true

Cppcheck, and integrations for ROS

Cppcheck is an open source analyser for C/C++. These commands run Cppcheck on your codebase, then convert the XML output into SARIF format using the SDK:  

cppcheck [<path>] --xml --xml-version=2 2> cppcheck_output.xml
# convert output to SARIF format using SDK
npx @microsoft/sarif-multitool-linux convert cppcheck_output.xml -t CppCheck -o cppcheck_output.sarif

The Cppcheck CMake integration `ament_cmake_cppcheck` is also available for your ROS 2 project (on Ubuntu 22.04 and later). It will check files with the extensions .c, .cc, .cpp, .cxx, .h, .hh, .hpp, and .hxx. You can run the check from within a CMake ament package as part of the tests, or on the command line, with any CLI options available with Cppcheck:

ament_cppcheck [<path>] [options]

IKOS, and integrations for ROS

IKOS is an analyser for C/C++ based on LLVM and developed at NASA, originally designed to target embedded systems written in C. It allows for a lot of customisation, as you can see in the documentation. Also check out ament_ikos for your ROS 2 projects. This is a Python-based wrapper tool that is used to integrate IKOS into the ament-based build and test system.

These commands build and analyse a whole project, then generate a SARIF report from a package’s output database:

ikos-scan colcon build
ikos-report --format=sarif --report-file=results.sarif <output-file-name>.db

colcon mixins

Another tool for ROS developers to be aware of is colcon mixins. Mixins are shortcuts for command line options that are tedious to write and/or difficult to remember; this repository makes common ones available for the command line tool colcon. You can also create your own mixins and contribute them upstream. To apply mixins, pass the option --mixin to the colcon verb, followed by the names of the mixins you want to use. In the example below, we’re invoking AddressSanitizer conveniently via the asan mixin to analyse C code:

colcon build --mixin=asan

As mentioned, support for SARIF is growing, and GCC 13, soon to be released, will include native SARIF output. You will then be able to use flags like:

-DCMAKE_CXX_FLAGS="-fsanitize=address -fdiagnostics-format=sarif-file"

Merge results

Now that you have the output of all your tools in the same format, use the SARIF SDK to integrate all *.sarif output into a handy single file:

npx @microsoft/sarif-multitool-linux merge $CI_PROJECT_DIR/*.sarif --recurse --output-file=integrated_scan_results.sarif

You’re all ready to review your results! 

In a future post, we will look at the process of analysing output by setting up a dashboard, triaging, and identifying the most critical issues in your codebase.

We’d like to hear from you about your ROS security efforts

As ROS Melodic and ROS 2 Foxy approach EOL in 2023, keeping your applications secure is more important than ever. We’d love to hear about how you’re securing your ROS-based software. And if you’re curious about our ROS Extended Security Maintenance offering for ROS 1 and ROS 2 distros, also get in touch with our Robotics team.

16 January, 2023 01:08PM

hackergotchi for Pardus

Pardus

Interview with IT Teacher İbrahim Paşa Akça

We visited İbrahim Paşa Akça, an IT teacher who has made major contributions to the migration in the Samsun region, at his school. We talked about the Pardus migration work in schools affiliated with the Ministry of National Education (MEB) in Samsun and the surrounding provinces, and about the Pardus Open Source Days.

Interview Video

Could you introduce yourself?

My name is İbrahim Paşa Akça. I have been working as an IT teacher at the Ministry of National Education since 2004. During that time, since 2008, I have also been giving training to IT teachers at the Ministry on the free software philosophy and Pardus.

Where did your interest in free software begin?

I am a vocational high school graduate. While studying in the computer department there, we became very curious about the distributions of that era; while exploring distributions like Gelecek Linux and Mandrake, a bond began to form. Later, during my university years and my teaching career, that bond kept growing, and it continues to grow.

There is heavy Pardus use in schools in Samsun and the surrounding provinces; in fact, almost all of the interactive whiteboards in your province run Pardus. How did you carry out this Pardus migration in your schools?

The first Pardus migration work had already started in the province of Antalya. While that migration was under way, our Provincial Director of National Education at the time said he wanted to work on moving our province’s whiteboards to Pardus as well. When he said that, I told him I could take part in the effort. Within a very short time, 15 days, we installed Pardus on the 7,200 interactive whiteboards in our province and completely removed the proprietary software. Of course, I did not do this alone; we carried out the work together with my fellow IT teachers across the province. After we did it, colleagues in other provinces who think like us and see this field the same way carried out their own migrations, and they too completed the switch to Pardus very quickly.

As this snowballing movement has spread across Turkey, we now see Etap on more than 110,000 whiteboards, and the number keeps growing. We thank all our teachers for their support.

And I thank every one of them immensely.

What was the hardest part of this migration?

The first problem we tackled during the migration was teachers’ prejudice. Until then they had only ever seen and used proprietary software, so when we suddenly took it away and installed a different operating system in the classrooms, they were wary; they thought they would be left on their own. We overcame this by preparing various guides and providing training, and they started using it, because what they use is fundamentally the same.

The second problem was the supplementary resources purchased from publishers, that is, the enriched content. Naturally, the publishers had built all their materials for proprietary software, and when Pardus suddenly came into use, none of those materials worked. But our numbers grew so large that the publishers could no longer say no; they now make their publications Pardus-compatible.

What do our teachers currently think of Etap?

Our teachers’ current views are very valuable to us, because after we completed the migration, the Provincial Directorate of National Education ran a survey to measure teacher satisfaction. More than 60% of teachers support Pardus, and around 80% say that Pardus should be used, even those who say they would struggle to use it themselves. Those results paved the way for the third-phase whiteboards to ship with Pardus preinstalled, which made us even happier.

How do students approach Pardus?

Education is a process, not a one-off event, and it is very important that students get to know Pardus during that process. If we were a company and had only migrated that one company, only the clerk or manager sitting at the computer would see Pardus. But here, there are 30 pupils in a classroom; they start first grade, see a tiger in front of them, and say, “I know this now.” Then the child grows up seeing it constantly, and at some point, if they are curious about the field, they start to explore it: how it is used, how it can be changed. After all, they practically live with that whiteboard, perhaps a third of their day. So they inevitably feel the need to learn, and because our students are more open to learning, they end up knowing more.

Together with the Samsun Provincial Directorate of National Education, TÜBİTAK ULAKBİM and you, our valued teachers, we hold the Pardus Open Source (PAK) Days, which we can now call a tradition. Your effort in this event is also considerable.

What would you like to say about the Pardus Open Source Days?

The Pardus Open Source Days are the first and only event of their kind in Turkey, because such work and training are generally aimed at university students and graduates. For the first time, we wanted to train high school students. We took 3 students from each school and, together with TÜBİTAK ULAKBİM, covered their transport and meals. The Youth Centres opened up their training facilities and a venue for us, and we delivered the training over 3 days.

Of course, our only aim there was not just for our students to learn the Pardus system, but for them to learn the open source/free software philosophy, and to meet the volunteer engineer-trainers who came from outside the province, who are very valuable to us, to talk with them and learn from them what can be done. Imagine, for example, a LibreOffice developer comes, meets the children and can say: “Look, I made these; I fixed this part.” That is an incredible thing; the students can see that real people build these things.

Additionally, on the issue of gender inequality and discrimination, which we find absurd and which should have no place in the IT world, we invite a large number of women engineers so that our students see, from high school onwards, that this is not how things are. For example, this year we had 7 classes, and 5 of the 7 trainers were women engineers. We want such a perception to vanish in students from the start.

Will PAK be held again next year?

We very much want to hold PAK next year, because the Pardus Open Source Days hold a very special place for us. But now we want to scale it up. We conceived it in our own province, developed it, and proved time and again that we can do it. Now we want to do it across all of Turkey: to invite students and teachers from every province to Samsun and train them here. We want it to become a tradition; perhaps later it can be held in every province. That is our hope, I would say.

I hope we achieve these things together, İbrahim. Thank you very much for giving us your valuable time.

Through you, we thank all our teachers devoted to Pardus, and the Provincial Directorate of National Education for its support, for their efforts towards spreading Pardus.

16 January, 2023 12:59PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Cloud optimisation: cut your 2023 cloud spending by two-thirds

Cloud optimisation enables organisations to significantly lower their cloud spending while ensuring the desired performance and necessary compliance. It’s a process that every business should adopt when choosing cloud infrastructure as a foundation for their applications. Applying cloud optimisation best practices has been shown to reduce total cost of ownership (TCO) by two-thirds under certain circumstances.

But let’s step back for a second. How exactly does cloud optimisation help you avoid high costs? What exactly does it involve? And finally, why should you care about your cloud spending at all? These are all critical questions that deserve good answers. Let’s answer them as best we can.

Cloud costs keep growing every year

One of the biggest paradoxes of cloud computing is that while leading public cloud providers have constantly been reducing their prices every year, most organisations have seen their cloud bills rise continually since their first cloud instance was launched. For example, Amazon Web Services (AWS) has reduced prices a total of 107 times since it was established in 2006. Meanwhile, in the Cloud Pricing Survey run by Canonical last year, more than 80% of the respondents said that their organisation had seen an increase in cloud infrastructure spending over the last two years.

Source: Canonical’s Cloud Pricing Report 2022

There are many reasons for this phenomenon. First of all, the spectacular success of cloud computing encouraged organisations to invest more in it. As a result, initial proof of concept (PoC) environments quickly became surrounded by developer virtual machines (VMs) and production workloads. Furthermore, as organisations keep growing, their resource demand also keeps growing: they run more applications, generate more data, etc. And finally, over time, many cloud resources usually become underutilised or even completely wasted, in precisely the same way as any other type of resource, such as computer monitors or even office space.

By providing immediate access to highly scalable compute and storage resources, public clouds are a compelling option for organisations of any size. However, blindly consuming those resources without proper cost monitoring is unsustainable. It is not uncommon to see businesses embrace public clouds and find themselves sinking in costs a few years later. This is why adopting cloud optimisation practices is so important.

What is cloud optimisation?

In short, cloud optimisation is a process of assigning the right resources to cloud workloads, taking compliance, performance and cost conditions into account. This starts with very fundamental decisions, such as choosing the right region. For example, an instance running on the East Coast might be up to 20% cheaper than the same instance running in California. Moreover, leading public cloud providers offer dozens of optimisation options, such as reserved and spot instances, which allow for significant cost savings.
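To see this variation for yourself, you can query prices directly; below is a minimal sketch using the AWS CLI (assuming it is installed and configured, and noting that the instance type and regions are arbitrary examples):

# current Linux spot price for the same instance type in two regions
aws ec2 describe-spot-price-history --instance-types m5.large --product-descriptions "Linux/UNIX" --max-items 1 --region us-east-1 --query "SpotPriceHistory[0].SpotPrice"
aws ec2 describe-spot-price-history --instance-types m5.large --product-descriptions "Linux/UNIX" --max-items 1 --region us-west-1 --query "SpotPriceHistory[0].SpotPrice"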


But optimising public cloud workloads is just the first step in the cloud optimisation process. Organisations should monitor their spending constantly and look beyond their existing cloud provider. For example, it’s been shown that adopting a hybrid multi-cloud architecture leads to cost optimisation when running workloads in the long term and at a large scale. Thus, organisations should watch their budget and be ready to migrate their workloads to whichever cloud platform is the most cost-effective at a given time.

It’s also important to mention that cloud optimisation is not a one-time project. Instead, it should rather be adopted as an ongoing process. Businesses should run recurring audits, inspect their cloud workloads, tear down inactive services and shut down corresponding VMs. They should also review their cost optimisation strategies and take reactive actions if needed. Only when it becomes a habit can cloud optimisation guarantee long-term cost savings.

Cut your 2023 cloud spending by two-thirds

Adopting the cloud optimisation process has a significant impact on organisations’ budgets, because cloud spending is relatively high these days. For example, Computer Economics / Avasant Research estimates that average spending on cloud infrastructure accounts for 5.7% of an organisation’s capital budget. At the same time, Gartner estimates that it accounts for 41% of the IT budget. Therefore, any potential savings from following cloud optimisation best practices are meaningful from a budget point of view.

The exact amount always depends on the use case, the environment being used, the workloads and their scale. Canonical customers have seen cost savings of up to two-thirds by following our official recommendations. As a result, cloud optimisation is something no organisation should hesitate to consider in their 2023 plans, especially while still wondering how to meet the new year’s budget.

Canonical’s solution was a third of the price of the other proposals we’d received. The solution is also proving to be highly cost-effective, both from CAPEX and OPEX perspectives.

Georgi Georgiev, CIO at SBI BITS.

Learn more about cloud optimisation

Now that you have a basic understanding of cloud optimisation concepts, you are probably wondering where to find more information about it. In this case, we have prepared several interesting follow-up materials for you:

  • Join our webinar on 25 January to learn how to effectively use the native features of public clouds to pay less for the same amount of resources and how to optimise costs in hybrid multi-cloud environments.
  • Read our cloud optimisation playbook to make sure you apply all available cloud optimisation techniques to your workloads so that the next invoice from your cloud service provider does not surprise you.
  • Check Canonical’s Cloud Pricing Report to get instant access to cloud list prices and service fees from leading public and private cloud providers, sample cost scenarios and exclusive commentary from industry experts.
  • Use our TCO calculator to estimate potential cost savings from migrating your workloads to a cost-effective private cloud platform.

REMEMBER: Cloud optimisation is an arduous process. Sometimes it might even require migrating some of your workloads to a different cloud platform. All of this trouble pays off in the long term, however. As a result, cloud optimisation is one of the best investments your organisation can make in the challenging market conditions of 2023. 

16 January, 2023 07:00AM

hackergotchi for BunsenLabs Linux

BunsenLabs Linux

Experimental "kelaino" package repository is offline

As announced on 21st December, the experimental "kelaino" package repository has now been taken offline.

All the up to date Beryllium packages are available from the "official" repository:

deb https://pkg.bunsenlabs.org/debian beryllium main
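If your sources still point at the old kelaino repository, switching is a matter of updating the repository line and refreshing (a sketch; the exact file name under /etc/apt/sources.list.d/ is an assumption and may differ on your install):

# after replacing the kelaino line with the official one shown above:
sudo apt update
sudo apt full-upgrade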

See the previous announcement for more details.

16 January, 2023 12:00AM

January 15, 2023

hackergotchi for OSMC

OSMC

International shipping incident

Update: 23rd January 2023 - work has commenced to fix the issue at Royal Mail and some international items that were posted previously are in transit again. We hope to resume postage for orders from Monday, 30th January. We will keep this post updated.

On 11th January Royal Mail announced that they had suffered from a cyber incident that has prevented them from exporting goods outside of the UK.

As Royal Mail are our primary shipping partner for B2C shipments, customers are affected.

We continue to receive information from Royal Mail but they have not stated when this will be resolved. As a result, we are unable to ship items internationally at this time.

International customers can continue to place orders, but we cannot guarantee a dispatch date until further notice. For those that have placed an order:

  • You can contact us if you wish to cancel your order for a full refund
  • You can enquire as to whether your order has already been dispatched
  • You can contact us if you would like us to make alternative delivery arrangements

Domestic (UK) orders will be dispatched as usual with the normal delivery timeframes.

We will update this post as we receive more information.

Thank you for your understanding

15 January, 2023 12:52AM by Sam Nazarko

January 13, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The State of IoT – December 2022

Welcome to the December edition of the monthly State of IoT series. While we ended 2021 with a rosy outlook prompted by rising unit shipments and hardware spending, 2022 ended amidst supply chain disruptions, economic sanctions and an ongoing war. Despite this, we will likely remember 2022 as a transitional year in the ever-increasing adoption of embedded and IoT devices. Cybersecurity vulnerabilities in IoT devices, developments in the smart home and a few major announcements took the headlines. Without further ado, let’s dive straight into the most prominent news across the IoT landscape from the last month of what proved to be an exciting year.


Open Source Robotics Corporation behind ROS gets acquired

Alphabet-owned Intrinsic acquired the commercial subsidiary of Open Robotics, the firm behind the popular Robot Operating System (ROS). ROS is an open-source framework helping researchers and developers build and reuse code between robotics applications. ROS is already used across numerous industries, from agriculture to medical devices, and is spreading to include all kinds of automation and software-defined use cases.

Open questions remain as to how the acquisition will affect the future of ROS. In July 2021, Intrinsic graduated from X, Alphabet’s moonshot factory, to become an independent company at Alphabet. According to Intrinsic CEO Wendy Tan White, her company’s mission is to democratise access to robotics by building a software platform for developers, entrepreneurs and businesses. As per the leadership team behind ROS, the acquisition was a conscious move to help the ROS community at scale, by leveraging resources from Intrinsic and Alphabet’s track record of support for open-source software projects.

Supply chain risks in IoT devices

The Boa web server was discontinued in 2005, but it’s still pervasive in IoT. Device vendors and SDK publishers still implement it across embedded devices, from routers to cameras. To this day, over 1 million Internet-exposed Boa web servers run on devices, serving settings pages, management consoles and sign-in screens.

The global success of the discontinued server clashes with the threats it poses, as Boa servers are affected by several vulnerabilities, from arbitrary file access in routers to information disclosure from corporate and manufacturing assets connected to IoT devices.

Vulnerable SDK components lead to supply chain risks in IoT and OT environments

A new investigation by Microsoft demonstrates how Boa’s vulnerabilities are distributed downstream to organisations and their assets in an insecure IoT device supply chain. Microsoft recommends patching vulnerable devices and reducing their attack surface, among other preventive measures. 

IoT botnet upgraded to target unpatched Apache servers

Zerobot is a Go-based botnet primarily affecting various IoT devices, including firewall devices, routers, and cameras. Zerobot operators continuously expand the malware with the latest capabilities, enabling it to add compromised systems to a distributed denial of service (DDoS) botnet.

As the above illustrates, web-exposed IoT devices are often vulnerable to exploitation by malicious actors. Cybercriminals target unpatched IoT devices exposed to the internet because their numbers keep growing.

Zerobot was recently upgraded to version 1.1, featuring novel capabilities, including new attack methods and exploits for supported architectures. The improved botnet is thus expanding its reach to different types of devices, including the ability to target vulnerabilities on unpatched Apache servers.

Microsoft’s Security Threat Intelligence team recently disclosed the new attack capabilities, including a set of recommendations to defend devices and networks against Zerobot.

Ericsson offloads IoT division

Ericsson’s Connected Vehicle Cloud is an automotive platform offering fleet management systems, OTA updates and telematics capabilities. Despite connecting more than 5.5 million vehicles, the communications service provider announced it will sell its Connected Vehicle Cloud business to Aeris. The transaction is expected to close in the first quarter of 2023, and it further includes Ericsson’s IoT Accelerator unit, a cloud-based connectivity management platform.

The sale has the potential to refocus the internal strategy at Ericsson while doubling down on Aeris’ value proposition as a leading provider of a host of services supporting IoT connectivity. Aeris offers technology expertise in low- to mid-bandwidth communication, 5G, and eSIMs for global connectivity, among their initiatives. This helps businesses bring new IoT programs to market, as well as replace 2G and 3G technologies by embracing 5G.

Amazon to support Matter on Echo devices

Amazon, Apple, Comcast, Google, SmartThings, and the Connectivity Standards Alliance came together in 2019 to develop and promote Matter, a royalty-free, IPv6-based connectivity standard defining the application layer deployed on devices over Wi-Fi, Thread and Bluetooth Low Energy (BLE).

The CSA announced the official release of the Matter 1.0 specification in October. Since the announcement, several companies have pledged to join the CSA and help set new security and reliability standards for IoT. For instance, Canonical joined the CSA in September as the first company to offer an independent Linux distribution.

Building Matter devices with Alexa today

Recently, Amazon announced it will support Matter on several Echo devices. As a founding member of and key contributor to Matter, Amazon stated it will bring broader Matter availability to more devices early next year.

InsurTech market targets the smart home

Pepper is a leading consumer IoT platform enabling corporations to deliver the latest in smart home solutions to their customers. Pepper’s Platform as a Service provides enterprises access to a robust, fully configurable IoT platform featuring integrations, cloud-based capabilities and revenue-generating services like installation and warranty. Notion, on the other hand, is a smart property monitoring system, dedicated to helping de-risk a property and reduce the complexities of property ownership.

Notion smart monitoring system

Recently, Pepper’s consumer IoT Platform and Notion combined to create a leading IoT platform company, fueled by investment from Comcast. The merger aims to accelerate growth for Pepper’s IoT Platform and enable insurance carriers to deploy seamless IoT solutions.

Z-Wave source code is open and available

Z-Wave is a mesh network and wireless technology for smart homes. After Z-Wave lost the spotlight to the release of the Matter 1.0 specification, its source code is now available to all Z-Wave Alliance members. The Alliance stated that opening the development of the protocol will enable members to contribute code, thus helping to shape its future.

Z-Wave Alliance Announces Z-Wave Source Code Project Is Complete, Now Open And Widely Available To Members

According to the leadership team at the Z-Wave Alliance, the protocol currently holds the largest certified interoperable ecosystem in the market. For instance, earlier in March, the Ecolink 700 Series Garage Door Controller was introduced as the first Z-Wave Long Range device. Whether the source code announcement will generate momentum similar to Matter’s is yet to be seen. That 2022 was the Year of the Smart Home, however, is becoming increasingly apparent.

Lexi launches third-generation platform

The Lexi platform provides a solution for enterprise customers to deploy smart home offerings and commercial IoT projects. The platform is designed for controlling smart buildings and home configurations, is interoperable with third-party devices, and claims to work across multiple home ecosystems.

After more than two years in development, Lexi recently launched its third-generation platform, including new white-label mobile apps, a white-label cloud and an agnostic, universal IoT gateway. Lexi’s multi-protocol, edge-to-cloud offering enables companies to work with leading ecosystems across major wireless protocols. IoT gateways are usually required by smart home protocols; a universal IoT gateway supposedly interconnects the different wireless protocols.

The Future of IoT is Multi-Protocol (even with Matter!)

It will be interesting to monitor whether the LEXI IoT Platform will deliver on the promise of easing the IoT experience by connecting products across ecosystems.

Next-gen satellite IoT data service is available

Iridium is a global satellite communications company that provides access to voice and data services. Iridium’s satellite constellation creates a global mesh of coverage over the planet. Its satellites are cross-linked to provide reliable, low-latency, weather-resilient connectivity unmatched by other communications providers.

Recently, Iridium introduced a next-generation data service, aiming to simplify the development of satellite IoT services. Iridium Messaging Transport (IMT) is a two-way cloud-native networked data service enabling the addition of satellite connections to existing or new IoT solutions. IMT enables Iridium terminals to send and receive data over the Iridium Network utilising industry-standard terrestrial IoT and data transaction methods.

Being connected everywhere is not only expected, but essential. Yet vast parts of our world are still outside the bounds of traditional communications technologies. With the addition of IoT services, it will be exciting to see whether Iridium will succeed in delivering reliable, global connectivity.

LiDAR maker files for bankruptcy

Quanergy Systems is a provider of LiDAR sensors and smart 3D solutions for IoT and automotive applications. Quanergy’s smart LiDAR products enable enterprises to leverage real-time, 3D insights in multiple industries, including industrial automation and smart cities. Last month, Quanergy announced an industry-first 3D LiDAR movement-based false alarm reduction solution. The latest Qortex DTC 3D aimed to improve classification accuracy and reduce critical infrastructures’ physical security costs.

Despite the new release, this month Quanergy filed for bankruptcy and is now looking for a buyer. Quanergy cited volatile capital market conditions, and indeed the whole LiDAR industry has been going through a rough financial period. Whether innovations and growth opportunities in LiDAR sensors for IoT resume in 2023 remains to be seen.

Stay tuned for more IoT news

We will soon be back with next month’s summary of IoT news. Meanwhile, join the conversation in our IoT Discourse to discuss everything related to IoT and tightly connected, embedded devices. 

Further reading

Need help choosing an embedded Linux distribution? Get guidance here
Discover how to accelerate industrial compute with Ubuntu on Intel processors

13 January, 2023 02:40PM

January 12, 2023

hackergotchi for Volumio

Volumio

Hi-fi journalist Hans Beekhuyzen reviews the Volumio Integro

Well-known hi-fi journalist Hans Beekhuyzen just released a very detailed and enthusiastic review of the Volumio Integro streaming amplifier. A must-see video for anyone who is considering the Integro! 

#hifi #futurefi #audiophile #music #stereo 

The post Hi-fi journalist Hans Beekhuyzen reviews the Volumio Integro appeared first on Volumio.

12 January, 2023 04:33PM by Graham

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E229 Actualidades Múltiplas

Another typical episode full of other stuff and a bit of Ubuntu! In it: corrections to the previous episode, notions of electrical engineering from 1992 secondary school, a new instalment in the saga “Carrondo and the note-taking apps”, two new podcasts of dubious origin but with very sound principles, some useful tips on home automation, and a packed agenda. You know the drill: listen, subscribe and share!

Support

You can support the podcast through the Humble Bundle affiliate links: when you use these links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get the whole bundle for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think it is worth well over 15 dollars, so if you can, pay a bit more, since you have the option of paying whatever you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

12 January, 2023 12:00AM

January 11, 2023

hackergotchi for Purism PureOS

Purism PureOS

Spreading Awareness about Purism in 2022

2022 has been a phenomenal year, full of ups and downs (thankfully more ups than downs!). For Purism, we have seen several milestones, none of which would have been possible without the support of our team, customers, supporters and investors. I am writing this post to highlight the three major achievements of the past year, […]

The post Spreading Awareness about Purism in 2022 appeared first on Purism.

11 January, 2023 08:26PM by Yavnika Khanna

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: ASRock Industrial Partners with Canonical for Ubuntu-Certified Industrial Platforms

Taipei, Taiwan (Jan. 11, 2023) ASRock Industrial, the leading manufacturer of Edge AIoT solutions, announces a partnership with Canonical to certify Ubuntu on its industrial systems and motherboards. The awaited collaboration allows ASRock Industrial to provide Ubuntu-certified devices with all Ubuntu functionality and long-term support with security updates from Canonical. Through extensive testing and validation, the new iEP-5000G Industrial IoT Controller is now an Ubuntu-certified platform. With the internationally recognised certification, customers can gain confidence in products’ seamless integration with Ubuntu while accelerating the time-to-market of application development. 

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh5.googleusercontent.com/d_OaC4hDARoXmzLjTD8qA4w8z0wFHgjF_H0_Y_XbVGyHqYTIw4IEVsOwaIcYQNmflgE1FCkeESppWtnlu2rviFG0Ap5ReL8emXYn9th6qOBEwnSewCtOnIJ0jhSZtL16jVXd6YwmFSdYOzscGKf2sDQMgPb3YU87k74xjyJK7VIREiM8vfUz4lM-gBrP" width="720" /> </noscript>

Alongside the rapid growth of Edge AIoT and its adoption in the open-source community, the Ubuntu operating system (OS) is now one of the most popular OSs worldwide, extensively used in applications dedicated to developing and verifying AIoT solutions. Choosing Ubuntu for your next machine learning or AI project offers various benefits: it is open-source, enables fast AI model training, has significant community support, and receives the latest updates with solid security.

Through the collaboration with Canonical, ASRock Industrial’s newest iEP-5000G Industrial IoT Controller is now an Ubuntu-certified platform. It combines high computing power with flexible I/Os and expansion options in a compact and rugged design, serving as an edge controller and IoT gateway in various Edge AI applications, including smart manufacturing, process automation, and smart poles in smart cities. The Ubuntu-tested and validated hardware enables long-term support with up to 10 years of security updates from Canonical, delivering a reliable, best-in-class customer experience.

“ASRock Industrial and Canonical work together to provide Ubuntu certified devices and ensure developers have the best out-of-the-box Ubuntu experience,” said Taiten Peng, IoT Field Engineer of Canonical. “In response to the booming AIoT market, it is our goal that through an extensive testing and review process, to support the open-source community with a safe and smooth operation. With maintenance and security updates guaranteed by Ubuntu Certification, Canonical looks forward to seeing the growth and innovation of Edge AIoT solutions brought by ASRock Industrial customers.” 

“The new partnership with Canonical in the growing global market of Edge AIoT enables us to provide better service and flexibility to our customers and speeds up the development and time-to-market process of their solutions,” said James Lee, President of ASRock Industrial. “ASRock Industrial is looking forward to the next steps stemming from this great opportunity to combine our products and solutions with Ubuntu operating system in the Edge AIoT proliferation throughout major industry verticals.” 

For further information regarding ASRock Industrial’s Ubuntu-certified devices, visit our website at www.asrockind.com.

About ASRock Industrial

ASRock Industrial Computer Corporation was established as an independent company in July 2018, focusing on motherboards, edge computers and other products for the manufacturing, business and retail industries. It is the world’s leader in industrial PC motherboards, with customers located around the globe. Previously, it had been a business unit of ASRock Inc. (est. 2002), set up in 2011.
By becoming an independent company, ASRock Industrial can devote all its resources to B2B activities. Our vision is to co-create an intelligent world, encompassing ASRock Industrial’s core focus on the CARES (Commerce/Automation/Robot/Entertainment/Security) industries. With a central R&D design team making up almost 50% of total staff, ASRock Industrial has the resources to develop reliable, leading-edge products for your business needs. All products can be bought off-the-shelf or customised to the demands of OEMs/ODMs.

About Canonical

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars, and advanced robots. Canonical provides enterprise security, support, and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

Ubuntu certified program – ensuring long term support and reliability

Canonical’s Ubuntu certified program is the top choice for differentiation, reliability and product visibility. Ubuntu certified hardware has passed an extensive testing and review process, ensuring that Ubuntu runs well out of the box. It is a guarantee of quality, functionality and maintenance. Canonical also provides continuous regression testing throughout the Ubuntu release life cycle and guarantees 5 years of maintenance updates plus 5 years of expanded security updates through an Ubuntu Pro subscription.
For more information on Ubuntu Certified program and Ubuntu Pro, visit Ubuntu website at www.ubuntu.com.

11 January, 2023 02:41AM

January 10, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: DIY chiselled Ubuntu: crafting your own chiselled Ubuntu base image

In a previous post, I explained how we made our Ubuntu image 15 times smaller by chiselling a specific slice of Ubuntu for .NET developers. In this blog, I will provide step-by-step instructions on customising your chiselled Ubuntu base images for any use case.

  • Chiselled Ubuntu containers combine Distroless and Ubuntu to create smaller, more secure containers.
  • The reduced size of the containers reduces the overall attack surface. Combined with the support and content quality from the Ubuntu distribution, chiselled Ubuntu is a significant security improvement.
  • Chisel provides a developer-friendly CLI to install slices of packages from the upstream Ubuntu distribution onto container filesystems.
That’s what chiselled Ubuntu containers could look like… well, if you ask the DALL-E2 AI.

I don’t believe in a perfect container base image anymore. I remember thinking that Google’s Distroless base was pretty close, but it turned out that the perfect base image would in fact be FROM scratch: only exactly what you need, installed from a popular, well-maintained and supported Linux distribution. Here’s how you can build your own.

Step 1: Build Chisel with Docker

To build chiselled Ubuntu images, you first need a chisel. The chisel package slicing tool used to craft chiselled Ubuntu base images is a Go application that currently doesn’t provide pre-built releases. Therefore, you’ll need to build it using the Golang SDK.

I provided a 20-line Dockerfile that shows how to build Chisel and package it as a container image using Docker, the Go SDK and Chisel itself! Once built, the output will be a chiselled Ubuntu-based chisel image of less than 16MB, which is excellent proof of the effectiveness of chiselled Ubuntu images. The final chisel OCI image contains a custom chiselled Ubuntu base (mostly Glibc and CA certificates) and the Go-compiled Chisel tool, ready to be used in our future container builds.
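That Dockerfile isn’t reproduced in this post, but the following minimal sketch captures the same approach, assuming Chisel’s upstream repository at github.com/canonical/chisel and its cut --release interface; the Go version, paths and slice list are illustrative, not the exact file:

# build-chisel.dockerfile (illustrative sketch)
FROM golang:1.19 as builder

# Fetch and compile Chisel from its upstream repository
RUN git clone https://github.com/canonical/chisel /src
WORKDIR /src
RUN go build -o /usr/local/bin/chisel ./cmd/chisel

# Use the freshly built Chisel to cut a minimal root filesystem
# (mostly glibc and CA certificates) for the final image
RUN mkdir -p /staging && chisel cut --release ubuntu-22.04 --root /staging \
    base-files_base ca-certificates_data libc6_libs

FROM scratch
COPY --from=builder /staging/ /
COPY --from=builder /usr/local/bin/chisel /usr/bin/chisel
ENTRYPOINT ["/usr/bin/chisel"]

Building and tagging it locally (docker build . -t chisel:22.04 -f build-chisel.dockerfile) gives you the chisel:22.04 image referenced in the next step.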


Step 2: DIY chiselled Ubuntu

To create your own chiselled Ubuntu base image, you will start with a FROM scratch base image and add the necessary chiselled Ubuntu bits from your selected package dependencies.

Inspired by Google’s Distroless base image, for this example, I chose the following package slices from the Ubuntu distribution: base-files_base, base-files_release-info, ca-certificates_data, and libc6_libs.

To do this, use the following 5-instruction Dockerfile:

# chiselled-base.dockerfile

# "chisel:22.04" is our previous "chisel" image from Step 1
# we built and tagged it locally using the Docker CLI
FROM chisel:22.04 as installer

WORKDIR /staging
# Use chisel to cut out the necessary package slices from the
# chisel:22.04 image and store them in the /staging directory
RUN ["chisel", "cut", "--root", "/staging", \
    "base-files_base", \
    "base-files_release-info", \
    "ca-certificates_data", \
    "libc6_libs" ]

# Start with a scratch image as the base for our chiselled Ubuntu base image
FROM scratch
# Copy the package slices from the installer image
# to the / directory of our chiselled Ubuntu base image
COPY --from=installer [ "/staging/", "/" ]

Once you have created this Dockerfile, you can build your new chiselled Ubuntu image using the command: docker build . -t chiselled-base:22.04 -f chiselled-base.dockerfile


Your custom chiselled Ubuntu base image should be around 5MB and is ready to run many C/C++, Golang, or other dynamically linked self-contained programs. You can test it using the provided sample Dockerfile that layers a C program on top of your new base image.
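That sample Dockerfile isn’t reproduced here either; a minimal sketch of such a test, assuming a trivial hello.c sitting in the build context (both file names are hypothetical):

# hello.dockerfile (illustrative sketch)
# Stage 1: compile a small, dynamically linked C program on full Ubuntu
FROM ubuntu:22.04 as build
RUN apt-get update && apt-get install -y build-essential
COPY hello.c /hello.c
RUN gcc -o /hello /hello.c

# Stage 2: run it on the chiselled base built above, which carries
# just enough (glibc and the dynamic loader) for it to start
FROM chiselled-base:22.04
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]

Building and running it (docker build . -t chiselled-hello -f hello.dockerfile, then docker run --rm chiselled-hello) should confirm that the roughly 5MB base carries everything such a program needs.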

Step 3: Add SSL support for your base image

If your application requires SSL support, you can easily add it to your chiselled Ubuntu base image by adding the following 30 characters to your previous Dockerfile:

# chiselled-ssl-base.dockerfile

# "chisel:22.04" is our previous "chisel" image from Step 1
# we built and tagged it locally using the Docker CLI
FROM chisel:22.04 as installer
WORKDIR /staging
RUN ["chisel", "cut", "--root", "/staging", \
   "base-files_base", \
   "base-files_release-info", \
   "ca-certificates_data", \
   "libc6_libs", \
   "libssl3_libs", \
   "openssl_config" ]

FROM scratch
COPY --from=installer [ "/staging/", "/" ]

To build your new chiselled Ubuntu base image with SSL support, use the command: docker build . -t chiselled-ssl-base:22.04 -f chiselled-ssl-base.dockerfile


Your new base image with SSL support should be less than 11 MB and is ready for use in applications that require SSL.

This simple process allows you to easily add and remove package dependencies and customise your chiselled Ubuntu base image to fit your specific needs and requirements. For more examples and use cases, including creating extra package slices, check out our examples git repo.

Conclusion

Chiselled Ubuntu container images offer the benefits of a well-known and well-maintained Linux distribution combined with the advantages of ultra-small Distroless-type container images, offering a secure and efficient foundation for building and deploying containerised applications.

So why not try chiselled Ubuntu container images and see the benefits for yourself? As they say, the proof is in the pudding – or in this case, the size of your container image!

10 January, 2023 08:07AM

January 09, 2023

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 769

Welcome to the Ubuntu Weekly Newsletter, Issue 769 for the week of January 1 – 7, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

09 January, 2023 10:00PM by guiverc

hackergotchi for Pardus

Pardus

Pardus 21.4 Education Edition Released

With the new Pardus Education Edition, prepared for our valued teachers and students, we have gathered in one place all the essential applications for education, multimedia and programming. Built on the Pardus 21.4 GNOME operating system distribution, Pardus 21.4 Education Edition not only includes the applications our teachers and students need, but also ships with the certificate and browser configurations used in schools by the Ministry of National Education (MEB), making it easier to use in the computer labs of MEB-affiliated schools.

With this new release, our students and teachers can use the Pardus ETAP edition on interactive whiteboards while running our Pardus 21.4 Education Edition on their desktop and laptop computers.

You can download Pardus 21.4 Education Edition at https://www.pardus.org.tr/surumler/ and install it on your computer.

The main applications included in Pardus 21.4 Education Edition:

Education

  • Cantor
  • Drawio
  • Gbrainy
  • GCompris
  • GeoGebra
  • Gperiodic
  • KAlgebra
  • Kalzium
  • KBrunch
  • KDE Marble
  • KGeography
  • Kig
  • KStars
  • PardusKalem
  • Periodic Table
  • Chess
  • Scratch
  • Stellarium
  • Step
  • Taquin
  • Tux Math
  • Tux Guitar
  • ZeGrapher

Multimedia

  • AppImage Launcher
  • Aseprite
  • Audacious
  • Blender
  • Bitmap to Component Converter
  • Celluloid
  • Chromium Web Browser
  • Color Picker
  • Deluge
  • Darktable
  • Eeschema
  • Flameshot
  • FreeCAD
  • Fritzing
  • Foliate
  • ImagOP
  • GerbView
  • Glade
  • Google Chrome
  • Google Earth Pro
  • gThumb
  • Kazam
  • KDE Connect
  • Kdenlive
  • KiCad
  • Krita
  • MPV Media Player
  • MuseScore 3
  • OpenShot Video Editor
  • OBS Studio
  • PCB Calculator
  • PCBnew
  • Peek
  • Cheese
  • Pinta
  • QMPlay 2
  • Qshutdown
  • Raw Therapee
  • Scribus
  • Shotcut
  • SMPlayer
  • System Profiler and Benchmark
  • Ultimaker Cura
  • Zoom

Programming

  • Android Studio
  • Apache NetBeans
  • Arduino IDE
  • Atom
  • Bless Hex Editor
  • Btop
  • Builder
  • DBeaver
  • Devhelp
  • Eclipse Installer
  • FileZilla
  • GHex
  • Godot Engine
  • gWakeOnLAN
  • Boxes
  • Oracle VM Virtualbox
  • Processing
  • PyCharm CE
  • Remmina
  • Spyder
  • SQLite Studio
  • Squeak
  • Sublime Text
  • Sysprof
  • Unity Hub
  • Veyon Master
  • VSCode
  • VSCodium
  • Ventoy
  • Wine 7.0
  • X11VNC Server

09 January, 2023 01:48PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Chiselled Ubuntu containers: the benefits of combining Distroless and Ubuntu

Last August, we announced 6 MB-size Ubuntu base images designed for self-contained .NET applications — we called them “chiselled Ubuntu”. How did we make our Ubuntu base images 15 times smaller? And how can you create your own chiselled Ubuntu images?

In this blog, I explain the idea behind Distroless container images, which inspired us to create chiselled Ubuntu images — adding the “distro” back to distro-less! In a follow-up blog, I will then provide a step-by-step example of how you can create your own chiselled Ubuntu base container images, built for your specific needs.

At-scale comparison of popular base images’ sizes (compressed and uncompressed)

Introduction to Distroless container images

Thanks to the widespread use of container images, developers now have an easy way to package an application and its dependencies into a self-contained unit compatible with any platform supporting the “OCI” standard (for example, Docker, Kubernetes or one of The 17 Ways to Run Containers on AWS). Container images make it easy to package and run applications in a consistent environment without worrying about differences in the host operating system or runtime environment.

Distroless container images are ultra-small images that only include an application and its runtime dependencies, without additional libraries or utilities from a Linux distribution. This makes them smaller and more secure than traditional container images, which often include many libraries and utilities the application doesn’t need. In particular, traditional images often have a package manager and shell that give them their “look and feel”. By contrast, we could call them “distro-full”.

Minimal and mighty: the benefits of Distroless container images

Smaller container images have a de facto smaller attack surface, decreasing the likelihood of including unpatched security vulnerabilities and removing opportunities for attackers to exploit. But this probabilistic approach needs to consider how well-maintained the remaining content is. A large image with no CVEs and regular security updates is safer than an ultra-small unstable unmaintained one.

The ultimate security of a containerised application depends on various factors, including how it is designed and deployed and how it is maintained and updated over time. Using a well-maintained and supported Linux distribution like Ubuntu can help improve the security of containerised applications.

Additionally, smaller container images can save time and resources, especially in environments with limited storage capacity or where many container images are being used.

The best of both worlds: introducing Chiselled Ubuntu container images

Chiselled Ubuntu is a variation of Distroless container images built using the packages from the Ubuntu distribution. Chiselled Ubuntu images are carefully crafted to only fit the minimum required dependencies. They are constructed using a developer-friendly package manager called “Chisel”, which is only used at build time and not shipped in the final image. This makes them smaller and more secure than traditional Ubuntu container images, which often include many additional libraries and utilities.

Chiselled Ubuntu images inherit the advantages of the Ubuntu distribution: regularly updated and supported, offering a reliable and secure platform for creating and operating applications. On the other hand, they suppress the downsides of using a “distro-full” image when shipping to production.

“Breaking the Chisel” – how Chisel works

Chisel uses an open database of package slices, which supersedes the Debian packages database with specific file subsets and edited maintainer scripts for creating ultra-small runtime file systems. Chisel is a sort of “from-scratch package manager” that creates partial filesystems that just work for the intended use cases. The information contained in a package slice is what image developers used to define manually at the image-definition level when crafting Distroless-type images. With Chisel, community developers can now reuse this knowledge effortlessly.

Illustration of Package slices, reusing upstream Ubuntu packages information
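To give a concrete sense of the idea, here is a hypothetical sketch of what a slice definition for libc6 might look like; the field names and glob patterns are illustrative assumptions, not the project’s authoritative schema:

# Hypothetical slice definitions for the libc6 package
package: libc6

slices:
  # Only the runtime shared libraries -- no docs, locales or utilities
  libs:
    contents:
      /lib/*-linux-*/ld-linux-*.so.*:
      /lib/*-linux-*/libc.so.6:

A chiselled image then requests such slices by name (libc6_libs), and Chisel copies only the listed paths into the target filesystem.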

Chiselled Ubuntu container images are a new development that offers many benefits, including a consistent and compatible developer experience. They are slices of the same libraries and utilities in the regular Ubuntu distribution, making it easy to go from using Ubuntu in development to using chiselled Ubuntu in production. As a result, multi-stage builds work seamlessly with chiselled Ubuntu images.

Next steps: creating your own chiselled Ubuntu base container images

In conclusion, chiselled Ubuntu container images combine the benefits of Distroless containers with those of Ubuntu to create smaller, more secure containers that are easier to use. In this blog, I have explained the idea behind Distroless containers and introduced the concept of chiselled Ubuntu images. In the next blog, I will provide a step-by-step guide for creating chiselled Ubuntu base container images built for your specific needs. Ready? Keep reading!

That’s what chiselled Ubuntu containers could look like… well, if you ask the DALL-E2 AI.

09 January, 2023 09:14AM

hackergotchi for Purism PureOS

Purism PureOS

Phosh 2022 in Retrospect

I wanted to look back at what changed in phosh in 2022 and figured I could share it with you. I’ll be focusing on things very close to the mobile shell; for a broader overview, do watch Evangelos’ upcoming FOSDEM talk in February 2023. Some numbers: we’re usually aiming for a phosh release at the end of […]

The post Phosh 2022 in Retrospect appeared first on Purism.

09 January, 2023 02:05AM by Guido Günther

January 08, 2023

hackergotchi for Maemo developers

Maemo developers

Back to Maemo!

New year, new job. After leaving Canonical I’m back to working on the same software platform I started working on back in 2006: Maemo. Well, not exactly the vanilla Maemo, but rather its evolution known as Aurora OS, which is based on Sailfish OS. This means I’m actually back to fixing the very same bugs I introduced back then when I was working at Nokia, since a lot of the middleware has remained the same.

At the moment OMP (the company developing Aurora OS) is mostly (or even exclusively, AFAIK) targeting business customers, meaning corporations such as the Russian postal service and the railway company, whereas the consumer market is seen as something in the far-away future. That’s just in case you were curious whether there are any devices on sale with Aurora OS.

I should also explain why I've refused several very well paying job opportunities from Western companies: it's actually for a reason that has been bothering me since last March, and it's a very simple one. The fact is that because of the sanctions against Russia I already had to change bank once (as the one I was using fell under sanctions), and in these months I've always been working with the fear of not being able to receive my salary, since new sanctions are introduced every month and more and more banks are being added to the blacklist. That's why I've restricted my job search to companies having an official presence in Russia; and to my surprise (and from some point of view, I could even say disappointment) the selection and hiring processes were so quick that I received three concrete offers while I was still working my last weeks at Canonical, and I joined OMP on that very Monday after my last Friday at Canonical.

I mean, I could have rested a bit, at least until the Christmas holidays, but no. ☺ Anyway, I'm so far very happy with my new job, and speaking Russian at work is something totally new for me, both challenging and rewarding at the same time.


08 January, 2023 07:48PM by Alberto Mardegan (mardy@users.sourceforge.net)

hackergotchi for Ubuntu developers

Ubuntu developers

Scarlett Gately Moore: Debian: Coming soon! MycroftAI! KDE snaps update.

I am excited to announce that I have joined the MycroftAI team in Salsa and am working hard to get this packaged up and released in Debian. You can track our progress here:

https://salsa.debian.org/mycroftai-team

Snaps are on temporary hold while we get everything switched over to core22. This includes the neon-extension, which requires merges and store requests to be honored. Hopefully folks are returning from holidays and things will start moving again. Thank you for your patience!

I am still seeking work. If you or anyone you know is willing to give me a chance to shine, you won’t regret it; life has leveled out and I am ready to make great things happen! I admit the interviewing process is much more difficult than in years past, so any advice here is also appreciated. Thank you for stopping by.

08 January, 2023 03:31PM

hackergotchi for Mobian

Mobian

A look back in the mirror... And a glimpse of the future!

2022 has been an extremely busy year for Mobian developers, and a lot has happened throughout this period. As we’re entering a new development cycle year, now is a good time to look back at what we achieved in 2022.

Foundation work

One of our goals has always been for Mobian to slowly dissolve into Debian. As such, we aim to move packages from the downstream Mobian archive into the upstream Debian repositories.

Summer migration

As a consequence, we decided to move all the source code used in Mobian from gitlab.com to Salsa, Debian’s own GitLab instance, during last summer. With the exception of non-free firmware blobs of unclear origins, Mobian is now fully maintained within Salsa’s Mobian-team group. It should be noted that, thanks to GitLab’s export/import features, this important migration has been mostly painless!

Feeding the Debian package servers

We also welcomed the contributions of Debian developers who helped us both for improving Mobian and for moving our downstream packages to Debian upstream. Together, we were able to push a great number of new packages to the Debian archive, including (but not limited to):

This past year also saw us upload calamares-settings-mobian and plymouth-theme-mobian to Debian, making those the first Mobian-specific packages to make it into the Debian archive!

Less downstream tweaks

Over the past year, one of our main areas of work was getting rid of as many custom scripts and tweaks as possible and relying on existing upstream solutions instead.

A good example of such improvements is the switch to systemd services for growing the root filesystem on initial boot: while previously relying on cloud-initramfs-tools and a downstream initramfs script, we now hand this task over to systemd-repart and systemd-growfs.
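For readers unfamiliar with those units: systemd-repart grows the partition according to a small declarative drop-in, and systemd-growfs then resizes the filesystem at boot. A minimal sketch of the kind of configuration involved (Mobian’s actual files may differ):

# /etc/repart.d/50-root.conf (illustrative sketch)
# Ask systemd-repart to grow the root partition into the
# remaining free space on first boot
[Partition]
Type=root

The matching filesystem resize is then handled by systemd-growfs, typically enabled via the x-systemd.growfs mount option on the root filesystem.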

We also got rid of device-specific kernel cmdline parameters, as we can now rely on the device tree for providing this information, such as the default serial console; similarly, by working with upstream to automatically compute display scaling in phoc, we could drop our downstream, per-device phoc.ini files.

Overall, those changes allowed us to get rid of most of our per-device tweaks, to the point where we now barely need any. This not only makes it easier to envision out-of-the-box Debian support for the devices we currently work on, but will also make new device bring-up considerably easier!

Being good FLOSS citizens

Over the past year, Mobian developers have also tried to fix issues and add features upstream rather than carrying patched packages. Notable upstream projects we contributed to in 2022 include, among others:

We also keep maintaining software that is an important part of the mobile Linux ecosystem, such as eg25-manager and callaudiod. Lately, we published two additional projects, not only because we felt they were needed for Mobian, but also because they might be useful to other distros and users.

Those projects are droid-juicer (more on this one below) and phog, which is a GUI login/session startup application aimed specifically at mobile devices (you can think of it as “GDM for phones”). It relies on greetd for performing the actual session startup, so phog doesn’t interact with pam or systemd directly.

Bringing a graphical login manager to mobile devices allows our users to have multiple user accounts instead of the single (and so-far hardcoded) mobian user. They can also use phog to select which graphical environment they want to start, so they could very well set up one user for work running Phosh and another user for their everyday life running SXMO, for example. Finally, phog ensures the GNOME keyring is unlocked when the session is started, so we can get rid of the annoying “keyring password” dialogs on session startup :)
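Since phog delegates the actual session startup to greetd, wiring it up is mostly a matter of pointing greetd at it. A minimal sketch of what such a configuration could look like (the VT number and user name are assumptions; Mobian’s packaging may differ):

# /etc/greetd/config.toml (illustrative sketch)
[terminal]
vt = 1

[default_session]
# Run phog as the greeter; it then asks greetd to start the
# user's chosen session (Phosh, SXMO, ...) after login
command = "phog"
user = "_greetd"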

Not quite there yet

Unfortunately, time is not extensible: as volunteers we can’t spend as much time working on Mobian as we’d like, meaning we sometimes have to prioritize things, and delay some of the work we had hoped to achieve.

This is exactly what happened with the universal images that we hoped to implement in 2022, so they could be ready, tested and debugged come 2023, and finally be made our new default during the 1st quarter of this year.

To this effect, we started working on foundational pieces of software, starting with tweakster, a utility to manage device-specific configuration at runtime. It, in turn, gave birth to droid-juicer, a tool designed to extract proprietary firmware from vendor partitions on Android devices: that way, we no longer have to package non-free firmware for those devices ourselves, a task which increases our workload while decreasing the chances of getting such devices properly supported by vanilla Debian.

The good news is that the initial release of droid-juicer was uploaded to Debian before the year ended; the less-good news is that tweakster, and consequently universal images, have been delayed by this work and will therefore have to wait for the Trixie development cycle. However, we might have a few more tricks up our sleeves to make at least some of it happen earlier ;)

Community vitality

Finally, 2022 showed once again the mobile Linux community’s vitality through numerous events and collaborations. In particular, FOSDEM 2022 was the first edition to include a “FOSS on Mobile Devices” devroom, which ended up being incredibly successful.

It went so well, actually, that this year’s (in-person) FOSDEM will feature both the devroom and a “Linux on Mobile” stand where you’ll be able to talk with members of several projects, including Mobian, postmarketOS, UBPorts, Sailfish OS and more!

We also agreed with other mobile distros to switch to Tow-Boot for supported devices, eventually leading PINE64 to ship the PinePhone Pro with this bootloader factory-flashed. We expect such cross-distro collaboration to keep covering more areas over time, and we trust this, along with the great relationship we have with developers from postmarketOS and Purism, will lead to even more spectacular improvements to the ecosystem we’re shaping together.

We’re also seeing two important trends both in the market and community, starting with new manufacturers joining the dance:

  • Juno Computers released an Intel-based Linux tablet shipping with either Mobian or Manjaro as the default OS
  • FydeOS announced its FydeTab Duo, a 12" tablet based on the Rockchip RK3588 flagship SoC; although it comes with FydeOS, a ChromiumOS derivative, the tech specs page mentions optional compatibility “with other open-source operating systems, including popular Linux distributions”

Another trend is that the PinePhone Pro is being widely adopted as a development platform for showcasing various projects, as can be seen from CodeThink’s recent work on a mobile version of GNOME OS.

Both those trends attract more users and developers to our platform(s), improving the likelihood that someone (who might just be you!) will work on solving your particular problem in a way that will benefit the whole community.

Our plans for 2023

With the Bookworm freeze process starting in just a few days, this year will be very special regarding the way we work on Mobian: uploads to Debian will likely slow down starting February 12th (soft freeze) and come to a near-halt a month later (hard freeze). This implies we will then only be able to focus on Mobian-specific packages until Bookworm is finally released.

Keep moving while frozen

Fortunately (or not?), there are always a lot of non-packaging tasks pending, so we have at least some idea of how we’re going to spend our time: not working on Mobian/Debian packages means we have more time to hunt down bugs and improve overall system stability.

This will also be a good opportunity to finally get to the upstream contributions we’ve been holding back for quite some time. This includes, for example, getting Tow-Boot to support the PineTab and Librem 5 (no promises though), or pushing more of our kernel patches upstream.

We’ll also have more time to work on new hardware support, the PineTab 2 and FairPhone 4 being good candidates for official Mobian support.

Mobian Stable

With the release of Bookworm in 2023, the first Mobian stable release will finally happen! This version will follow the same release cycle as Debian, providing only bugfix and security updates. Most of those will come directly through Debian, including for the Mobian-specific packages we still hope to get accepted before the soft freeze begins, miniramfs being the most important of those.

We’ll still need to carry downstream packages though, and will do our best to provide timely security updates and bugfixes for those. To this effect, we settled on shipping Linux 6.1 on all our devices, as it should be the next LTS kernel, and are committed to keeping it as up-to-date as possible.

The day(s) after

Once Bookworm is released, we’ll switch our testing and staging repos to track Debian Trixie and will keep bringing you up-to-date software during the whole Trixie development cycle.

Now that SXMO is available in Debian and good progress is being made towards packaging Plasma Mobile, we also hope the Trixie development cycle will give us the opportunity to provide our users with more graphical environment options. We’ll also closely track the work of the Debian UBPorts team, with the hope that we can ultimately provide Lomiri-based images at some point.

Finally, we’ll keep working towards making universal images happen, while still looking at and packaging mobile-friendly software, such as Organic Maps among others.

What we’d love to see happening in 2023

As mentioned earlier, a lot is constantly happening throughout the whole ecosystem, which in itself is very positive. However, we’ve noticed some encouraging activity in recent months which fills us with hope for this new year.

After the initial submission, the PinePhone Pro is getting better and better upstream support as time goes by. As several different developers have contributed over these past few months, we hope this trend will carry on, and maybe even join forces so we can finally have a device fully supported by the mainline kernel.

The other area triggering our interest is all the work currently happening in libcamera: developers from this project envision mobile Linux devices as a perfect showcase for libcamera, and therefore do their best in order to ease contributions, fix camera sensor drivers for better libcamera support, and more generally improve the camera situation on Linux. We can’t wait to see where we’ll be standing a year from now, and are confident new applications and better support for our devices isn’t too far now!

Final word

We hope you’re as excited as we are about the current state and foreseeable evolutions of the mobile Linux ecosystem, and wish you all a happy year 2023!

08 January, 2023 12:00AM

A look back in the mirror... And a glimpse of the future!

2022 has been an extremely busy year for Mobian developers, and a lot has happened throughout this period. As we’re entering a new development cycle year, now is a good time to look back at what we achieved in 2022.

Foundation work

One of our goals has always been for Mobian to slowly dissolve into Debian. As such, we aim at moving packages from the downstream Mobian archive into the upstream Debian repositories.

Summer migration

As a consequence, we decided to move all the source code used in Mobian from gitlab.com to Salsa, Debian’s own GitLab instance, during last summer. With the exception of non-free firmware blobs of unclear origins, Mobian is now fully maintained within Salsa’s Mobian-team group. It should be noted that, thanks to GitLab’s export/import features, this important migration has been mostly painless!

Feeding the Debian package servers

We also welcomed the contributions of Debian developers who helped us both for improving Mobian and for moving our downstream packages to Debian upstream. Together, we were able to push a great number of new packages to the Debian archive, including (but not limited to):

This past year also saw us upload calamares-settings-mobian and plymouth-theme-mobian to Debian, making those the first Mobian-specific packages to make it into the Debian archive!

Less downstream tweaks

Over the past year, one of our main areas of work was getting rid of as much custom scripts and tweaks as possible, and rely on existing upstream solutions instead.

A good example of such improvements if the switch to systemd services for growing the root filesystem on initial boot: while previously relying on cloud-initramfs-tools and a downstream initramfs script, we now hand this task over to systemd-repart and systemd-growfs.

We also got rid of device-specific kernel cmdline parameters, as we can now rely on the device tree for providing this information, such as the default serial console; similarly, by working with upstream to automatically compute display scaling in phoc, we could drop our downstream, per-device phoc.ini files.

Overall, those changes allowed us to get rid of most of our per-device tweaks, up to the point where now we barely need any. This not only makes it easier to envision out-of-the-box Debian support for the devices we currently work on, but will also make new device bring-up incredibly easier!

Being good FLOSS citizens

Over the past year, Mobian developers have also tried to fix issues and add features upstream rather than carrying patched packages. Notable upstream projects we contributed to in 2022 include, among others:

We also keep maintaining software that are an important part of the mobile Linux ecosystem, such as eg25-manager and callaudiod. Lately, we published 2 additional projects, not only because we felt those were needed for Mobian, but also because they might be useful to other distros and users.

Those projects are droid-juicer (more on this one below) and phog, which is a GUI login/session startup application aimed specifically at mobile devices (you can think of it as “GDM for phones”). It relies on greetd for performing the actual session startup, so phog doesn’t interact with pam nor systemd directly.

Bringing a graphical login manager to mobile devices allows our users to have multiple user accounts instead of the single (and so-far hardcoded) mobian user. They can also use phog to select which graphical environment they want to start, so they could very well setup one user for work running Phosh, and another user for their everyday life running SXMO, for example. Finally, phog ensures the GNOME keyring is unlocked when the session is started so we can get rid of the annoying “keyring password” dialogs on session startup :)

Not quite there yet

Unfortunately, time is not extensible: as volunteers, we can’t spend as much time working on Mobian as we’d like, meaning we sometimes have to prioritize things and delay some of the work we had hoped to achieve.

This is exactly what happened with the universal images we hoped to implement in 2022, so that they could be ready, tested and debugged come 2023, and finally be made our new default during the first quarter of this year.

To this effect, we started working on foundational pieces of software, starting with tweakster, a utility to manage device-specific configuration at runtime. It, in turn, gave birth to droid-juicer, a tool designed to extract proprietary firmware from vendor partitions on Android devices: that way, we no longer have to package non-free firmware for those devices, a chore which increases our workload while decreasing the chances of getting such devices properly supported by vanilla Debian.

The good news is that the initial release of droid-juicer was uploaded to Debian before the year ended; the less good news is that tweakster, and consequently universal images, have been delayed by this work and will therefore have to wait for the Trixie development cycle. However, we might have a few more tricks up our sleeves to make at least some of it happen earlier ;)

Community vitality

Finally, 2022 showed once again the mobile Linux community’s vitality through numerous events and collaborations. In particular, FOSDEM 2022 was the first edition to include a “FOSS on Mobile Devices” devroom, which ended up being incredibly successful.

It went so well, actually, that this year’s (in-person) FOSDEM will feature both the devroom and a “Mobile Linux” stand where you’ll be able to talk with members of several projects, including Mobian, postmarketOS, UBports, SailfishOS and more!

We also agreed with other mobile distros to switch to Tow-Boot on supported devices, eventually leading PINE64 to ship the PinePhone Pro with this bootloader factory-flashed. We expect such cross-distro collaboration to keep covering more areas over time, and we trust this, along with the great relationship we have with developers from postmarketOS and Purism, will lead to even more spectacular improvements to the ecosystem we’re shaping together.

We’re also seeing two important trends both in the market and community, starting with new manufacturers joining the dance:

  • Juno Computers released an Intel-based Linux tablet shipping with either Mobian or Manjaro as the default OS
  • FydeOS announced its FydeTab Duo, a 12" tablet based on the Rockchip RK3588 flagship SoC; although it comes with FydeOS, a ChromiumOS derivative, the tech specs page mentions optional compatibility “with other open-source operating systems, including popular Linux distributions”

Another trend is that the PinePhone Pro is being widely adopted as a development platform for showcasing various projects, as can be seen from Codethink’s recent work on a mobile version of GNOME OS.

Both of those trends attract more users and developers to our platform(s), improving the likelihood that someone (who might just be you!) will work on solving your particular problem in a way that benefits the whole community.

Our plans for 2023

With the Bookworm freeze process starting in just a few days, this year will be very special with regard to the way we work on Mobian: uploads to Debian will likely slow down starting February 12th (soft freeze) and come to a near-halt a month later (hard freeze). This means we will then only be able to focus on Mobian-specific packages until Bookworm is finally released.

Keep moving while frozen

Fortunately (or not?), there are always a lot of non-packaging tasks pending, so we have at least some idea of how we’re going to spend our time: not working on Mobian/Debian packages means we have more time to hunt down bugs and improve overall system stability.

This will also be a good opportunity to finally get to the upstream contributions we’ve been holding back for quite some time. This includes, for example, getting Tow-Boot to support the PineTab and Librem 5 (no promises though), or pushing more of our kernel patches upstream.

We’ll also have more time to work on new hardware support, the PineTab 2 and Fairphone 4 being good candidates for official Mobian support.

Mobian Stable

With the release of Bookworm in 2023, the first Mobian stable release will finally happen! This version will follow the same release cycle as Debian, providing only bugfix and security updates. Most of those will come directly through Debian, including for the Mobian-specific packages we still hope to get accepted before the soft freeze begins, miniramfs being the most important of those.

We’ll still need to carry downstream packages though, and will do our best to provide timely security updates and bugfixes for those. To this effect, we settled on shipping Linux 6.1 on all our devices, as it should be the next LTS kernel, and we are committed to keeping it as up-to-date as possible.

The day(s) after

Once Bookworm is released, we’ll switch our testing and staging repos to track Debian Trixie and will keep bringing you up-to-date software during the whole Trixie development cycle.

Now that we have SXMO available in Debian, and with good progress being made towards packaging Plasma Mobile, we also hope the Trixie development cycle will give us the opportunity to provide our users with more graphical environment options. We’ll also closely track the work of the Debian UBports team in the hope that we can ultimately provide Lomiri-based images at some point.

Finally, we’ll keep working towards making universal images happen, while still looking at and packaging mobile-friendly software, such as Organic Maps among others.

What we’d love to see happening in 2023

As mentioned earlier, a lot is constantly happening throughout the whole ecosystem, which in itself is very positive. Moreover, we’ve noticed some encouraging activity in recent months which fills us with hope for this new year.

After the initial submission, the PinePhone Pro has been getting better and better upstream support as time goes by. As several different developers have contributed over these past few months, we hope this trend will carry on, and that contributors will perhaps even join forces so we can finally have a device fully supported by the mainline kernel.

The other area piquing our interest is all the work currently happening in libcamera: developers from this project envision mobile Linux devices as a perfect showcase for libcamera, and therefore do their best to ease contributions, fix camera sensor drivers for better libcamera support, and more generally improve the camera situation on Linux. We can’t wait to see where we’ll be standing a year from now, and are confident new applications and better support for our devices aren’t too far off!

Final word

We hope you’re as excited as we are about the current state and foreseeable evolutions of the mobile Linux ecosystem, and wish you all a happy year 2023!

08 January, 2023 12:00AM

January 07, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Carter: What are the most important improvements that Debian need to make?

“What are the most important improvements that Debian need to make?” – I decided to see what all the fuss is about and asked ChatGPT that exact question.

Its response:

There are many areas where the Debian project could make improvements, and the most important ones will depend on the priorities and needs of the project and its community. Some potential areas for improvement could include:

  1. Improving the user experience: Debian is known for its stability and security, but it is not always the easiest distribution to use, especially for new users. Improving the user experience and making Debian more accessible to a wider audience could be an important goal.
  2. Keeping up with the rapid pace of change in the software ecosystem: With over 50,000 packages, Debian is a large distribution, and it can be difficult to keep up with the constant stream of updates and new software being released. Developing tools and processes to help manage this complexity and ensure that new packages are tested and integrated smoothly could be an important focus.
  3. Maintaining the stability and security of the distribution: Ensuring that Debian remains a stable and secure platform is a top priority for the project. This includes testing new packages thoroughly before they are included in the distribution, responding quickly to security vulnerabilities, and investing in infrastructure and processes to support these efforts.
  4. Ensuring the long-term sustainability of the project: Debian is a volunteer-driven project, and it can be challenging to attract and retain contributors over the long term. Developing strategies to support and encourage participation and collaboration within the community could be an important focus.
  5. Enhancing the tools and infrastructure that support the project: The Debian project relies on a wide range of tools and infrastructure to support its development and distribution efforts. Improving and modernizing these resources could help to make the project more efficient and effective.

I find that to be a great response, and I’m curious where it came from, so I asked “How did you come to that conclusion?”, it replied:

The list of potential improvements for Debian that I provided is based on my understanding of the challenges and opportunities that software projects like Debian typically face, as well as my understanding of the goals and values of the Debian project. It is not based on any specific information about the current priorities or needs of the Debian project or community.

Well, one thing is for sure, the IRC chat bots of the future are going to be much more complex than the ones I grew up with :-)

07 January, 2023 12:19PM

hackergotchi for SparkyLinux

SparkyLinux

Tokodon

There is a new application available for Sparkers: Tokodon

What is Tokodon?

Tokodon is a Mastodon client for Plasma and Plasma Mobile

Installation (Sparky 7 amd64/i386):

sudo apt update
sudo apt install tokodon

License: GNU GPL v3
Web: invent.kde.org/network/tokodon

 

07 January, 2023 11:32AM by pavroo

January 06, 2023

hackergotchi for Univention Corporate Server

Univention Corporate Server

Migration of the Identity Provider in UCS – Keycloak App now Part of the Support Scope

We reached an important milestone in the step-by-step migration to Keycloak as the Identity Provider in UCS: in August 2022, we made an initial version of Keycloak available as an app in the UCS App Center for integration into UCS, and we have improved it ever since. With the latest release in December, the app is now also part of our official product support, and thus ready for productive use.

In this article, I would like to give you an overview of the possibilities that Keycloak will offer you, the current status of the migration of the Identity Provider from UCS based on Keycloak, as well as an outlook on further features yet to come in the next few months.

What functions does the Identity Provider offer?

The Identity Provider (IdP) handles authentication and, optionally, authorization of identities that want to access IT services. For authentication, the Identity Provider checks whether correct credentials are available – in the case of a user, for example, the user ID, password and, if necessary, other factors – and can then decide which services the person is allowed to access (authorization). To ensure that the process remains convenient for users, the Identity Provider implements single sign-on technologies such as SAML or OpenID Connect, which enable one-time authentication with the same credentials to access many services.
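To make the OpenID Connect part concrete: services typically discover the Identity Provider’s endpoints from a well-known URL. A sketch of querying Keycloak’s discovery document (the hostname and realm here are assumptions, and the exact path depends on the Keycloak version and your UCS configuration):

# hypothetical host and realm; adjust to your deployment
curl https://ucs-sso.example.com/realms/ucs/.well-known/openid-configuration
# returns a JSON document listing the authorization, token and userinfo endpoints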


Keycloak Login Page

How is the Identity Provider currently provided by UCS?

UCS has long supported the SAML (Security Assertion Markup Language) and OpenID Connect protocols for single sign-on to web applications. For this purpose, we integrated the software solutions “simpleSAMLphp” and “Kopano Connect” into UCS. These offer good, deep integration with UCS user management (Univention Directory Manager) and the identity store (OpenLDAP), but compared to other implementations, their configurable range of functions is limited.

Therefore, after comparing different open source implementations for identity providers, we decided to perform a step-by-step migration to the Keycloak software as the new Identity Provider and set it as the future standard in UCS.

Keycloak is much more extensively configurable than the previous Identity Provider and comes with many additional functions, which we have been activating gradually in UCS and will continue to activate. Keycloak is used by a large number of organizations worldwide and is therefore also being developed much more actively than the previously integrated solutions, which have stagnated or even been discontinued.

What functions should UCS with Keycloak offer in the future, and where does the current app stand?

The feature set of Keycloak is very extensive and can be extended through plugins and APIs. The current feature set of the UCS Keycloak app provided in the App Center already significantly exceeds the scope of the old IDP, but some functions required to replace it in all previous deployment scenarios are still missing. We therefore plan to continue developing the app step by step before it eventually replaces the previous IDP completely as the new standard.

In the following table, I provide an initial overview of the existing and upcoming features of the Keycloak app and compare their current implementation status to the functionality of the old IDP.

Functionality | Status in the Keycloak app | Available for previous IDP (SimpleSAMLPHP / Kopano Connect)
Single sign-on via SAML and OpenID Connect | Implemented | Implemented
Redundant operation on multiple UCS instances | Implemented | Implemented
Integrated 2-factor authentication (OTP) | Implemented, but not yet in scope of support | Not available
Advanced 2-factor authentication via PrivacyIdea | Implemented, but not yet in scope of support | Implemented
Identity federation for external IDPs and their service providers | Implemented, but not yet in scope of support | Limited availability
Identity federation for service provider provisioning on UCS for external IDPs | Implemented, but not yet in scope of support | Not available
Single sign-on with Kerberos for workstations in UCS Kerberos/Samba domains | Not yet implemented | Implemented
Brute force login prevention (rejecting extremely frequent login attempts) | Under implementation | Limited availability
Configurable login mask (theme, links) | Under implementation | Implemented
User guidance for expired passwords and for onboarding after self-service registration | In planning | Implemented

The exact description of these functions – especially Identity Federation – would go beyond the scope of this blog article. We will go into more detail in further blog articles.

In general, a native Keycloak offers even more possibilities than are currently usable in the app for UCS. The exact status of the current version of the UCS Keycloak app can be found in the UCS documentation, along with information about the range of functions covered by our support.

When will the Keycloak App become the new default IDP?

We refer to the implementation that is automatically rolled out for new installations of UCS as the “standard IDP”. For all releases of UCS 5.0-x, this will continue to be the existing solution based on SimpleSAMLPHP, for which you can optionally install the Kopano Connect app for OpenID Connect, as in the past.

We plan to switch to the Keycloak app as the new standard IDP with the next minor release, i.e. UCS 5.1. Presumably, with UCS 5.1 it will no longer be possible to continue using the previous IDP based on the old technologies, so it is advisable to migrate to the UCS Keycloak app beforehand. Now that the first version covered by our support has been released, you can begin the migration with enough time to make the switch. An exact release date for UCS 5.1 has not yet been determined, but it is certain that we will provide support and security updates (maintenance) for UCS 5.0 – and thus also for the previous IDP – for holders of an Enterprise subscription at least until the end of 2023.

We are pleased to introduce the Keycloak App, a significant enhancement to the capabilities and features of UCS. I look forward to hearing your feedback and questions about it – here on the blog, at help.univention.com, or in person at the Univention Summit on January 17-18, 2023!

The post Migration of the Identity Provider in UCS – Keycloak App now Part of the Support Scope appeared first on Univention.

06 January, 2023 01:52PM by Ingo Steuwer

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Kubescape brings a new level of security to Charmed Kubernetes

The popular open-source platform Kubescape by ARMO has recently been announced as a fully managed operator, called a Charm, for Canonical’s Charmed Kubernetes distribution. This collaboration between Canonical and ARMO is exciting for the solution it enables for end users, ultimately resulting in hardened and more secure Kubernetes environments.

A need for cloud-native security

[Image: Securing the cloud]

According to the Cloud Native Computing Foundation’s 2021 annual report, 79% of organisations now use a certified Kubernetes platform to manage their container workloads. This growth is only set to continue, as containerd, a key open-source technology for running containers, saw a 500% increase in adoption. However, with a rapid increase in use come challenges in maintaining a strong security posture while educating users on misconfiguration and vulnerability mitigation.

By partnering, Canonical’s Charmed Kubernetes and ARMO’s Kubescape look to provide a first-rate Charm for individuals and organisations who want to take control of their cluster-wide security proactively. Combined, Canonical Kubernetes and Kubescape provide a robust security solution for ensuring workloads run safely.

Kubescape


Starting life in August 2021, Kubescape is an open-source Kubernetes and CI/CD security platform providing a multi-cloud single pane of glass for risk analysis, security compliance, RBAC visualisation, image vulnerability scanning and CI/CD security.

Kubescape scans K8s clusters, manifest files, code repositories, container image registries, worker nodes and API servers. It detects misconfigurations according to multiple frameworks (such as NSA-CISA, CIS, MITRE ATT&CK and more), software vulnerabilities, and RBAC (role-based access control) violations at early stages of the CI/CD pipeline, calculates risk scores instantly and shows risk trends over time.
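For context, the standalone Kubescape CLI can run such scans directly; a minimal sketch, assuming the kubescape binary is installed and your kubeconfig points at the target cluster:

# scan the current cluster against the NSA-CISA framework
kubescape scan framework nsa

# scan local manifest files instead of a live cluster
kubescape scan *.yaml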

For those looking to adopt Kubescape at enterprise scale, ARMO provides an enterprise-grade solution offering additional features and support.

Charmed Kubernetes


Charmed Kubernetes is a CNCF-conformant distribution of Kubernetes that centres around model-driven, lifecycle-managed applications through Juju. It boasts carrier-grade hardware acceleration features and a suite of interoperable Charms for CNI, CRI, CSI and CMI. Perfect for the enterprise, Charmed Kubernetes adheres to multiple security standards, including CIS and, as of 2023, FIPS.

Leveraging other open-source technologies maintained and curated by Canonical, Charmed Kubernetes enables users to operate independently of their public cloud, ensuring that they have access to out-of-the-box integrations but none of the vendor lock-in commonly associated with public cloud providers. When complemented with Ubuntu as the host operating system and kernel live patching, the Charmed Kubernetes story is extremely compelling for operators who want peace of mind that the infrastructure they are deploying workloads to is not only secure at installation but is continuously updated as new vulnerabilities are discovered and patched.

Common use cases

Hardening

According to a recent Gartner report, by 2025 more than 99% of cloud breaches will have customer misconfiguration as a root cause. These mistakes open up opportunities for bad actors to exploit attack surfaces on systems that have not been sufficiently secured. Without any form of system hardening, this can lead to data loss, system subversion and other malicious activity.

As such, hardening systems through automated tooling has become not just an occasional practice of security professionals but a continuous activity to ensure that components are locked down as much as possible. This is amplified when combining Kubernetes and host operating systems, as there are then two levels of substrate whose access and relationship with each other must be monitored and controlled.

Compliance

Many organisations strive to achieve compliance standards such as SOC 2 or ISO 27001. These standards come with rigorous risk assessments and safeguard implementations that typically apply to all levels of infrastructure. Kubernetes presents a new challenge: maturing standards not only combine controls with existing infrastructure but also introduce new safeguards that are required when combining multiple systems.

Identifying and remediating these issues is time-consuming and often a manual process; tooling such as Kubescape aims to make that journey easier by identifying controls within Kubernetes and helping a practitioner target and remediate those necessary for the compliance standard.

Installation and operation

Kubescape for Charmed Kubernetes is offered as an operator that manages a collection of in-cluster components, packaged as a Charm. To install the Charm and take advantage of all the security benefits that Kubescape brings to the table, a user would need to sign up for ARMO Cloud for Kubescape and connect the Kubescape Charm to their account in the Cloud.

First, a user should log into their ARMO Cloud account, then click on the Account icon and copy the ID. Once the ID is copied, the next step is installing the Charm into the cluster.

To install the Charm, our user would need a working Juju CLI installation. Once Juju is set up, running, and connected to the cloud of your choice, Kubescape can be installed by running the following commands:

juju add-model kubescape
juju deploy --trust kubescape --config clusterName=`kubectl config current-context` --config account=<YOUR_CLOUD_ACCOUNT_ID>

These commands deploy the Kubescape Charm inside the user’s cluster, give it the name of the current kubectl context, and associate the named cluster with the given account ID.

Once the Kubescape Charm is up and running, it will scan the cluster, upload the results and display them on the user’s ARMO Cloud dashboard. From this point on, the user can inspect the dashboard to assess the security risk scores as calculated by supported frameworks and see risk trends over time. They can also jump into the details to review the detected security issues together with the fixes Kubescape suggests, and any software vulnerabilities that Kubescape has found in the deployed artefacts. Moreover, ARMO Cloud for Kubescape provides the ability to examine a visual representation of the cluster’s RBAC rules, so the user can detect and address authorisation misconfigurations more easily.
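To confirm from the command line that the Charm has settled, a sketch using standard Juju commands (the model name matches the one created above):

# check the deployment; the kubescape application should eventually report an active status
juju status -m kubescape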

06 January, 2023 12:27PM

hackergotchi for Maemo developers

Maemo developers

Cat’s Panic

It’s been 8 years since I last wrote a video game just for personal fun. As has now become a tradition, I took advantage of the extra focused personal time I usually have during the Christmas season and gave Processing a try for my own “advent of code”. It’s a programming environment based on Java that offers a visual, canvas-based experience similar to the one I enjoyed as a child on 8-bit computers. I certainly found coding there to be a pleasant and fun experience.

So, what I coded is called Cat’s Panic, my own version of a well-known arcade game with a similar name. In this version, the player has to unveil the outline of a hidden cute cat on each stage.

The player uses the arrow keys to control a cursor that can freely move inside a border line. When pressing space, the cursor can start an excursion to try to cover a new area of the image to be unveiled. If any of the enemies touches the excursion path, the player loses a life. The excursion can be canceled at any time by releasing the space key. Enemies can be killed by trapping them in a released area. A stage is completed when 85% of the outline is unveiled.

Although this game is released under the GPLv2, I don’t recommend that anybody look at its source code. It breaks all principles of good software design; it’s messy and ugly, and its only purpose was to make the development process entertaining for me. You’ve been warned.

I’m open to contributions in the form of new cat pictures that add more stages to the existing ones, though.

You can get the source code from the GitHub repository and a binary release for Linux here (with all the Java dependencies, which weigh a lot).

Meow, enjoy!


06 January, 2023 03:32AM by Enrique Ocaña González (eocanha@igalia.com)

January 05, 2023

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Security in the smart home: considerations for device makers

[Image: Cybersecurity: What should device makers prioritise?]

When people think of home security, they usually think of an alarm system with a keypad next to the door. These days, however, home security should have two meanings. I’m here to talk about the second: cybersecurity. In other words, security in the smart home.

A recent investigation found that a shocking number of leading smart home devices contain outdated SSL libraries. An outdated SSL library can leave the door open for malicious actors to listen in on network traffic. In the smart home context, that traffic could include extremely personal information, such as when you’re at home or away. This kind of security threat is far from the only one; consumer device security breaches are consistently in the news. Clearly, this is a significant issue.

Cybersecurity in the consumer space

Cybersecurity has long been a weak point for the smart home industry. Typically, smart home devices are made on a tight budget and a fast development cycle, which doesn’t leave a lot of room for “extras” like security. What’s more, these devices aren’t used in safety-critical or high-value environments: the consequences of a smart toaster being compromised don’t begin to compare to the consequences of a factory robot being compromised. These facts have led to a certain complacency in the industry.

While the industry may have gotten away with some complacency until now, the consequences of poor cybersecurity in the smart home are much higher today than they were ten years ago.

Big data = personal data

The amount of data generated by the typical smart home today is orders of magnitude larger than it was five or ten years ago. Most smart homes these days have multiple microphones and cameras inside the home, something that would have been unthinkable in the 2000s. Additionally, many devices connect to a variety of cloud services and applications, each with its own associated data sets.

This data enables some of the most advanced functionality we’ve seen in the smart home to date. Take ambient computing as an example of the possibilities offered by a large set of data from interoperable devices. Unfortunately, this data is also the reason that smart home cybersecurity matters now more than ever. A compromised smart home opens up a world of possibilities for bad actors – it could lead to identity theft, devices becoming part of botnets, or leaking of private information such as videos from inside the home.

How companies should respond

The problem may be widespread, but the good news is that companies operating in this space can easily avoid making their devices a soft target for attackers. Companies should apply regular updates to their applications and OS, and should ensure that devices are properly isolated.

Robust and regular over-the-air updates

The first step towards secure devices is a robust update policy. Many devices in today’s smart homes do not receive updates without manual intervention by the end user; realistically, that means they do not receive updates at all. This leaves the door open to an unknowable number of future threats.

Both application and OS updates are important here. Application vulnerabilities are specific to each device, and it is up to the device maker to find and fix potential vulnerabilities in this software. Patches for OS vulnerabilities, on the other hand, need to come from the maintainer of the operating system. In the case of Ubuntu and Ubuntu Core, Canonical can provide security maintenance and a number of other services.
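As one illustration of what an automatic update policy can look like on an Ubuntu- or Debian-based device (the packages are standard, though whether a given device ships them is an assumption):

# enable unattended installation of security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades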

Isolated systems

A second measure companies can take to protect their devices, especially in newer-generation devices that potentially run many applications and services, is to ensure that each of these applications is fully isolated so that vulnerabilities cannot spread. Ubuntu Core, for example, enforces this isolation system-wide, removing any such security threat.

With enough time and resources, attackers can likely access any system, but most will try to exploit the low-hanging fruit. The key for businesses in this space is to make the cost of attacking their devices higher than the benefit to attackers.

To discuss how to increase your smart home device’s security posture, get in touch with us

Further reading

Canonical is a member of the Connectivity Standards Alliance. Ubuntu Core complements the Matter standard, providing polished solutions for over-the-air updates and security maintenance. Read more.

05 January, 2023 10:59AM

Alan Pope: Year of The Broken Desktop

This morning I attempted to start work on my desktop PC and couldn’t: the screen is black, and it doesn’t want to wake up the displays. I used the old REISUB trick to restart, and it boots, but there’s no output on the display. I did some investigation, and this post is mainly to capture my notes so that others can see the problem and perhaps help debug and fix it. The setup is an Intel Skull Canyon NUC connected to an external GPU enclosure which contains an NVIDIA GeForce RTX 2060.

05 January, 2023 09:00AM

Podcast Ubuntu Portugal: E228 Balanço De 2022

Here is the first bout of the year… sorry, episode. The start of the year is usually the time to look back and check whether our convictions from 12 months ago were right or wrong. The fight was neck and neck and, as was bound to happen, the ending is completely unexpected. It should also be noted that the listeners’ predictions were not forgotten. You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

05 January, 2023 12:00AM

January 04, 2023

hackergotchi for Purism PureOS

Purism PureOS

Purism to Participate in CES 2023

Purism will be at CES® 2023, one of the most influential technology events in the world. Owned and produced by the Consumer Technology Association (CTA)®, CES features innovative ideas from the technology industry. Purism will partner with Teksun Inc, an IoT and AI solutions provider, to exhibit this year in Las Vegas. CES is set to welcome 100,000 participants, 4700 […]

The post Purism to Participate in CES 2023 appeared first on Purism.

04 January, 2023 06:24PM by Purism

hackergotchi for Volumio

Volumio

Volumio Integro announced Product of the Year!

We are delighted to announce that the Volumio Integro streaming amplifier was just awarded Product of the Year by Stereo & Video magazine.

 

The post Volumio Integro announced Product of the Year! appeared first on Volumio.

04 January, 2023 02:19PM by Graham

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: WSL and Ubuntu: 2022 year in review


In 2022, Windows Subsystem for Linux (WSL) exploded in popularity, with millions of users taking advantage of the ability to develop, create and administrate inside a native Linux environment deeply integrated with Windows.

This is thanks in large part to continued investment from Microsoft, consistently rolling out new features and updates for the platform on both Windows 10 and Windows 11. As the most popular distribution for WSL, the Ubuntu team is committed to supporting and building on these features to ensure that we deliver a polished and powerful Ubuntu WSL experience.

We’re looking forward to enhancing Ubuntu WSL in 2023 with new features for enterprise users and developers. But for now let’s take a look back at our 2022 highlights and round up all of the changes and updates that landed in the last 12 months!

To kick things off, check out Microsoft PM Craig Loewen’s Ubuntu Summit talk on the past, present and future of WSL.

April: Tools and Tutorials for Data Scientists


For Data Scientists and AI/ML engineers working in a Windows-centric institution, WSL provides access to the full suite of open-source data science tools optimised for Linux, including PyTorch, TensorFlow and scikit-learn. Thanks to WSL’s integration with the host machine, they can spin up Python notebook environments in Linux and develop with them in their Windows browser or VS Code. Linux distributions in WSL also have access to the machine’s GPU via native Windows drivers, meaning frameworks like NVIDIA CUDA can take full advantage of your hardware with minimal performance impact when running inside Linux on Windows.
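As a minimal sketch of that workflow (the package names are standard, but versions and any GPU setup will vary by machine), spinning up a notebook server inside Ubuntu on WSL and opening it from Windows might look like this:

# inside Ubuntu on WSL
sudo apt update && sudo apt install -y python3-pip python3-venv
python3 -m venv ~/ds && . ~/ds/bin/activate
pip install jupyterlab scikit-learn

# the printed http://localhost:8888/... URL opens directly in your Windows browser
jupyter lab --no-browser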

We’ve put together a number of pieces looking at data science workflows in Ubuntu WSL, including our Webinar from April 2022, and the following whitepapers and tutorials.

For more insight into other users of WSL, check out the Stack Overflow section later in this post.

May: Ubuntu Preview is released on the Windows Store


The default Ubuntu application on the Microsoft Store always provides the latest LTS release of Ubuntu after its first point release (for example, Ubuntu 22.04.1 LTS and later). Previous Ubuntu LTS releases are also available to install for users who want to remain on a particular release; these provide a stable development base, with over five years of Long Term Support.

Ubuntu Preview, launched in May 2022, now delivers the latest daily builds of Ubuntu to WSL users and offers a way for developers to experiment with the latest toolchains and features available in interim or upcoming Ubuntu releases. It is not designed for production development but for those happy to explore and report issues they encounter to help improve future releases of Ubuntu.

June: Stack Overflow developers love WSL


The Stack Overflow Developer Survey landed in June 2022 and showed that the percentage of developers using WSL as their primary operating system had seemingly grown roughly fivefold since 2021, from 3% of professional developers to over 14% in 2022. This growth is hard to confirm definitively, however, since the 2022 survey shifted to a multiple-choice answer format for this category.


We did some digging into these numbers to try to figure out what kind of developers were really driving this growth. When filtering responses to only include users who specified WSL as their primary operating system, we saw a strong preference for web development, devops and cloud engineering. However, this mirrored the overall distribution of developers who answered the survey.


The most interesting view was the number of WSL users as a percentage of respondents in each category. This gave us a slightly different picture, with cloud and devops engineers now leading the pack, closely followed by security professionals and more administrative roles.


During the Ubuntu Summit talk at the top of this post, Craig mentioned use cases from both Blizzard and Electronic Arts, and it’s notable to see 20% of game developers taking advantage of the improved workflows that come with Linux and Windows interoperability.

In terms of tools, it’s no surprise to see Docker used by the vast majority of WSL users thanks to its deep integration with Docker Desktop, followed by npm for Node.js web development.


September: Systemd Support

In September, a major piece of the puzzle to deliver a fully fledged Ubuntu experience landed in WSL with the arrival of official Systemd support. Systemd makes developing and working with service applications easier than ever and enables support for Snapd and Snap applications like LXD and MicroK8s out of the box.

This functionality is enabled by default in Ubuntu Preview and will become the default for our other Ubuntu applications later this year. To enable it today, simply edit /etc/wsl.conf inside your Ubuntu WSL distribution and add the following lines:

[boot]
systemd=true

Then restart your instance!
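One way to apply and verify the change (a sketch; “Ubuntu” stands for whatever name your distribution is registered under):

# from Windows PowerShell: shut down the WSL VM so wsl.conf is re-read
wsl --shutdown

# start the distribution again and check that systemd is up
wsl -d Ubuntu -- systemctl is-system-running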

For more tips on working with Systemd in WSL, check out the following links:

November: WSL at the Ubuntu Summit


The new Ubuntu Summit launched in November attracting hundreds of developers and community members from all over the world to share their work and ideas for the future.

WSL had a strong presence, with the talk from Microsoft that opened this blog post followed by a number of smaller workshops.

Long-time contributor Dani Llewellyn gave a whistlestop tour of her Linux journey that spanned the earliest days of snaps and WSL to the present.

Canonical developer Carlos Nihelton talked about the challenges and workarounds when building cross-platform Flutter applications for both Windows and Linux.

There was even a lightning talk about using WSL to resurrect old printers on Windows, for a detailed walkthrough on this topic check out the Open Printing blog.

November: WSL2 transitions to the Windows Store App on Windows 10

Over the past 18 months, the Windows 10 and Windows 11 implementations of WSL2 had diverged significantly. Windows 11, delivering WSL as a Windows Store application, was able to move more quickly and include additional features such as WSLg and Systemd support ahead of the Windows feature-based implementation in Windows 10.

That ended in November as Windows 10 finally migrated WSL2 to a Windows Store application, unifying the Windows 10 and 11 experience.

Now, regardless of your Windows version, you can install WSL via the Windows Store, or by running wsl --install in PowerShell (or wsl --update if you want to upgrade an existing version).

With this change, new updates can be delivered more quickly to all Windows users, meaning the pace of progression for WSL is only going to increase going into 2023.

Keep in touch for the latest Ubuntu WSL news

As you can see, 2022 was a transformative year for WSL, with the addition of key foundational features alongside significant growth in developer adoption. In 2023 we’ll continue to publish informative content to help developers get the most out of Ubuntu WSL. In addition, we’ll be focusing on enterprise features to help IT administrators securely enable Linux developers whilst remaining a part of their organisation’s existing Windows ecosystem.

We’ll close this blog with a final video from Edu Gomez Escandell with his recent introduction to numerical computation applications using WSL and some links on how to get started.

To stay up to date with news, tutorials and guides sign up to our newsletter using the form on the right!

Getting started:

Find out more at Ubuntu.com/wsl

04 January, 2023 01:58PM