July 19, 2024


Ubuntu developers

Ubuntu Blog: What is Open RAN?

You may have heard the term Open Radio Access Network (Open RAN) used widely in the telecom industry in recent years. In this blog, we are going to explain what Open RAN is, why it represents an important technology transformation, and how it will impact the telecom ecosystem. This is the first part of a series of blogs discussing this popular topic.

Mobile telecommunication networks

Understanding the importance of Open RAN requires some insight into the foundation it’s deployed on, so let’s start with a quick definition of a mobile network. In simplest terms, mobile networks are a connectivity solution that provides wireless communication capabilities to devices. This could be your phone, tablet, or laptop, but also any kind of machine, device or IoT product with communication capabilities.

Figure 1. Simple illustration of a mobile network.

A mobile network is important not just because it provides voice calls and Internet connectivity to your phone or laptop, but also because it delivers various types of data services to a vast range of devices, machines, people and businesses. Whether those devices are on the move or not, the network is designed to provide them with seamless connectivity. This is possible thanks to the radio access network (RAN) infrastructure deployed over a wide area, such as an entire country, and a core network located centrally at a data centre, acting as the “brain” of the entire mobile network. The core network performs control operations on user data traffic to ensure that the services users have subscribed to receive the agreed quality of service (QoS) levels.

Radio access networks

Now let’s dive a little deeper into the RAN, which is simply a collection of radio towers and data processing elements. At a high level, the RAN is the bridge between mobile devices and the core network, providing information exchange between a user’s mobile device and data services hosted elsewhere. It carries user requests and uploaded content to the mobile network in the uplink, and delivers downloaded content, such as video streams, to user devices in the downlink.

In the “uplink” from mobile devices to data networks, radio waves carry the information and signals sent by devices to the network. These radio signals are received by radio equipment hosted on radio towers, and then converted into digital signals in the RAN. Information in these signals is relayed to the mobile core network in data packets. The core then forwards these packets to services running over the Internet or on data networks. 

In the opposite direction, which is the “downlink” from data networks to mobile devices, the reverse process takes place: data from services on the Internet or other data networks are processed by the core network and the RAN, and then delivered to mobile devices.

Disaggregated RAN

Traditionally, a RAN is built with appliance-like purpose-built hardware. Part of the RAN is installed on radio towers as radio units and the rest is deployed at data centres to perform central data processing operations. The RAN hardware at data centres runs the entire telecommunications software stack of the RAN as a single processing unit, performing all processing except for the lowest level radio frequency (RF) operations carried out at radio units. In LTE/4G, the processing unit of a RAN is called a baseband unit (BBU), and in 5G, it is called a gNodeB.

Figure 2. Disaggregation of RAN: from traditional RAN running the complete protocol stack into one where the stack is disaggregated and run across edge sites.

The latest technology transition in the RAN is to disaggregate the software stack of a gNodeB into separate components. One can think of this transition as similar to the migration in modern software systems from monolithic architectures to more composable ones consisting of microservices, each of which performs a set of related functions. The idea of a disaggregated RAN is to achieve more modularity, offering several benefits to telecom operators and the telco ecosystem overall.

Benefits of a disaggregated RAN

The first benefit of a disaggregated RAN is that it allows parts of the software stack to be relocated from the central cloud towards the radio towers and deployed at edge clouds. This makes it possible for those parts of the software stack that need quick interaction with radios to be located closer to the radios, shortening the time they need to send and receive data.

If an operator were to deploy a complete gNodeB radio stack replicated at each and every radio site, it would be an extremely costly and inefficient deployment strategy. It would also be impractical to maintain and operate such a large number of gNodeBs over a network of thousands of radio sites at remote, hard-to-reach locations dispersed over a wide geographical area.

Instead of a complete gNodeB software stack at every site, a disaggregated RAN allows only the parts of the stack that can be pushed towards the edge to be deployed at edge clouds and shared among multiple closely located radio sites. This deployment structure strikes a balance between achieving higher performance by running some of the radio stack at the edge, and lowering CAPEX and OPEX by sharing common parts of the stack across groups of radio sites in a hierarchical architecture.

Another benefit is that because a disaggregated RAN stack is implemented as a set of separate units, multiple vendors can offer innovative solutions for specific parts of the software stack. This opens the RAN technology ecosystem to new players entering the market and generates more competition. By creating a larger marketplace to source equipment from, operators can reduce their CAPEX in RAN deployments, thanks to higher competition leading to lower equipment prices.

Finally, separate parts of the radio stack can be updated and upgraded independently, without having to touch the entire software stack each time a specific part of it has to change. This modularity lowers the OPEX of equipment updates and upgrades, and makes it possible to carry out more granular operations on different parts of the RAN independently of the others.

Open RAN

The benefits of a disaggregated RAN can only be realised if the industry works in harmony to deliver the different components that together build a complete RAN. This is achievable through standardisation that makes components fully interoperable. Just as in any system where separate parts may be sourced from different vendors, agreeing on standard interfaces between disaggregated RAN components is the vital cornerstone of interoperability and smooth system integration.

Figure 3. Simplified diagram of Open RAN architecture.

Open RAN was born with a simple objective: achieve a disaggregated RAN with open standard interfaces. The Third Generation Partnership Project (3GPP), the industry standardisation body that publishes mobile telecommunication network standards, defined different options for how the radio software stack can be disaggregated into separate components. The O-RAN Alliance then took this further by defining standard, open interfaces for the disaggregated RAN components.

With open standard interfaces, it is now possible for vendors to implement different parts of the radio stack as either hardware or software, with the assurance that products from different vendors can talk to each other as building blocks of a complete RAN system.

Summary

Operators are looking for new ways to achieve cost-efficiency in their infrastructure, where the RAN accounts for a large portion of deployment and operational costs. Open RAN offers a new architecture aimed at delivering the CAPEX and OPEX reductions that operators seek. With a disaggregated architecture, Open RAN incubates a larger RAN vendor ecosystem, lowering equipment costs in the long term by bringing more competition to the market, with more innovative products offered for different parts of the radio software protocol stack. A disaggregated RAN will also bring new capabilities to deploy, update, upgrade, and maintain RANs more effectively, lowering OPEX. The flexibility to deploy parts of the RAN at shared edge cloud locations brings performance benefits to modern edge computing services that require quick interactions.

In the next part of this blog series, we will talk about how network functions virtualisation and cloud-native software operation principles complement and further enhance the benefits of Open RAN for operators and the telecom industry.

Contact us


Get in touch with us for your telco deployment needs and your transition to open source in mobile networking. Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at ubuntu.com/telco.

Further reading

What is a telco cloud?

Bringing automation to telco edge clouds at scale

Fast and reliable telco edge clouds with Intel FlexRAN and Real-time Ubuntu for 5G URLLC scenarios

19 July, 2024 09:30AM

Ubuntu Blog: Charmed OpenSearch Beta is here. Try it out now!


Canonical’s Data and AI portfolio is growing with a new data and analytics tool. We proudly announce that Charmed OpenSearch version 2.14 is now available in Beta. OpenSearch® is a comprehensive search engine and analytics suite that thousands of organisations use for a variety of use cases across search, security, and AI/ML.

From today, data engineers, scientists, analysts, machine learning engineers, and AI enthusiasts can take the Charmed OpenSearch beta for a test drive and share their feedback directly with us through our beta programme.

Join our beta programme

What is OpenSearch®?

OpenSearch is a thriving open source project and community with over 600 million project downloads, over 300 GitHub contributors and over 9,100 GitHub stars. It is used for multiple use cases, such as search, observability, event management, security analytics, visualisation, AI, machine learning and more. Let’s take a brief look at how companies are taking advantage of OpenSearch’s robust engine capabilities.

Search

Using OpenSearch as a search engine can be a powerful solution for various applications, ranging from e-commerce sites to large-scale enterprise data searches. One example would be customers needing to quickly find products in a large inventory with various data attributes like brand, price, and multiple categories. You can use OpenSearch for full-text search, faceted search with filters, autocomplete, suggestions, and personalisation.
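To make this concrete, a faceted product search might combine a full-text match with a filter and an aggregation. The sketch below uses OpenSearch’s REST query DSL; the index name, fields and credentials are hypothetical, so adjust them to your own mapping:

    # Full-text search for "running shoes", filtered by brand, with a
    # price-range facet. Replace <password> and names with your own;
    # -k skips TLS verification (test setups with self-signed certs only).
    curl -sk -X GET "https://localhost:9200/products/_search" \
      -u 'admin:<password>' -H 'Content-Type: application/json' -d '
    {
      "query": {
        "bool": {
          "must":   { "match": { "title": "running shoes" } },
          "filter": { "term":  { "brand": "acme" } }
        }
      },
      "aggs": {
        "price_ranges": {
          "range": {
            "field": "price",
            "ranges": [ { "to": 50 }, { "from": 50, "to": 100 }, { "from": 100 } ]
          }
        }
      }
    }'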

Observability

Using OpenSearch as an observability tool is an excellent choice for monitoring, troubleshooting, and gaining insights into complex systems. You can feed metrics, logs and traces into OpenSearch to understand a system’s health, performance and reliability.

Security Analytics

You can leverage OpenSearch to build a robust Security Information and Event Management (SIEM) system. This system enables real-time analysis of security alerts generated by hardware and software, network infrastructure, and applications. It can be used for data and log aggregation, alerting, threat detection, compliance reporting, etc.

Visualisation

OpenSearch has a Dashboards feature that can be used to create and customise visualisations to monitor and display data insights in real time, including charts, graphs, data filters, and customised panels.

Machine Learning

OpenSearch’s machine learning capabilities can be used for advanced data analysis, anomaly detection, predictive analytics, and improving search relevance. It can also be used as a vector database for model embeddings, making it a good tool for Retrieval Augmented Generation (RAG) in large language model (LLM) projects.
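As a sketch of the vector-database use, OpenSearch’s k-NN plugin lets you declare a vector field and query by proximity. The index name, field names and the toy 3-dimensional vectors below are hypothetical (real embeddings typically have hundreds of dimensions):

    # Create an index with a k-NN vector field (toy dimension of 3).
    curl -sk -X PUT "https://localhost:9200/rag-chunks" \
      -u 'admin:<password>' -H 'Content-Type: application/json' -d '
    {
      "settings": { "index.knn": true },
      "mappings": {
        "properties": {
          "text":      { "type": "text" },
          "embedding": { "type": "knn_vector", "dimension": 3 }
        }
      }
    }'

    # Retrieve the 2 chunks nearest to a query embedding.
    curl -sk -X GET "https://localhost:9200/rag-chunks/_search" \
      -u 'admin:<password>' -H 'Content-Type: application/json' -d '
    {
      "size": 2,
      "query": { "knn": { "embedding": { "vector": [0.1, -0.2, 0.7], "k": 2 } } }
    }'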

What is Charmed OpenSearch?

OpenSearch is an all-in-one search engine, analytics and machine learning tool for multiple use cases. However, working with OpenSearch in production-grade use cases that involve vast amounts of data and extensive data infrastructure can be challenging. Automating the deployment, provisioning, management, and orchestration of production OpenSearch clusters can also be highly complex.

What if there were an easier way to get more out of OpenSearch through an operator – called Charmed OpenSearch? An operator is an application containing code that takes over automated application management tasks. Picture it as your technological virtuoso, orchestrating a grand performance that includes high availability, automated deployment of single or multiple clusters, robust security measures like transport layer security (TLS), initial user management, plug-ins and extension features, automated upgrades, observability of OpenSearch clusters, and even backup and restore operations. Charmed OpenSearch builds on the upstream OpenSearch project with this automation and can be deployed on any private, public or hybrid cloud.

With a primary mission of simplifying the OpenSearch experience, Charmed OpenSearch is your backstage pass to a world where OpenSearch isn’t just a search and analytics suite – it’s a seamlessly operated search engine and analytics powerhouse.

Try Charmed OpenSearch Beta today

Are you a data engineer, analyst, scientist, or machine learning enthusiast interested in trying OpenSearch? Charmed OpenSearch can be:

  • Deployed on your local machine
  • Used for multiple use cases: observability, SIEM, visualisation and GenAI
  • Improved with your feedback.

To get started, you must run Ubuntu OS, meet the minimum system requirements, and be familiar with OpenSearch concepts.
Simple deployment steps for Charmed OpenSearch in your Ubuntu VM:

juju deploy opensearch --channel 2/beta
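Once deployed, a typical test setup continues roughly as sketched below, assuming Juju 3.x and the charm names used in the tutorial linked below; treat the exact names and channels as illustrative and verify them against the tutorial:

    # Sketch only: OpenSearch requires TLS, so the tutorial pairs it with
    # a certificates operator; self-signed certificates are for testing only.
    juju deploy self-signed-certificates
    juju integrate opensearch self-signed-certificates
    # Watch progress until all units report active/idle:
    juju status --watch 2s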

Learn to use Charmed OpenSearch:

Access the tutorial here

Share your feedback

Charmed OpenSearch is an open-source project that is growing because of the care, time and feedback that our community gives. This beta release is no exception, so if you have any feedback or questions, please feel free to contact us.

Give us your feedback

Further Reading

Trademark Notice

OpenSearch is a registered trademark of Amazon Web Services. Other trademarks are the property of their respective owners. Charmed OpenSearch is not sponsored or endorsed by, or affiliated with, Amazon Web Services.


19 July, 2024 08:49AM


Purism PureOS

Abside and Purism Partner to Deliver Secure Mobile Solution for U.S. Government and NATO Countries

Acton, MA, USA – July 19th, 2024 – Abside, a leading provider of US-made secure networking solutions, and Purism, a secure computing and US-based phone manufacturer, today announced a collaboration to deliver a secure mobile solution for the U.S. government and NATO countries. This collaboration brings together Abside’s N79 5G private network solution, designed […]

The post Abside and Purism Partner to Deliver Secure Mobile Solution for U.S. Government and NATO Countries appeared first on Purism.

19 July, 2024 12:09AM by Purism

Purpose-Built Smartphones for Government vs. Commercial Off-The-Shelf Devices

The rise of the smartphone as both an indispensable consumer electronic device and an omnipresent tool for performing tasks at work has led to an inevitable question: Should enterprise and government centralized IT organizations deploy Commercial Off-the-Shelf (COTS) devices such as the iPhone, Google Pixel, or Samsung Galaxy and attempt to secure them, or should organizations […]

The post Purpose-Built Smartphones for Government vs. Commercial Off-The-Shelf Devices appeared first on Purism.

19 July, 2024 12:03AM by Randy Siegel

July 18, 2024


Ubuntu developers

Ubuntu Blog: Let’s meet at AI4 and talk about open source and AI tooling

Date: 12 – 14 August 2024

Booth: 426

Autumn is a season full of events for the AI/ML industry. We are starting early this year, before the summer ends, and will be in Las Vegas to attend AI4 2024. This is North America’s largest industry event, and it attracts some of the biggest names in the AI/ML space to share deep discussions about initial exploration for AI, machine learning operations (MLOps) and AI at the edge. Join Canonical at our very first AI4, where you’ll be able to get in-person recommendations for innovating at speed with open source AI.


Canonical is the publisher of Ubuntu, the leading Linux distribution that has been around for 20 years. Our mission to provide secure open source software extends well into the machine learning space. We have been providing solutions since the earliest days of AI/ML, for example our official distribution of Kubeflow, an end-to-end MLOps platform that helps organisations productise their ML workloads. Nowadays, Canonical’s MLOps portfolio includes a suite of tools that help you run the entire machine learning lifecycle at all scales, from AI workstations to the cloud to edge devices. Let’s meet to talk more about it!

Engaging with industry leaders, open source users, and organisations looking to scale their ML projects is a priority for us. We’re excited to connect with attendees at AI4 to meet, share a cup of coffee, and give you our insights and lessons in this vibrant ecosystem.

Innovate at speed with open source AI

At Canonical, ever since we launched our cloud-native apps portfolio, our aim has been to enable organisations to run their Data & AI projects with one integrated stack, on any CNCF-conformant Kubernetes distribution and in any environment, whether on-prem or on any major public cloud. As you might know already, we enable you to run your projects at all scales:

  • Data science stack for beginners: Many people are trying to upskill in data science and machine learning these days. However, beginners often spend more time setting up their environment than developing their capabilities in this field. Data science stack (DSS) is an easy-to-deploy solution that can run on any workstation. It gives you an environment to get started quickly with data science or machine learning. You can read more about our Data Science Stack here.
  • Charmed Kubeflow for ML at scale: Kubeflow is an end-to-end MLOps platform used to develop and deploy models at scale. It is a cloud-native application that runs on any CNCF-conformant Kubernetes, including MicroK8s, AKS, EKS or GKE. It integrates with leading open source tooling such as MLflow, Spark or OpenSearch. Try it out now (a minimal deployment sketch follows this list)!
  • Ubuntu Core and KServe for Edge AI: Devices are often a vulnerable point, and running a secure OS is crucial in order to protect all the artefacts, regardless of the architecture. Open source tools such as KServe enable AI practitioners to deploy their models on any edge device.
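As an illustration, deploying Charmed Kubeflow with Juju on an existing CNCF-conformant Kubernetes cluster takes only a few commands. This is a sketch; the channel shown is an assumption, so check the Charmed Kubeflow documentation for current values:

    # Sketch, assuming a Juju controller is already bootstrapped on your
    # Kubernetes cluster; the docs expect a model named "kubeflow".
    juju add-model kubeflow
    juju deploy kubeflow --trust --channel=1.8/stable
    # Monitor progress until all charms settle into active/idle:
    juju status --watch 5s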

During AI4, we have prepared a series of demos to show you how open source tooling can help you with your Data & AI projects. Join us at our booth if you:

  • Have questions about AI, MLOps, Data and the role of open source
  • Need help with defining your MLOps architecture
  • Are looking for secure open source software for your Data & AI initiatives
  • Would like to learn more about Canonical and our solutions

Get your infrastructure ready for GenAI

In 2023, the Linux Foundation published a report which found that almost half of surveyed organisations prefer open source tooling for their GenAI projects. Despite this rapid adoption, enterprises still face challenges related to security, transparency, accessibility and costs. While initial experimentation is easy thanks to the large number of solutions available on the market, taking GenAI projects to production obliges organisations to upgrade their AI infrastructure and ensure data and model protection.

Join my talk, “GenAI beyond the hype: from experimentation to production” at AI4 on Wednesday, August 14, 2024 at 11:35 AM. During the presentation, I will guide you through how to move GenAI projects beyond experimentation using open source tooling such as Kubeflow or OpenSearch. We will explore the key considerations, common pitfalls and challenges that organisations face when starting a new initiative. Finally, we will analyse ready-made ML models and the scenarios in which they are more suitable than building your own solution.

By the end of it, you will be better equipped to run your GenAI projects in production using secure open source tooling.


Join us at Booth 426 

If you are attending AI4 2024 in Las Vegas between 12 and 14 August, make sure to visit booth 426. Our team of open source experts will be available throughout the day to answer all your questions about AI/ML and beyond.

You can already book a meeting with our team member [SDR name] using the link below.

18 July, 2024 12:14PM


Deepin

New Progress! deepin M1 Project Updated to deepin RC2 Version

Last July, we successfully made deepin initially compatible with Apple M1. This year, as deepin V23 beta reaches the RC2 version, the deepin M1 project naturally follows with updates. This adaptation work not only upgrades the system environment version but also updates some underlying system component versions, optimizes the packaging process of various project modules, and adds timers for some modules so that content is built weekly for developers to experience firsthand. Now, let's dive into the specific updates in this release. 《deepin adapts to Apple M1, what have we experienced? (Part 1)》 《deepin adapts to Apple M1, what have we experienced? (Part ...Read more

18 July, 2024 05:57AM by aida


Clonezilla live

Stable Clonezilla live 3.1.3-16 Released

This release of Clonezilla live (3.1.3-16) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.1.3-11

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2024/Jul/15).
  • Linux kernel was updated to 6.9.9-1.
  • Partclone was updated to 0.3.32.
  • ocs-resize-part: add option "-f" for fatresize in batch mode.
  • Replaced the command used to clean superblocks of fakeraid: mdadm is now used instead of dmraid, since dmraid is no longer maintained (see the sketch below).
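For reference, wiping RAID metadata from a member device with mdadm looks roughly like the following sketch; the device name is hypothetical, and the operation irreversibly destroys the RAID superblock:

    # Clear software/fake RAID superblocks from a member device.
    sudo mdadm --zero-superblock /dev/sda1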

BUG FIXES

18 July, 2024 12:35AM by Steven Shiau


Ubuntu developers

Podcast Ubuntu Portugal: E308 Trocas & Baldrocas

This week, Miguel played around with Kubuntu, fumbled some Home Assistant automations, and cloned a disk with dd, with unexpected consequences. Diogo is reading an exciting new book (Privacidade 404), which he recommends to everyone; we recanted mistakes made in previous episodes; we painstakingly dissected Mozilla’s advertising tracking in Firefox; we laid into Apple and Google; and we even resurrected the much-missed Cândida Branca Flor, thanks to a glaring security flaw in OpenSSH.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo, and the open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

18 July, 2024 12:00AM

July 17, 2024


GreenboneOS

June 2024 Threat Tracking: Cybersecurity On The Edge

Before this year, 3,000 CVEs (Common Vulnerabilities and Exposures) had never been published in a single month. 2024 has been a string of record-breaking months for vulnerability disclosure; over 5,000 CVEs were published in May 2024. While June offered a lull in the storm, some may be questioning whether delivering a secure software product is simply impossible. Even the vendors with the most capital and market share – Apple, Google, Microsoft – and vendors of enterprise-grade network and security appliances – Cisco, Citrix, Fortinet, Ivanti, Juniper, PaloAlto – have all brought perpetually insecure products to market. What insurmountable hurdles could be preventing stronger application security? Are secure software products truly an impossibility?

One possible truth is that being first to market with new features is considered paramount to gaining a competitive edge, stealing priority from security. Other suggestions are more conspiratorial. The Cyber Resilience Act [1][2], set to be enforced in late 2027, may create more accountability, but it is still a long way down the road. Cyber defenders need to stay vigilant, implement cybersecurity best practices, be proactive about detecting security gaps, and remediate them in a timely fashion; easy to say, but a monstrous feat indeed.

In this month’s edition of Greenbone’s Threat Tracking blog post, we will review the culprits in a recent trend – increased exploitation of edge network devices.

Edge Devices Are Hot Targets For Cyber Attack

Cyber threat actors are increasingly exploiting vulnerabilities in network perimeter services and devices. The network perimeter refers to the boundary that separates an organization’s internal network from external networks, such as the internet, and it is typically home to critical security infrastructure such as VPNs, firewalls, and edge computing services. This cluster of services on the network perimeter is often called the Demilitarized Zone, or DMZ. Perimeter services serve as an ideal initial access point into a network, making them a high-value target for cyber attacks.

Greenbone’s Threat Tracker posts have previously covered numerous edge culprits including Citrix Netscaler (CitrixBleed), Cisco XE, Fortinet’s FortiOS, Ivanti ConnectSecure, PaloAlto PAN-OS and Juniper Junos. Let’s review new threats that emerged this past month, June 2024.

Chinese APT campaign Attacking FortiGate Systems

CVE-2022-42475 (CVSS 9.8 Critical), a severe remote code execution vulnerability impacting FortiGate network security appliances, has been implicated by the Dutch Military Intelligence and Security Service (MIVD) in a new cyber espionage campaign targeting Western governments, international organizations, and the defense industry. The MIVD disclosed details including attribution to a Chinese state hacking group. The attacks installed a new variant of an advanced stealthy malware called CoatHanger, specifically designed for FortiOS, which persists even after reboots and firmware updates. According to CISA, CVE-2022-42475 was previously used by nation-state threat actors in a late-2023 campaign. More than 20,000 FortiGate VPN instances have been infected in the most recent campaign.

One obvious takeaway here is that an ounce of prevention is worth a pound of cure. These initial access attacks leveraged a vulnerability that is over a year old, and thus were preventable. Cybersecurity best practices dictate that organizations should deploy regular vulnerability scanning and take action to mitigate discovered threats. The Greenbone Enterprise feed includes detection for CVE-2022-42475.

P2Pinfect Is Ransoming And Mining Unpatched Redis Servers

P2Pinfect, a peer-to-peer (P2P) worm targeting Redis servers, has recently been modified to deploy ransomware and cryptocurrency miners, as observed by Cado Security. First detected in July 2023, P2Pinfect is sophisticated Rust-based malware with worm capabilities, meaning that recent attacks exploiting CVE-2022-0543 (CVSS 10 Critical) against unpatched Redis servers can automatically spread to other vulnerable servers.

Since CVE-2022-0543 was published in February 2022, organizations employing compliant vulnerability management should already be impervious to the recent P2Pinfect ransomware attacks. Within days of CVE-2022-0543 being published, Greenbone issued multiple Vulnerability Tests (VTs) [1][2][3][4][5] to the Community Edition feed that identify vulnerable Redis instances. This means that all Greenbone users globally can be alerted and protect themselves if this vulnerability exists in their infrastructure.

Check Point Quantum Security Gateways Actively Exploited

The Canadian Centre for Cyber Security issued an alert due to observed active exploitation of CVE-2024-24919 (CVSS 8.6 High), which has also been added to CISA’s catalog of known exploited vulnerabilities (KEV). Both entities have urged all affected organizations to patch their systems immediately. The vulnerability may allow an attacker to access information on public facing Check Point Gateways with IPSec VPN, Remote Access VPN, or Mobile Access enabled and can also allow lateral movement via unauthorized domain admin privileges on a victim’s network.

This issue affects several product lines from Check Point, including CloudGuard Network, Quantum Scalable Chassis, Quantum Security Gateways, and Quantum Spark Appliances. Check Point has issued instructions for applying a hotfix to mitigate CVE-2024-24919. “Hotfixes” are software updates issued outside of the vendor’s scheduled update cycle to specifically address an urgent issue.

CVE-2024-24919 was only published on May 30th, 2024, but it very quickly became part of an attack campaign, further highlighting a trend of diminishing Time To Exploit (TTE). Greenbone added active check and passive banner detection vulnerability tests (VTs) to identify CVE-2024-24919 within days of its publication, allowing defenders to swiftly take proactive security measures.

Critical Patches Issued For Juniper Networks Products

In a hot month for Juniper Networks, the company released a security bulletin (JSA82681) addressing multiple vulnerabilities in Juniper Secure Analytics optional applications, and another new critical bug was disclosed: CVE-2024-2973. On top of these issues, Juniper’s Session Smart Router (SSR) was outed for having known default credentials [CWE-1392] for its remote SSH login. CVE-2024-2973 (CVSS 10 Critical) is an authentication bypass vulnerability in Session Smart Router (SSR), Session Smart Conductor, and WAN Assurance Router products running in high-availability redundant configurations; it allows an attacker to take full control of an affected device.

The Greenbone Enterprise vulnerability test feed provides detection for CVE-2024-2973, and remediation information is provided by Juniper in their security advisory (JSA83126). Finally, Greenbone includes an active check that detects insecure configurations of the Session Smart Router (SSR) by verifying whether it is possible to log in via SSH with known default credentials.

Progress Telerik Report Server Actively Exploited

Last month we discussed how one of Greenbone’s own security researchers identified and participated in the responsible disclosure of CVE-2024-4837, impacting Progress Software’s Telerik Report Server. This month, another vulnerability in the same product was added to CISA’s actively exploited catalog. Also published in May 2024, CVE-2024-4358 (CVSS 9.8 Critical) is an Authentication Bypass by Spoofing vulnerability [CWE-290] that allows an attacker to obtain unauthorized access. Additional information, including temporary mitigation workaround instructions, is available from the vendor’s official security advisory.

Also in June 2024, Progress Software’s MOVEit Transfer enterprise file transfer tool was again in the hot seat with a new critical-severity vulnerability, CVE-2024-5806, carrying a CVSS 9.1 Critical assessment. MOVEit was responsible for the biggest data breaches of 2023, affecting over 2,000 organizations.

Greenbone issued active check and version detection vulnerability tests (VTs) for CVE-2024-4358 within days of its publication, and a VT to detect CVE-2024-5806 within hours, allowing defenders to mitigate swiftly.

Summary

Even tech giants struggle to deliver software free from vulnerabilities, underscoring the need for vigilance in securing enterprise IT infrastructure – threats demand continuous visibility and swift action. The global landscape is rife with attacks against perimeter network services and devices, as attackers large and small, sophisticated and opportunistic, seek to gain a foothold on an organization’s network.

17 July, 2024 12:34PM by Joseph Lee


SparkyLinux

Sparky 2024.07~dev0 with CLI Installer’s home encryption and Midori

This is an update of the Sparky semi-rolling iso images (MinimalGUI and MinimalCLI only) of the Debian testing line, which provides 2 notable changes: 1. Sparky CLI Installer with home partition encryption. The Sparky CLI Installer got a new option that lets you encrypt and secure your separate home partition. If you choose this option, Plymouth will be disabled even if it is installed…

Source

17 July, 2024 10:43AM by pavroo


ARMBIAN

Armbian Leaflet #27

Dear Armbians,

This week, we bring you a roundup of exciting developments and updates from the Armbian community. From kernel enhancements to device-specific improvements, there’s plenty to dive into. Plus, we announce the winners of the Radxa Rock 5 ITX giveaway! Read on for all the details.

Kernel and Device Tree Updates:

  • sunxi-6.1: Fixed a kernel loading issue by reverting commit 75317a0.
  • linux-rk35xx-vendor.config: Added support for RTW89_8852be module.
  • linux-rk35xx-vendor.config: Updated kernel configuration settings.
  • arm64: dts: rockchip: Introduced support for radxa-e52c board.

Documentation and Configuration Enhancements:

  • firstlogin: Implemented quoting for values to handle spaces (#6942).
  • Desktops: Corrected missing packages in desktop environments.
  • wifi: Included rtl8852bs driver to support entire device families.
  • minor fixes (#446): Addressed various minor issues for documentation improvements.
  • networking: Enhanced networking documentation, introducing Netplan as a common configuration point.
  • Add DNS and route to network config (#442): Improved network configuration guide by adding DNS and route setup instructions.

Device-Specific Updates:

  • rockchip-rk3588 / edge: Removed redundant patch and updated to latest release.
  • mainline-kernel: Updated to version 6.10-rc7 for improved compatibility and features.
  • thinkpad-x13s: Updated to jhovold’s work-in-progress branch for sc8280xp-6.10-rc7.
  • odroidm1: Ensured function name consistency and updated to u-boot v2024.07.
  • u-boot: Embedded armbian artifact version into CONFIG_LOCALVERSION for local configuration.
  • Bananapi M5: Upgraded u-boot to final release version v2024.07.
  • radxa-e52c: Added comprehensive support for the new radxa-e52c board integration.

Read more: https://github.com/armbian/build/releases

Radxa Rock 5 ITX Giveaway Winners:

Congratulations to the winners!

  • 1st prize: Alexandre B., France
  • 2nd prize: Jan-Hendrik W, Germany
  • 3rd prize: Steve H., USA

We wish them happy hacking with their new Radxa Rock 5 ITX boards.

That wraps up this edition of the Armbian newsletter. Stay tuned for more updates, tutorials, and community highlights in the coming weeks. Remember to join our forums and follow us on social media for the latest news and discussions.

Thank you for being part of the Armbian community!

The post Armbian Leaflet #27 first appeared on Armbian.

17 July, 2024 10:15AM by Didier Joomun


Ubuntu developers

Ubuntu Blog: Charmed PostgreSQL enters General Availability

An enterprise-grade PostgreSQL you can rely on

Jul 17, 2024: Today Canonical announced the release of Charmed PostgreSQL, an enterprise solution that helps you secure and automate the deployment, maintenance and upgrades of your PostgreSQL databases across private and public clouds.

PostgreSQL is an open source database management system. It has been successfully used for more than 3 decades across all IT sectors. Its maturity and its vibrant community consistently make it a first-choice DBMS among developers.

Canonical’s Charmed PostgreSQL builds on this foundation with additional capabilities to cover all your enterprise needs:

  • Up to 10 years of security maintenance and support 
  • Advanced automation to simplify the management of your PostgreSQL fleet
  • Canonical-managed PostgreSQL service to reduce the burden on your team
  • Simple and predictable pricing model

Up to 10 years of security maintenance and support

Security maintenance and support for Charmed PostgreSQL can be purchased with an Ubuntu Pro subscription, which provides up to 10 years of security maintenance not only for the PostgreSQL server but also for some of the most popular extensions, such as PostGIS, pgVector and pgAudit. You can also opt for 24/7 or weekday support. Conveniently, the subscription is priced per node, not per app, so users can benefit from a complete portfolio of data solutions with a predictable pricing model.

Automate your PostgreSQL operations

When you opt for Canonical’s Charmed PostgreSQL, you benefit from our expert-crafted operator that lets you run PostgreSQL on Azure, AWS and Google Cloud, whether on top of VMs or their managed Kubernetes offerings. You can also run PostgreSQL in your private data centre on top of MAAS, MicroCloud, OpenStack, Kubernetes or VMware.
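To give a flavour of the workflow, here is a minimal sketch of a highly available machine deployment, assuming Juju 3.x and a bootstrapped controller; the channel name is an assumption, so check the Charmed PostgreSQL documentation for the current one:

    # Sketch: deploy a three-unit PostgreSQL cluster; the operator sets up
    # replication and failover between the units automatically.
    juju add-model postgresql-demo
    juju deploy postgresql --channel 14/stable -n 3
    # Watch until all units report active/idle:
    juju status --watch 2s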

When using our operator, your PostgreSQL deployments will be:

  • Highly available with built-in replication and automatic failover.
  • Reliable with expert-crafted, self-healing, state-machine-based automation.
  • Upgraded automatically during your maintenance windows and with minimum downtime.
  • Disaster recovery ready with built-in backup, restore and cluster-to-cluster replication capabilities to ensure business continuity.
  • Hybrid and multi-cloud ready with automation that supports deployment within and across private and public clouds.
  • Shipped with a holistic observability and alerting solution based on Prometheus, Loki and Grafana.

Self-managed or Canonical managed, your choice

Canonical adapts to your needs and constraints. You can opt for self-managed PostgreSQL operations while relying on our support, consultancy and training to ramp up your team and back them up when needed. You can also offload management tasks related to your PostgreSQL servers to our experts for more peace of mind.

Try Charmed PostgreSQL today

Our PostgreSQL operator is open-source. You can get started with Charmed PostgreSQL using the following documentation:

Contact us

For all your PostgreSQL inquiries, please fill in this form.

Further reading

17 July, 2024 09:28AM

The Fridge: Ubuntu 23.10 (Mantic Minotaur) reached End of Life on July 11, 2024

This is a follow-up to the End of Life warning sent earlier to confirm that as of July 11, 2024, Ubuntu 23.10 is no longer supported. No more package updates will be accepted to 23.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

Additionally, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 23.10.

The supported upgrade path from Ubuntu 23.10 is to Ubuntu 24.04 LTS.
Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/NobleUpgrades
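In practice, the upgrade is typically performed with Ubuntu's release upgrader; the sketch below assumes a fully updated system, and you should read the caveats on the page above and back up your data first:

    # Bring the 23.10 system fully up to date:
    sudo apt update && sudo apt full-upgrade
    # Start the upgrade to Ubuntu 24.04 LTS:
    sudo do-release-upgrade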

Ubuntu 24.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Tue Jul 16 17:56:46 UTC 2024 by Graham Inggs on behalf of the Ubuntu Release Team

17 July, 2024 01:43AM

July 16, 2024


Pardus

Pardus 23.2 Released

Pardus 23.2, which continues to be developed by TÜBİTAK ULAKBİM, has been released. Pardus 23.2 is the second interim release of the Pardus 23 family.

You can download the newest Pardus right now, and review the release notes for detailed information about this release.

16 July, 2024 02:06PM


Ubuntu developers

Ubuntu Blog: The guide to cloud storage security for public sector


Cloud storage solutions can provide public sector organisations with a high degree of flexibility when it comes to their storage needs, whether in the public cloud or in their own private clouds. In our previous blog post we looked at the economic differences between these two approaches.

In this blog we will explore some of the security best practices when using cloud storage, so that you can ensure that sensitive data remains securely stored and compliance objectives are met. The points we cover will be relevant to both on-premise storage and storage solutions in a public cloud.

Risks associated with storing data

In the public sector, it is very common to handle sensitive datasets, such as Personally Identifiable Information (PII) about citizens, medical information, or digital evidence for crime investigation purposes.

It is important to ensure that these datasets are only ever accessible to users with the correct permissions and, whenever they are transferred, that this happens across a network that cannot be eavesdropped upon. Similarly, data stored “at rest” should also be encrypted in case hardware is lost or stolen. Furthermore, being able to create point-in-time snapshots of datasets ensures that even accidental changes do not destroy important data.

Cloud storage best practices

Access control mechanisms exist in most IT systems, and storage is no different. On-premise cloud storage solutions like Ceph, and public cloud storage systems like S3, can integrate with organisation-wide authorisation systems like LDAP. This allows an organisation to centrally control access to storage resources and easily add or remove permissions when needed.

When using storage resources over external network connections, it is imperative to ensure that those communications are secure and that there is no possibility of a third party being able to intercept any information that has been transmitted. That goes for internal communications too: it is possible that a malicious actor could gain access to an internal network that previously may have been considered secure, so ensuring internal communication is always encrypted is paramount. Cloud storage systems are able to enforce the use of encrypted communications and reject insecure connections.
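As one concrete illustration, object stores that speak the S3 API can refuse unencrypted connections with a bucket policy; this sketch uses the AWS CLI and a hypothetical bucket name:

    # Deny any request to the bucket that is not made over TLS
    # (hypothetical bucket name).
    aws s3api put-bucket-policy --bucket example-sensitive-data --policy '{
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::example-sensitive-data",
          "arn:aws:s3:::example-sensitive-data/*"
        ],
        "Condition": { "Bool": { "aws:SecureTransport": "false" } }
      }]
    }'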

Sometimes it is necessary to prove that a dataset has not changed since it was stored; for example, digital evidence used in a criminal trial will need to be accompanied by guarantees that there has been no tampering. Cloud storage systems address this with snapshots of a block volume or filesystem, or with versioning of objects, ensuring that the original data can always be recalled. This kind of solution can also be useful as a defence mechanism against ransomware attacks, allowing an organisation to roll back to a known good state.
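For example, object versioning on an S3-compatible store is a single switch per bucket, and Ceph block volumes can be snapshotted in place; the resource names below are hypothetical:

    # Enable object versioning so earlier object versions stay recoverable.
    aws s3api put-bucket-versioning --bucket example-sensitive-data \
      --versioning-configuration Status=Enabled

    # Ceph equivalent for block storage: a point-in-time RBD snapshot.
    rbd snap create mypool/myimage@before-change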

Once data has reached a storage system, there is another aspect to consider: what happens if the hardware used in that system is lost, recycled or stolen? Imagine a disk fails and needs to be sent back for warranty purposes – what if the data stored on it could be read? Could that lead to a breach of data security? Most modern storage systems allow for data to be encrypted before it is written to disk, so that data cannot be read by unauthorised parties.
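Continuing the S3 example, default encryption at rest can likewise be enabled per bucket (hypothetical bucket name again):

    # Encrypt all new objects at rest by default with AES-256.
    aws s3api put-bucket-encryption --bucket example-sensitive-data \
      --server-side-encryption-configuration \
      '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'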

Learn more

Both on-premise storage solutions (like Ceph) and public clouds have features that reduce the chances of unauthorised access or changes to the sensitive data stored in them. 

But which option is right for your organisation? Our recent whitepaper shows that there are significant savings by using an on-premise or cloud-adjacent approach that still provides the same high availability and performance that can be found in a public cloud. Find out more below:


Additional resources

16 July, 2024 08:32AM


Deepin

Linglong Project Upgraded, Linyaps Officially Launched!

On July 13th, at the deepin Meetup in Shanghai, we officially announced the brand-new name of our project - Linyaps (hereinafter referred to as "Linglong"). Additionally, we shared the news that the project signed a donation agreement with the Open Atom Open Source Foundation on May 24, 2024. Linglong has now become an official incubator project of the foundation. Linyaps: A New Independent Package Management Toolset. The Linux open-source software ecosystem has faced numerous challenges during its development, especially regarding software compatibility and security. Packaging and distributing applications across different operating systems not only ...Read more

16 July, 2024 03:45AM by aida


Purism PureOS

Purism Announcing MiMi Robot Crowdfunding Campaign

Crowdfund the Ideal Robotics Future. We at Purism want to revolutionize robotics and are seeking your support to fund this research and development. We believe robotics, AI, and technology as a whole need a new direction from what Big Tech is building. We consistently deliver on revolutionary technology, and with your support will invest to […]

The post Purism Announcing MiMi Robot Crowdfunding Campaign appeared first on Purism.

16 July, 2024 03:00AM by Purism


Qubes

XSAs released on 2024-07-16

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • XSA-458 (see QSB-103 below)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-459
    • Qubes OS does not use Xapi.

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

16 July, 2024 12:00AM

QSB-103: Double unlock in x86 guest IRQ handling (XSA-458)

We have published Qubes Security Bulletin (QSB) 103: Double unlock in x86 guest IRQ handling (XSA-458). The text of this QSB and its accompanying cryptographic signatures are reproduced below, followed by a general explanation of this announcement and authentication instructions.

Qubes Security Bulletin 103


             ---===[ Qubes Security Bulletin 103 ]===---

                             2024-07-16

          Double unlock in x86 guest IRQ handling (XSA-458)

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

On 2024-07-16, the Xen Project published XSA-458, "double unlock in x86
guest IRQ handling" [3]:
| An optional feature of PCI MSI called "Multiple Message" allows a
| device to use multiple consecutive interrupt vectors.  Unlike for
| MSI-X, the setting up of these consecutive vectors needs to happen all
| in one go.  In this handling an error path could be taken in different
| situations, with or without a particular lock held.  This error path
| wrongly releases the lock even when it is not currently held.

Impact
-------

An attacker who compromises a qube with an attached PCI device that has
multi-vector MSI capability (e.g., sys-net or sys-usb in the default
Qubes OS configuration) can attempt to exploit this vulnerability in
order to compromise Qubes OS.

Affected systems
-----------------

Both Qubes OS 4.1 and 4.2 are affected.

Patching
---------

The following packages contain security updates that address the
vulnerabilities described in this bulletin:

  For Qubes 4.1, in dom0:
  - Xen packages, version 4.14.6-10

  For Qubes 4.2, in dom0:
  - Xen packages, version 4.17.4-4

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.

Credits
--------

See the original Xen Security Advisory.

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-458.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: qsb-103-2024.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmaWVqUACgkQ1lWk8hgw
4GpsVRAAl7tx8Ur4I658gDws+17f3JC9pNCk5Fzo2OFCc49gAxtcuvIiMcjYrgji
YLqtOIvTI5VjizJvtelfP3xcNQT3eGmg9uknvAHBnfcjLEUgU1mnk4R2+mmSfjvn
Im9pJK2kUdJsVi38oTUY7DepIsg/ExM4GLZj9JK2s6CQX5xCUFOaJemp3Vyt4d4t
HvZCss4dt51zI9NNv/CV0QTI+1inE1X2l+8BROquEjtF16Vxj0yOWnE0xvn5or4H
thgOy1A7PPD0Shv992sKUF8atq1EXD1KEMwX9mOm8eIc6tCWQcaAg401TfZ4KV8J
2uHLPhiZQ6TC6PMBxv63W4DxKsK/hub/DZhpCFJrHGSVUKNrueQTh0mlfH4lJLnp
GZ3M6haJL9vLXGJp/erhhp/lZn/4Ho1EYrLJ4JM8kINnw5/Le9iSC555GPNkyqU2
x4kvkpFU6Ab4heKSijnM2L8sb3aP3NcrsBE3dSLxFuIOvpdIuQLIUz1ILH/AaMuK
JdbNMpiZGUtPV7xrHWf7di/sUmwOzCxfSpl8dLO1+tyoVCkBdAVs/UO4D2n6V3Tu
e/9ob6DP9Qj/FiD8O44qO326SGzxEIiDOHA9FX4qK090CxYAkJCG6UDLgk7eJWyB
FMlDC5X0XDXAclKDkPYkNQNCn0i1yQK0jQFz/d+JeuOJBBqsWs4=
=fupv
-----END PGP SIGNATURE-----

Source: qsb-103-2024.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmaWRBMACgkQSsGN4REu
FJDU0w//adiXcXjwXcL19qA53FUAZNbxnuQaV2W7pVBqVQwMj32aQRXb0TzWLzRP
cJtGHG3l4Ft1YCd7+m8cVXD3H5onb0ScyYkyNPdIZWLdA3uWEjWq2/9POHnVh4Ly
RU4BKFwnZuA3WPpMSnskgfJO/F6nIvNI12aOULDN+zxOUm8HJvigdbVOVi8hSshd
KTeox2GsoEmF++/LT9nMmuQp/pL5Tki1czmYZaEIk4muaV9Omb4bbY7Dxkya8R76
2zrbykR4tdWFUbMJhYepGiVilyU8JUd0FmbjZVVStxUJ9LhrI9n7VmEJlxpeULFg
ZsT3xw2f4Kox+d3gPwlhs2ndQ9z4SbpLOLfXTAz/cNda94XDDINMRiAnYERHs5fU
J0AU0eGrXRd6rCl4XlbKgZw6SOOkHSgTv2yXW1Z0wezTz+70nLvsWRJfEvUrN1tD
1KuP61xjHcAIV8/sIPEbPGAN7S3SMkdrubjK9OygPGXwycnuzPoujifFsINAyDNO
n5rpAPBK1L77rlBxaSG0ITicIZ650fDfl9pbHJ1UNdWAxYuys2r7d4nwcg9cY6zg
WAN6PIPV6y4bK1tFpO+mzcWFCXZ2PiqTJvPYZcuv4Ok4/PEdTLtjIdRl6RiG/fLY
anQB494vrm39hMQWZJeYu60JQFBM7ObFf+yPgF3TQSu64GEkYUg=
=ARv2
-----END PGP SIGNATURE-----

Source: qsb-103-2024.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

Why should I care about QSBs?

QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.

What are the PGP signatures that accompany QSBs?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.

Why should I care whether a QSB is authentic?

A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a QSB?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (QSB-103), the commands are:

$ gpg --verify qsb-103-2024.txt.sig.marmarek qsb-103-2024.txt
$ gpg --verify qsb-103-2024.txt.sig.simon qsb-103-2024.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-103 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.
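For example, here is a minimal sketch of that process (the filenames are only suggestions; any names will work, as long as you use the same names in the verify commands):

$ nano qsb-103-2024.txt                  # paste the full QSB-103 text, then save
$ nano qsb-103-2024.txt.sig.marmarek     # paste Marek's signature block, then save
$ nano qsb-103-2024.txt.sig.simon        # paste Simon's signature block, then save
$ gpg --verify qsb-103-2024.txt.sig.marmarek qsb-103-2024.txt
$ gpg --verify qsb-103-2024.txt.sig.simon qsb-103-2024.txt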

16 July, 2024 12:00AM

July 15, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 848

Welcome to the Ubuntu Weekly Newsletter, Issue 848 for the week of July 7 - 13, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 23.10 Reached End of Life: Here’s What You Can Do!
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Call for Volunteers: Core Dev Office Hours @ Ubuntu Summit 2024
  • LXD 6.1 has been released
  • The 2024.07.08 SRU Cycle started
  • Ubuntu & Flavour Member alias update
  • Ubuntu Desktop’s 24.10 Dev Cycle – Part 3: July Update
  • RISC-V Summit Munich 2024
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


15 July, 2024 10:44PM

Ubuntu Blog: Deploying and scaling Apache Spark on Amazon EKS

Introduction

Apache Spark, a framework for parallel, distributed data processing, has become a popular choice for building streaming applications, data lakehouses and big data extract-transform-load (ETL) processing. It is horizontally scalable, fault-tolerant, and performs well at high scale. Historically, however, managing and scaling Spark jobs running on Apache Hadoop clusters could be challenging and time-consuming, not least because of the need to provision physical systems and to configure the Kerberos security protocol that Hadoop uses. But there is a new kid in town – Kubernetes – as an alternative to Apache Hadoop. Kubernetes is an open-source platform for deploying and managing nearly any type of containerized application. In this article, we’ll walk through the process of deploying Apache Spark on Amazon EKS with Canonical’s Charmed Spark solution.

Kubernetes provides a robust foundation for Spark-based data processing jobs and applications. Compared to Hadoop, it offers more flexible security and networking models, and a ubiquitous platform that can co-host auxiliary applications that complement your Spark workloads, such as Apache Kafka or MongoDB. Best of all, most of the key capabilities of Hadoop YARN, such as gang scheduling, are also available to Kubernetes through extensions like Volcano.

You can launch Spark jobs on a Kubernetes cluster directly from the Spark command-line tooling, without the need for any extras, but there are some helpful additional components that can be deployed to Kubernetes with an operator. An operator is a piece of software that “operates” the component for you, taking care of deployment, configuration and other tasks associated with the component’s lifecycle.

Without further ado, let’s learn how to deploy Spark on Amazon Elastic Kubernetes Service (Amazon EKS) using Juju charms from Canonical. Juju is an open-source orchestration engine for software operators that helps customers simplify working with sophisticated, distributed applications like Spark on Kubernetes and on cloud servers.

To get a Spark cluster environment up and ready on EKS, we’ll use the spark-client and juju snaps. Snaps are applications bundled with their dependencies, able to work across a wide range of Linux distributions without modifications. It is a hardened software packaging format with an enhanced security posture. You can learn more about snaps at snapcraft.io.

Solution overview

The following diagram shows the solution that we will implement in this post.

<noscript> <img alt="" height="2103" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_2077,h_2103/https://ubuntu.com/wp-content/uploads/458d/spark-on-eks.jpg" width="2077" /> </noscript>
Diagram illustrating the architecture of an Apache Spark lake house running on an Amazon EKS cluster.

In this post, you will learn how to provision the resources depicted in the diagram from your Ubuntu workstation. These resources are:

  • A Virtual Private Cloud (VPC)
  • An Amazon Elastic Kubernetes Service (Amazon EKS) Cluster with one node group using two spot instance pools
  • Amazon EKS add-ons: CoreDNS, kube-proxy, and the EBS CSI driver
  • A Cluster Autoscaler
  • Canonical Observability Stack deployed to the EKS cluster
  • Prometheus Push Gateway deployed to the EKS cluster
  • Spark History Server deployed to the EKS cluster
  • Traefik deployed to the EKS cluster
  • An Amazon EC2 edge node with the spark-client and juju snaps installed
  • An S3 bucket for data storage
  • An S3 bucket for job log storage

Walkthrough

Prerequisites

Ensure that you are running an Ubuntu workstation, have an AWS account, a profile with administrator permissions configured and the following tools installed locally:

  • Ubuntu 22.04 LTS
  • AWS Command Line Interface (AWS CLI)
  • kubectl snap
  • eksctl
  • spark-client snap
  • juju snap

Deploy infrastructure

You will need to set up your AWS credentials profile locally before running AWS CLI commands. Run the following commands to deploy the environment and EKS cluster. The deployment should take approximately 20 minutes.

sudo snap install aws-cli --classic
sudo snap install juju
sudo snap install kubectl --classic

aws configure
# enter the necessary details when prompted

wget https://github.com/eksctl-io/eksctl/releases/download/v0.173.0/eksctl_Linux_amd64.tar.gz
tar xzf eksctl_Linux_amd64.tar.gz
mkdir -p $HOME/.local/bin
cp eksctl $HOME/.local/bin

cat > cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
    name: spark-cluster
    region: us-east-1
    version: "1.29"
iam:
  withOIDC: true

addons:
- name: aws-ebs-csi-driver
  wellKnownPolicies:
    ebsCSIController: true

nodeGroups:
    - name: ng-1
      minSize: 2
      maxSize: 5
      iam:
        withAddonPolicies:
          autoScaler: true
        attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
      instancesDistribution:
        maxPrice: 0.15
        instanceTypes: ["m5.xlarge", "m5.large"]
        onDemandBaseCapacity: 0
        onDemandPercentageAboveBaseCapacity: 50
        spotInstancePools: 2
EOF

eksctl create cluster --ssh-access -f cluster.yaml

Verify the deployment

List Amazon EKS nodes

The following command will update the kubeconfig on your local machine and allow you to interact with your Amazon EKS Cluster using kubectl to validate the deployment.

aws eks --region us-east-1 update-kubeconfig --name spark-cluster

Check if the deployment has created two nodes.

kubectl get nodes

# Output should look like below
NAME  STATUS  ROLES AGE  VERSION
ip-10-1-0-100.us-west-2.compute.internal   Ready    <none>   62m   v1.27.7-eks-e71965b
ip-10-1-1-101.us-west-2.compute.internal   Ready    <none>   27m   v1.27.7-eks-e71965b

Configure Spark History Server

Once the cluster has been created, you will need to adapt the kubeconfig configuration file so that the spark-client tooling can use it.

# Fetch a fresh authentication token for the cluster
TOKEN=$(aws eks get-token --region us-east-1 --cluster-name spark-cluster --query 'status.token' --output text)

# Insert the token into the kubeconfig (this assumes the user entry in your
# kubeconfig stores a static token line)
sed -i "s|^    token: .*|    token: ${TOKEN}|" $HOME/.kube/config

The following commands create buckets in S3 for Spark’s data and logs.

aws s3api create-bucket --bucket spark-on-eks-data --region us-east-1
aws s3api create-bucket --bucket spark-on-eks-logs --region us-east-1

The next step is to configure Juju so that we can deploy the Spark History Server. Run the following commands:

cat $HOME/.kube/config | juju add-k8s eks-cloud

juju add-model spark eks-cloud
juju deploy spark-history-server-k8s --channel=3.4/stable
juju deploy s3-integrator
juju deploy traefik-k8s --trust
juju deploy prometheus-pushgateway-k8s --channel=edge

juju config s3-integrator bucket="spark-on-eks-logs" path="spark-events"
juju run s3-integrator/leader sync-s3-credentials access-key=${AWS_ACCESS_KEY_ID} secret-key=${AWS_SECRET_ACCESS_KEY}
juju integrate s3-integrator spark-history-server-k8s
juju integrate traefik-k8s spark-history-server-k8s

Configure monitoring

We can integrate our Spark jobs with our monitoring stack. Run the following commands to deploy the monitoring stack and integrate the Prometheus Pushgateway.

juju add-model observability eks-cloud

curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/storage-small-overlay.yaml -O

juju deploy cos-lite \
  --trust \
  --overlay ./storage-small-overlay.yaml

juju deploy cos-configuration-k8s --config git_repo=https://github.com/canonical/charmed-spark-rock --config git_branch=dashboard \
  --config git_depth=1 --config grafana_dashboards_path=dashboards/prod/grafana/
juju-wait

juju integrate cos-configuration-k8s grafana

juju switch spark
juju consume admin/observability.prometheus prometheus-metrics
juju integrate prometheus-pushgateway-k8s prometheus-metrics
# Deploy the scrape-config charm used by the two integrations below
# (the charm name is from Charmhub; the interval value is an example)
juju deploy prometheus-scrape-config-k8s scrape-interval-config --config scrape_interval=10s
juju integrate scrape-interval-config prometheus-pushgateway-k8s
juju integrate scrape-interval-config:metrics-endpoint prometheus-metrics

PROMETHEUS_GATEWAY_IP=$(juju status --format=yaml | yq ".applications.prometheus-pushgateway-k8s.address")

Create and run a sample Spark job

Spark jobs are data processing applications that you develop using either Python or Scala. Spark jobs distribute data processing across multiple Spark executors, enabling parallel, distributed processing so that jobs complete faster.

We’ll start an interactive session that launches Spark on the cluster and allows us to write a processing job in real time. First we’ll set some configuration for our spark jobs.

cat > spark.conf <<EOF
spark.eventLog.enabled=true
spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
spark.hadoop.fs.s3a.connection.ssl.enabled=true
spark.hadoop.fs.s3a.path.style.access=true
spark.hadoop.fs.s3a.access.key=${AWS_ACCESS_KEY_ID}
spark.hadoop.fs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}
spark.eventLog.dir=s3a://spark-on-eks-logs/spark-events/ 
spark.history.fs.logDirectory=s3a://spark-on-eks-logs/spark-events/
spark.driver.log.persistToDfs.enabled=true
spark.driver.log.dfsDir=s3a://spark-on-eks-logs/spark-events/
spark.metrics.conf.driver.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
spark.metrics.conf.driver.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
spark.metrics.conf.driver.sink.prometheus.enable-dropwizard-collector=true
spark.metrics.conf.driver.sink.prometheus.period=1
spark.metrics.conf.driver.sink.prometheus.metrics-name-capture-regex=([a-zA-Z0-9]*_[a-zA-Z0-9]*_[a-zA-Z0-9]*_)(.+)
spark.metrics.conf.driver.sink.prometheus.metrics-name-replacement=\$2
spark.metrics.conf.executor.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
spark.metrics.conf.executor.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
spark.metrics.conf.executor.sink.prometheus.enable-dropwizard-collector=true
spark.metrics.conf.executor.sink.prometheus.period=1
spark.metrics.conf.executor.sink.prometheus.metrics-name-capture-regex=([a-zA-Z0-9]*_[a-zA-Z0-9]*_[a-zA-Z0-9]*_)(.+)
spark.metrics.conf.executor.sink.prometheus.metrics-name-replacement=\$2
EOF

spark-client.service-account-registry create --username spark --namespace spark --primary --properties-file spark.conf --kubeconfig $HOME/.kube/config

Start a Spark shell

To start an interactive pyspark shell, you can run the following command. This will enable you to interactively run commands from your Ubuntu workstation, which will be executed in a spark session running on the EKS cluster. In order for this to work, the cluster nodes need to be able to route IP traffic to the Spark “driver” running on your workstation. To enable routing between your EKS worker nodes and your Ubuntu workstation, we will use sshuttle.

sudo apt install sshuttle
# Grab the external IP of one of the worker nodes
# (column 7 of "kubectl get nodes -o wide" is EXTERNAL-IP)
eks_node=$(kubectl get nodes -o wide | tail -n 1 | awk '{print $7}')
sshuttle --dns -NHr ec2-user@${eks_node} 0.0.0.0/0

Now open another terminal and start a pyspark shell:

spark-client.pyspark --username spark --namespace spark

You should see output similar to the following:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.4.2
      /_/

Using Python version 3.10.12 (main, Nov 20 2023 15:14:05)
Spark context Web UI available at http://10.1.0.1:4040
Spark context available as 'sc' (master = k8s://https://10.1.0.15:16443, app id = spark-83a5f8365dda47d29a60cac2d4fa5a09).
SparkSession available as 'spark'.
>>> 

Write a Spark job

From the interactive pyspark shell, we can write a simple demonstration job that will be processed in a parallel, distributed manner on the EKS cluster. Enter the following commands:

lines = """Canonical's Charmed Data Platform solution for Apache Spark runs Spark jobs on your Kubernetes cluster.
You can get started right away with MicroK8s - the mightiest tiny Kubernetes distro around! 
The spark-client snap simplifies the setup process to get you running Spark jobs against your Kubernetes cluster. 
Spark on Kubernetes is a complex environment with many moving parts.
Sometimes, small mistakes can take a lot of time to debug and figure out.
"""

def count_vowels(text: str) -> int:
  count = 0
  for char in text:
    if char.lower() in "aeiou":
      count += 1
  return count

from operator import add
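# Distribute the lines across 2 partitions, count the vowels in each line in
# parallel on the executors, then sum the per-line counts with reduce.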
spark.sparkContext.parallelize(lines.splitlines(), 2).map(count_vowels).reduce(add)

To exit the pyspark shell, type quit().

Access Spark History Server

To access the Spark History Server, we’ll use a Juju command to get the URL for the service, which you can copy and paste into your browser:

juju run traefik-k8s/leader -m spark show-proxied-endpoints

# you should see output like
Running operation 53 with 1 task
  - task 54 on unit-traefik-k8s-0

Waiting for task 54...
proxied-endpoints: '{"spark-history-server-k8s": {"url": "https://10.1.0.186/spark-model-spark-history-server-k8s"}}'

You should see a URL in the response which you can use in order to connect to the Spark History Server.

Scaling your Spark cluster

The ability to scale a Spark cluster is useful because scaling out by adding capacity allows the cluster to run more Spark executors in parallel. This means that large jobs complete faster, and more jobs can run concurrently.

Spark is designed to be scalable. If you need more capacity at certain times of the day or week, you can scale out by adding nodes to the underlying Kubernetes cluster or scale in by removing nodes. Since data is persisted externally to the Spark cluster in S3, there is limited risk of data loss. This flexibility allows you to adapt your system to meet changing demands and ensure optimal performance and cost efficiency.

To run a spark job with dynamic resource scaling, use the additional configuration parameters shown below.

spark-client.spark-submit \
…
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.shuffleTracking.enabled=true \
--conf spark.dynamicAllocation.shuffleTracking.timeout=120 \
--conf spark.dynamicAllocation.minExecutors=10 \
--conf spark.dynamicAllocation.maxExecutors=40 \
--conf spark.kubernetes.allocation.batch.size=10 \
--conf spark.dynamicAllocation.executorAllocationRatio=1 \
--conf spark.dynamicAllocation.schedulerBacklogTimeout=1 \
…

The EKS cluster is already configured to support auto scaling of the node group, so that as demand for resources from Spark jobs increases, additional EKS worker nodes are brought online.
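Note that while the node group is created with the autoscaler IAM add-on policy, the walkthrough does not show installing the Cluster Autoscaler itself. Here is a minimal sketch of one way to deploy it, based on the upstream Kubernetes Cluster Autoscaler example manifest for AWS; the manifest URL and the flag below come from the upstream documentation (with our cluster name substituted) and should be treated as assumptions to verify:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

# Edit the deployment so that auto-discovery matches our cluster's node group:
#   --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/spark-cluster
kubectl -n kube-system edit deployment cluster-autoscaler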

View Spark job stats in Grafana

The solution installs the Canonical Observability Stack (COS), which includes Prometheus and Grafana and comes with ready-to-use Grafana dashboards. You can fetch the login password as well as the URL of the Grafana dashboard by running the following commands:

juju switch observability
juju run grafana/leader get-admin-password

Enter admin as username and the password from the previous command.

Open Spark dashboard

Navigate to the Spark dashboard. You should be able to see metrics from long-running Spark jobs.

<noscript> <img alt="" height="2018" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_3838,h_2018/https://ubuntu.com/wp-content/uploads/781a/Spark-Grafana-Dashboard.png" width="3838" /> </noscript>

Conclusion

In this post, we saw how to deploy Spark on Amazon EKS with autoscaling. Additionally, we explored the benefits of using Juju charms to rapidly deploy and manage a complete Spark solution. If you would like to learn more about Charmed Spark – Canonical’s supported solution for Apache Spark, then you can visit the Charmed Spark product page, contact the commercial team, or chat with the engineers on Matrix.

15 July, 2024 03:51PM

hackergotchi for Tails

Tails

Tails 6.5

Changes and updates

Fixed problems

  • Fix preparation for first use often breaking legacy BIOS boot and creation of Persistent Storage. (#20451)

  • Fix language of Tor Browser when started from Tor Connection. (#20318)

  • Fix connection via mobile broadband, LTE, and PPPoE DSL. (#20291, #20433)

For more details, read our changelog.

Known issues

  • It is impossible to connect using the default Tor bridges already included in Tails. (#20467)

    If you habitually use default bridges, try to connect without bridges: this is just as safe. If this fails, configure a custom bridge.

Get Tails 6.5

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 6.0 or later to 6.5.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 6.5 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 6.5 directly:

15 July, 2024 12:34PM

hackergotchi for Purism PureOS

Purism PureOS

Your Phone is Giving Away More Than You Ever Bargained For

The year 2007 represented a watershed moment for the modern smartphone industry as we know it. This was the year that Apple introduced its first iPhone device and Google announced Android via its participation in The Open Handset Alliance (which notably leveraged the Linux kernel and open-source code to create its mobile OS). Fast forward […]

The post Your Phone is Giving Away More Than You Ever Bargained For appeared first on Purism.

15 July, 2024 12:05AM by Randy Siegel

Federal Government Mobility Veteran Randy Siegel Joins Purism

Purism is pleased to announce that long-time mobility industry insider Randy Siegel has joined the company to direct our strategic government business development efforts. “Adding Randy to lead our secure mobile offering is a natural fit to expand our governmental focused Liberty Phone manufactured at our facility on US Soil. We are extremely excited that […]

The post Federal Government Mobility Veteran Randy Siegel Joins Purism appeared first on Purism.

15 July, 2024 12:00AM by Purism

July 14, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Salih Emin: uCareSystem 24.07.14: Improved System Restart Detection

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart ? Why though ? uCareSystem has had […]

14 July, 2024 05:03PM

July 13, 2024

hackergotchi for Whonix

Whonix

Whonix 17.2.0.1 - All Platforms - Point Release!

Download

(What is a point release?)


Upgrade

Alternatively, an in-place release upgrade is possible using the Whonix repository.


This release would not have been possible without the numerous supporters of Whonix!


Please Donate!


Please Contribute!


Major Changes


Full difference of all changes

https://github.com/Whonix/derivative-maker/compare/17.1.3.1-developers-only…17.2.0.1-developers-only


13 July, 2024 10:26PM by Patrick

hackergotchi for Qubes

Qubes

Qubes OS 4.2.2 has been released!

We’re pleased to announce the stable release of Qubes OS 4.2.2! This patch release aims to consolidate all the security patches, bug fixes, and other updates that have occurred since the previous stable release. Our goal is to provide a secure and convenient way for users to install (or reinstall) the latest stable Qubes release with an up-to-date ISO. The ISO and associated verification files are available on the downloads page.

What’s new in Qubes 4.2.2?

For more information about the changes included in this version, see the Qubes OS 4.2 release notes and the full list of issues completed since the previous stable release.

Copying and moving files between qubes is less restrictive

Qubes 4.2.2 includes a fix for #8332: File-copy qrexec service is overly restrictive. As explained in the issue comments, we introduced a change in Qubes 4.2.0 that caused inter-qube file-copy/move actions to reject filenames containing, e.g., non-Latin characters and certain symbols. The rationale for this change was to mitigate the security risks associated with unusual unicode characters and invalid encoding in filenames, which some software might handle in an unsafe manner and which might cause confusion for users. Such a change represents a trade-off between security and usability.

After the change went live, we received several user reports indicating more severe usability problems than we had anticipated. Moreover, these problems were prompting users to resort to dangerous workarounds (such as packing files into an archive format prior to copying) that carry far more risk than the original risk posed by the unrestricted filenames. In addition, we realized that this was a backward-incompatible change that should not have been introduced in a minor release in the first place.

Therefore, we have decided, for the time being, to restore the original (pre-4.2) behavior by introducing a new allow-all-names argument for the qubes.Filecopy service. By default, qvm-copy and similar tools will use this less restrictive service (qubes.Filecopy +allow-all-names) whenever they detect any files that would have been blocked by the more restrictive service (qubes.Filecopy +). If no such files are detected, they will use the more restrictive service.

Users who wish to opt for the more restrictive 4.2.0 and 4.2.1 behavior can do so by modifying their RPC policy rules. To switch a single rule to the more restrictive behavior, change * in the argument column to + (i.e., change “any argument” to “only empty”). To use the more restrictive behavior globally, add the following “deny” rule before all other relevant rules:

qubes.Filecopy    +allow-all-names    @anyvm    @anyvm    deny

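To illustrate the single-rule change described above, an “ask” rule for qubes.Filecopy would change from

qubes.Filecopy    *    @anyvm    @anyvm    ask

to

qubes.Filecopy    +    @anyvm    @anyvm    ask

(The rule shown here is only an example; the rules in your policy files may differ.)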
For more information, see RPC policies and Qube configuration interface.

How to get Qubes 4.2.2

You have a few different options, depending on your situation:

  • If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.2.2 ISO and follow our installation guide.

  • If you’re currently on Qubes 4.1, learn how to upgrade to Qubes 4.2.

  • If you’re currently on Qubes 4.2 (including 4.2.0, 4.2.1, and 4.2.2-rc1), update normally (which includes upgrading any EOL templates you might have) in order to make your system essentially equivalent to the stable Qubes 4.2.2 release. No reinstallation or other special action is required.

In all cases, we strongly recommend making a full backup beforehand.

Reminder: new signing key for Qubes 4.2

As a reminder for those upgrading from Qubes 4.1 and earlier, we published the following special announcement in Qubes Canary 032 on 2022-09-14:

We plan to create a new Release Signing Key (RSK) for Qubes OS 4.2. Normally, we have only one RSK for each major release. However, for the 4.2 release, we will be using Qubes Builder version 2, which is a complete rewrite of the Qubes Builder. Out of an abundance of caution, we would like to isolate the build processes of the current stable 4.1 release and the upcoming 4.2 release from each other at the cryptographic level in order to minimize the risk of a vulnerability in one affecting the other. We are including this notice as a canary special announcement since introducing a new RSK for a minor release is an exception to our usual RSK management policy.

As always, we encourage you to authenticate this canary by verifying its PGP signatures. Specific instructions are also included in the canary announcement.

As with all Qubes signing keys, we also encourage you to authenticate the Qubes OS Release 4.2 Signing Key, which is available in the Qubes Security Pack (qubes-secpack) as well as on the downloads page.

What is a patch release?

The Qubes OS Project uses the semantic versioning standard. Version numbers are written as <major>.<minor>.<patch>. Hence, we refer to releases that increment the third number as “patch releases.” A patch release does not designate a separate, new major or minor release of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.2) inclusive of all updates up to a certain point. (See supported releases for a comprehensive list of major and minor releases.) Installing the initial Qubes 4.2.0 release and fully updating it results in essentially the same system as installing Qubes 4.2.2. You can learn more about how Qubes release versioning works in the version scheme documentation.

13 July, 2024 12:00AM

July 12, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Announcing Incus 6.3

This release includes the long-awaited OCI/Docker image support!
With this, users who previously ran Docker alongside Incus, or Docker inside of an Incus container, just to run some pretty simple software that’s only distributed as OCI images, can now do it directly in Incus.

In addition to the OCI container support, this release also comes with:

  • Baseline CPU definition within clusters
  • Filesystem support for io.bus and io.cache
  • Improvements to incus top
  • CPU flags in server resources
  • Unified image support in incus-simplestreams
  • Completion of libovsdb transition

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects are also always appreciated. You can find me on GitHub Sponsors, Patreon and Ko-fi.

Enjoy!

12 July, 2024 05:05PM

Simos Xenitellis: Running OCI images (i.e. Docker) directly in Incus

One of the cool new features in Incus 6.3 is the ability to run OCI images (such as those for Docker) directly in Incus.

You can certainly install the Docker packages in an Incus instance, but that would mean running a container inside a container. Why not let Docker be a first-class citizen in the Incus ecosystem?

Note that this feature is new to Incus, which means that if you encounter issues, please discuss and report them.

Launching the docker.io nginx OCI container image in Incus.


Background

In Incus you typically run system containers, which are containers that have been set up to resemble a virtual machine (VM). That is, you launch a system container in Incus, and this system container keeps running until you stop it, just like a VM.

In contrast, with Docker you are running application containers. You launch the Docker container with some configuration to perform a task, the task is performed, and the container stops. The task might also be something long-lived, like a Web server. In that case, the application container will have a longer lifetime. With application containers you are thinking primarily about tasks. You stop the task and the container is gone.
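To make the distinction concrete, here is a minimal sketch (the instance names are illustrative, and the docker: remote is the one we configure below):

$ incus launch images:ubuntu/22.04 mysystem    # system container: keeps running until you stop it
$ incus launch docker:nginx myapp              # application container: lives as long as its task does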

Prerequisites

You need to install Incus 6.3. If you are using Debian or Ubuntu, you would select the stable repository of Incus.

$ incus version
Client version: 6.3
Server version: 6.3
$

Adding the Docker repository to Incus

The container images from Docker follow the Open Container Initiative (OCI) image format. There is also a specific way to access those images, through the Docker Hub container image repository, which is distinct from the other ways supported by Incus.

We will be adding (once only) a remote for the Docker repository. A remote is a configuration to access images from a particular repository of such images. Let’s see what we already have. We run incus remote list, which invokes the list command of the remote-management functionality (incus remote). There are two remotes: images, which is the standard repository of container and virtual machine images for Incus, and local, which is the remote of the local installation of Incus. Every installation of Incus has such a default remote.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

The Docker repository has the URL https://docker.io and is accessed through a different protocol, called oci.

Therefore, to add the Docker repository, we run incus remote add with the appropriate parameters: the URL is https://docker.io and the --protocol is oci.

$ incus remote add docker https://docker.io --protocol=oci
$

Let’s list again the available Incus remotes. The docker remote has been added successfully.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| docker          | https://docker.io                  | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

If we ever want to remove a remote called myremote, we would incus remote remove myremote.

Launching a Docker image in Incus

When you launch (install and run) a container in Incus, you use incus launch with the appropriate parameters. In this first example, we launch the image hello-world, which is one of the Docker official images. When it runs, it prints some text and then stops. We use the --console parameter in order to see the text output. Finally, we use the --ephemeral parameter, which automatically deletes the container as soon as it stops. Ephemeral (εφήμερο) is a Greek word meaning that something lasts only a brief time. Neither of these two additional parameters is essential, but both are helpful in this specific case.

$ incus launch docker:hello-world --console --ephemeral
Launching the instance
Instance name is: best-feature                       

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

$ 

Note that we did not specify a name for the container; Incus randomly generated the name best-feature. Since this is an ephemeral container and the hello-world image is short-lived, it is gone in a flash. Indeed, if you then run incus list, the Docker container is not found because it has been auto-deleted.

Let’s try another Docker image, the official nginx image, which serves the default nginx Web server. We need to run incus list to find the IP address that was given to the container. Then we can view the default page of the Web server in our Web browser.

$ incus launch docker:nginx --console --ephemeral
Launching the instance
Instance name is: best-feature                               
To detach from the console, press: <ctrl>+a q
2024/07/12 16:31:59 [notice] 21#21: using the "epoll" event method
2024/07/12 16:31:59 [notice] 21#21: nginx/1.27.0
2024/07/12 16:31:59 [notice] 21#21: built by gcc 12.2.0 (Debian 12.2.0-14) 
2024/07/12 16:31:59 [notice] 21#21: OS: Linux 6.5.0-41-generic
2024/07/12 16:31:59 [notice] 21#21: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/07/12 16:31:59 [notice] 21#21: start worker processes
2024/07/12 16:31:59 [notice] 21#21: start worker process 45
2024/07/12 16:31:59 [notice] 21#21: start worker process 46
2024/07/12 16:31:59 [notice] 21#21: start worker process 47
2024/07/12 16:31:59 [notice] 21#21: start worker process 48
2024/07/12 16:31:59 [notice] 21#21: start worker process 49
2024/07/12 16:31:59 [notice] 21#21: start worker process 50
2024/07/12 16:31:59 [notice] 21#21: start worker process 51
2024/07/12 16:31:59 [notice] 21#21: start worker process 52
2024/07/12 16:31:59 [notice] 21#21: start worker process 53
2024/07/12 16:31:59 [notice] 21#21: start worker process 54
2024/07/12 16:31:59 [notice] 21#21: start worker process 55
2024/07/12 16:31:59 [notice] 21#21: start worker process 56
10.10.10.1 - - [12/Jul/2024:16:33:29 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" "-"
...
^C
...
2024/07/12 16:39:10 [notice] 21#21: signal 29 (SIGIO) received
2024/07/12 16:39:10 [notice] 21#21: signal 17 (SIGCHLD) received from 55
2024/07/12 16:39:10 [notice] 21#21: worker process 55 exited with code 0
2024/07/12 16:39:10 [notice] 21#21: exit
Error: stat /proc/-1: no such file or directory
$ 
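While the nginx container is running, you can check from another terminal that the Web server responds. The IP address below is illustrative; take the real one from the IPV4 column of incus list:

$ incus list best-feature
$ curl http://10.10.10.77/

The output of curl should be the HTML of the default nginx welcome page.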

Discussion

How long is the lifecycle time of a minimal Docker container?

How long does it take to launch the docker:hello-world container? I prepend time to the command and check the stats at the end of the execution. It takes about four seconds to launch and run a simple Docker container, on my system and my Internet connection (with the image already cached).

$ time incus launch docker:hello-world --console --ephemeral
...
real	0m3,956s
user	0m0,016s
sys	0m0,016s
$ 

How long does it take to repeatedly run a minimal Docker container?

We remove the --ephemeral option, launch the container, and give it a relevant name, mydocker. This container remains after execution; we can see that after launching it, the container stays in a STOPPED state. We then start the container with the command time incus start mydocker --console, and we can see that it takes a bit more than half a second to complete the execution.

$ incus launch docker:hello-world mydocker --console
Launching mydocker

Hello from Docker!
...
$ incus list mydocker
+----------+---------+------+------+-----------------+-----------+
|   NAME   |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+----------+---------+------+------+-----------------+-----------+
| mydocker | STOPPED |      |      | CONTAINER (APP) | 0         |
+----------+---------+------+------+-----------------+-----------+
$ time incus start mydocker --console

Hello from Docker!
...

Hello from Docker!
...
real	0m0,638s
user	0m0,003s
sys	0m0,013s
$ 

Are the container images cached?

Yes, they are cached. When you launch a container for the first time, you can visibly see the download of the image components. Also, if you run incus image list, you can see the cached docker.io images in the output.

Troubleshooting

Error: Can’t list images from OCI registry

You tried to list the images of the Docker Hub repository. Currently, this is not supported. Technically, it could be supported, though the repository has more than ten thousand images, so listing without parameters may not make much sense: the output would require downloading the full list of images from the repository. Searching for images does make sense, but if you can search, you should be able to list as well. I am not sure what the maintainers’ view on this is.

$ incus image list docker:
Error: Can't list images from OCI registry
$ 

As a workaround, you can simply locate the image name from the Website at https://hub.docker.com/search?q=&image_filter=official

Error: stat /proc/-1: no such file or directory

I ran many, many instances of Docker containers, and in a few cases I got the above error. I do not know what causes it, and I am adding it here in case someone manages to replicate it. It feels like some kind of race condition.

Failed getting remote image info: Image not found

You have configured Incus properly and you definitely have Incus version 6.3 (both the client and the server). But still, Incus cannot find any Docker image, not even hello-world.

This can happen if you are using a packaging of Incus that does not include the skopeo and umoci packages. If you are using the Zabbly distribution of Incus, these programs are included in the Incus packaging. Therefore, if you are using alternative packaging for Incus, you can manually install the versions of those packages as provided by your Linux distribution.

12 July, 2024 04:50PM

Ubuntu Blog: Managing OTA and telemetry in always-connected fleets

If you’ve been reading my blogs for the past two years, you know that the automotive industry is probably the most innovative one today. As a matter of fact, some of the biggest company valuations revolve around electric vehicles (EVs), autonomous driving (AD) and artificial intelligence (AI). As with any revolution, this one comes with its set of challenges. 

I’ve noticed that the most difficult technologies to grasp and master aren’t always the ones that seem the most complex at first sight. Over-the-air (OTA) updates and managing the telemetry of a fleet are typically some of the most promising, but also deceptively complicated, technologies in automotive today.

The unspoken power of OTA

Over-the-air updates have the potential to completely reshuffle the cards when it comes to the value of a vehicle. By enabling remote software and firmware updates, OTA updates not only eliminate the need for physical intervention, but also deliver security fixes as well as new features, making your car feel brand new. When applied throughout an entire fleet, you can imagine the savings, but also the value added to vehicles driving on the road today.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXeGHajtAJaCLVXjaedrILZ3xOKbIaPwRce-l-heooQYsvuhQuxgXP72roJ5tPBuGBlDQWX43Bp6NSfC9bRaBcpOkjjzsfKGGuZ8zWUWpzHAD-7oP9m-aGfyA2SIQOvUV9IQgi0sZWHqZcu-pXnlo83eYG_e?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

By minimizing the need for vehicles to be taken to dealerships or garages for updates and maintenance, OTA updates ensure higher availability of vehicles. Similarly, by adding new features and improvements to your vehicles, you enhance the overall user experience.

You’ve probably heard of new cybersecurity automotive regulatory requirements that OEMs need to comply with, like ISO 21434. OTA updates can help OEMs meet these regulatory requirements, by remotely ensuring that vehicles have the latest and greatest security patches against vulnerabilities and cybersecurity threats.

At Canonical, we believe that one of the most effective ways to manage OTA updates is through the use of a Dedicated Snap Store. It provides a centralised platform for distributing and managing your software packages, while keeping the whole process secure. As a single point of control for your software updates, it makes it easier to manage and deploy these packages across your fleet.

For example, an OEM could use a Snap Store to manage updates for various car models and versions, allowing precise control over which updates are deployed to which variant, so that each vehicle receives the updates needed for its specific configuration.

All of these updates follow very strict security guidelines, ensuring that only authorised packages are delivered to your vehicles. By using delta mechanisms, you can optimise your download sizes and update times, which is especially useful for large fleets.

Telemetry for enhanced efficiency

Telemetry involves collecting data from vehicles. This data can be used to improve the vehicles themselves, fleet operations, efficiency, and more. With fine-tuned telemetry parameters, you can obtain the relevant data at the right frequency to optimise analytics.

Whether you want to track vehicle location, speed, fuel consumption, or ECU diagnostics, you want to make sure that you are monitoring your fleet with best-in-class performance. One of the most common use cases that justifies the investment in fleet telemetry is predictive maintenance. By analysing data from your fleet, you can predict the need for repairs and proactively schedule maintenance. This helps you reduce the downtime of your fleet, and enables you to extend the lifetime of your vehicles.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXddYbK-DNnU6KdrVDrAbT9kGzI3ZY8aQfwD1TfJ1lRKF2Jc4SmctrUdLu6itdKGuhlrDC9A5M3TnP17909RLX2HYOnkz-7pnSdBVgXgmA8JyXKAv-N0CBnOaTK8TU-faVtQ1mR1266HGFJ_Y58OGj90y0eZ?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

A different use case relies on driver behaviour monitoring. Although this use case is often frowned upon, from an insurance company’s perspective it can provide a lot of value: analysing this data can lead to more accurate premiums based on actual vehicle usage and driving behaviour, as well as potential cost savings.

Yet another use case that is frequently mentioned when it comes to fleet management is route optimisation. By combining traffic information and geolocation data, it becomes possible to find highly optimised routes, reducing travel time and fuel consumption.

From interoperability to security, managing a large fleet of vehicles comes with its challenges. In fact, integrating data generated by very diverse systems and components from different OEMs can be extremely challenging. It’s important that your solutions abstract that complexity and ensure seamless communication and data exchange.

Cybersecurity-wise, having vehicles constantly connected means that threats can appear at any time. Security needs to be applied from the ground up, from the vehicles to the cloud. Moreover, your backend will sometimes handle confidential information. The way you decide to collect, store and analyse these large amounts of data requires advanced frameworks and tools.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXdc1HXpcOz0YYNeDExtRRG1yerWG0zuSEiaFkBDAktd_u6m9lKDyFrxCMxp6jkuHN_a7RHAGw2Sct6f46ZZcrCSDNMmxJoPAsfq-PcMtae_QkXr0Nxo3Cogh6vaNvxwOG4loyczYf541vJECBZ58lhXXbk?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

Open source solutions offer several advantages for fleet management, including scalability, security, and interoperability. Community-driven development offers continuous improvements and optimisations, while transparent development processes can further improve security and flexibility.

Driving towards excellence by future-proofing fleet management

The automotive industry’s shift to software-driven operations necessitates a deep understanding of interconnected systems. OTA updates and fleet telemetry are at the front lines of this transformation, offering substantial benefits in terms of efficiency, security, and operational excellence. 

By taking advantage of Dedicated Snap Stores, edge computing, and open source solutions, automotive companies can embrace these challenges with confidence, benefiting from the full potential of open source software.

As the industry opens up to this software revolution, stakeholders must understand the intricacies of these complex systems. Failure to do so could lead to compromised security. Our latest white paper aims to address this knowledge gap and empower you to understand the full scope of the possible existing solutions.

If you want to learn more about achieving effective V2X communication, understanding OTA updates, and overcoming fleet management challenges, I recommend you download our white paper. This guide will help you understand the intricacies of these challenging technologies that are pushing the automotive software landscape forward.

To learn more about Canonical and our engagement in automotive: 

Contact Us

Check out our webpage

Watch our webinar with Elektrobit about SDV

Download our whitepaper on V2X (Vehicle-to-Everything)


12 July, 2024 08:00AM

July 11, 2024

hackergotchi for SparkyLinux

SparkyLinux

Zed

There is a new application available for Sparkers: Zed What is Zed? Installation (Sparky 7 & 8 amd64): License: GNU AGPL/GPL, Apache Web: github.com/zed-industries/zed …

Source

11 July, 2024 07:13PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Ubuntu Studio 23.10 Has Reached End-Of-Life (EOL)

As of July 11, 2024, all flavors of Ubuntu 23.10, including Ubuntu Studio 23.10, codenamed “Mantic Minotaur”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 24.04 LTS via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.
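If you prefer the command line over the graphical upgrader, the standard upgrade path is a single command; a quick sketch (back up your data first, and treat the linked instructions as the authoritative route for Ubuntu Studio):

    # Upgrade to the next available release (24.04 LTS from 23.10)
    sudo do-release-upgrade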

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that are between the Long-Term Support releases, are supported for 9 months and users are expected to upgrade after every release with a 3-month buffer following each release.

Long-Term Support releases are identified by an even-numbered year of release and a month of release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases for official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five), meaning flavor LTS users are expected to upgrade after every LTS release with a one-year buffer.

11 July, 2024 03:27PM

Ubuntu Blog: Charmed Kubeflow 1.9 Beta is here: try it out

After releasing a new version of Ubuntu every six months for 20 years, it’s safe to say that we like keeping our traditions. Another of those traditions is our commitment to giving our Kubeflow users early access to the latest version – and that promise still stands. Kubeflow 1.9 is due out in a couple of weeks, and that can only mean one thing: Canonical has just released its Charmed Kubeflow beta. Are you ready to try it out? 

If you can put some time aside, we’re looking for data scientists, ML engineers, MLOps experts, creators and AI enthusiasts to take Charmed Kubeflow 1.9 for a ride and share their feedback with us. You can really help us by:

  • Trying it out and letting us know how the experience goes
  • Asking us any questions about the product and how it works
  • Reporting bugs or any issues you have with Charmed Kubeflow 1.9 Beta (and beyond)
  • Giving us improvement suggestions for the product and portfolio  

What’s new in Kubeflow 1.9?

Kubeflow is now going through the CNCF process to graduate from the incubation program. This challenges the community to evolve quickly and work on different aspects of the project:

  • Improving the MLOps platform’s security features
  • Adding new capabilities to the project
  • Centralising communication channels and growing the community

Security as a priority

This release has the first updates from the security working group. One of the key features that the group announced concerns Network Policies, which control the traffic flow at the IP address or port level. They will be enabled as a second security layer for core services to give users a better network overview and segmentation in line with common enterprise security guidelines.

ML integrations as part of a growing ecosystem

Kubeflow is designed to work in partnership with other ML & data tools. The latest release brings new integrations with leading ML tools and libraries, such as BentoML, used for inference, or Ray, for training LLMs. One long-standing bug that Charmed Kubeflow users reported was related to accessing MLflow when deployed alongside the MLOps platform. Charmed Kubeflow 1.9 solves this issue and gives users clear guidance on how to use it.

Community growth

As part of CNCF, the community aims to integrate better into the ecosystem and enable new contributors to the project. One of the changes that the upstream community just made was to move to the CNCF Slack channel. Join us there to get in touch with a vibrant community and learn more from some of the industry experts.

We’re going live; join our MLOps tech talk.

Speaking of traditions, you might already know that all our betas bring the product engineering team live for a tech talk. This time is no exception, and I’ll be joined by two new faces. Michal Hucko and Orfeas Kourkakis, Software Engineers at Canonical, are ready to talk to Kubeflow users tomorrow, 11 July 2024, at 5 PM CET about the platform, the latest news, and how the industry is being shaped. Join us live, and you will:

  • Learn about the latest release and how our distribution handles it
  • Discover the key features covered in Charmed Kubeflow 1.9, in upstream and beyond.
  • Understand the differences between the upstream release and Canonical’s Charmed Kubeflow.
  • Get answers to any other question, technical or not, you have about MLOps, open source or Canonical’s portfolio.

 Don’t wait any longer, and add the event to your calendar!

Charmed Kubeflow 1.9 is out. Try it now!

Are you already a Charmed Kubeflow user?

Your job is even easier since you will only have to upgrade to the latest version to try the 1.9 beta. We’ve already prepared a guide with all the steps you need to take. 

Please be mindful that this is not a stable version, so there is always a risk that something might go wrong. Save your work and proceed with caution. If you encounter any difficulties, Canonical’s MLOps team is here to hear your feedback and help you out. Since this is a Beta version, Canonical does not recommend running or upgrading it on any production environment.

Are you new to Charmed Kubeflow?

Now, I can tell you are a real adventurer. Welcome to the MLOps world! Starting with a beta release might result in a few more challenges for you, but it’ll give you the chance to share in the product development and really contribute to the open source world. For all the prerequisites, check out the getting started tutorial.

After you install MicroK8s and Juju, you will need to add the Kubeflow model and then make sure you deploy the latest version. Follow the instructions below to get this up and running:

juju deploy kubeflow --channel 1.9/beta --trust
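Deployment takes a while to settle; a quick way to watch progress is Juju's status watch (a sketch, assuming the charm was deployed into a model named kubeflow as per the tutorial):

    # Watch all charms come up; everything should eventually report active/idle
    juju status --watch 5s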

Now, you can go back to the tutorial to finish the configuration of Charmed Kubeflow or read the documentation to learn more.

You tried it out – what do you think?

You are part of something really important for us. As with any other open source project, joining a beta gives you a glimpse of the latest innovations, and it also gives you the chance to shape the product. Let’s make Charmed Kubeflow 1.9 better together.

11 July, 2024 11:52AM

hackergotchi for GreenboneOS

GreenboneOS

Vulnerability scanner Notus supports Amazon Linux

Most virtual servers in the Amazon Elastic Compute Cloud EC2 run a version of Linux that has been specially customised for the needs of the cloud. The latest generation of scanners from Greenbone has also been available for the Amazon Web Services operating system for a few weeks now. Over 1,900 additional, customised tests for the latest versions of Amazon Linux (Linux 2 and Linux 2023) have been integrated in recent months, explains Julio Saldana, Product Owner at Greenbone.

Significantly better performance thanks to Notus

Greenbone has been supplementing its vulnerability management with the Notus scan engine since 2022. The innovations in the architecture are primarily aimed at significantly increasing the performance of the security checks. Described as a “milestone” by Greenbone CIO Elmar Geese, the new scanner generation works in two parts: A generator queries the extensive software version data from the company’s servers and saves it in a handy JSON format. Because this no longer happens at runtime, but in the background, the actual scanner (the second part of Notus) can simply read and synchronise the data from the JSON files in parallel. Waiting times are eliminated. “This is much more efficient, requires fewer processes, less overhead and less memory,” explain the Greenbone developers.

Amazon Linux

Amazon Linux is a fork of Red Hat Linux sources that Amazon has been using and customising since 2011 to meet the needs of its cloud customers. It is largely binary-compatible with Red Hat, initially based on Fedora and later on CentOS. Amazon Linux was followed by Amazon Linux 2, and the latest version is now available as Amazon Linux 2023. The manufacturer plans to release a new version every two years. The version history of the official documentation also includes a feature comparison, as the differences are significant: Amazon Linux 2023 is the first version to also use Systemd, for example. Greenbone’s vulnerability scan was also available on Amazon Linux from the very beginning.

11 July, 2024 10:38AM by Markus Feilner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Bringing Real-time Ubuntu to Amazon EKS Anywhere customers with Ubuntu Pro

Earlier this year at Mobile World Congress (MWC) 2024 in Barcelona, Canonical announced the availability of Real-time Ubuntu on Amazon Elastic Kubernetes Services Anywhere (EKS Anywhere). With this technology enablement, a telecom operator can confidently run its Open Radio Access Network (RAN) software workloads on Amazon EKS Anywhere, thanks to the necessary support for ultra low-latency data processing with high reliability in Real-time Ubuntu.

The enablement work was part of a collaboration between partner companies to make Amazon EKS Anywhere an ideal platform for Open RAN workloads. At the MWC event, leaders from these companies discussed what has been achieved and what the future holds. These discussions focused on the roadmap to achieving successful Open RAN deployments on Amazon EKS Anywhere. The panel included representatives from NTT DOCOMO, Qualcomm, NEC and Canonical, and was moderated by AWS. In this blog, we’ll run through how Canonical engineers helped bring this work to fruition, and the advantages of having Real-time Ubuntu on Amazon EKS Anywhere.

Why Open RAN matters in telecom

Telecom operators constantly seek innovative ways to launch new services that generate revenue while simultaneously reducing operational costs. With the right infrastructure solutions, Open RAN offers operators an ecosystem where technology providers can deliver cost-effective solutions that are interoperable through open and standardised APIs. This technology enables telecom operators to run their RAN systems as disaggregated and distributed components across their edge infrastructure. Implementing virtual RAN functions as software enhances flexibility and operational efficiency in network operations.

What is a real-time kernel?

A real-time OS kernel enables the deployment of software applications that require bounded, low-latency responses from the operating system kernel when executing the application.

For telecom applications with strict latency requirements, a real-time OS kernel is now essential. It significantly enhances performance by efficiently processing and delivering information between applications, external systems, and devices.
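On Ubuntu, this capability ships as Real-time Ubuntu via an Ubuntu Pro subscription; a minimal sketch of enabling it and confirming that the PREEMPT_RT kernel is running (assuming the machine is already attached to a Pro subscription):

    # Enable the real-time kernel, then reboot
    sudo pro enable realtime-kernel

    # After rebooting, the kernel version string should mention PREEMPT_RT
    uname -a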

How Amazon EKS Anywhere benefits from Real-time Ubuntu

RAN software workloads are highly sensitive to delays in information processing and delivery. This makes it essential to integrate real-time kernel capabilities into Amazon EKS Anywhere for telecom operators like NTT DOCOMO. At their edge locations, these capabilities are crucial for their Open RAN deployments, enabling virtual RAN functions to operate effectively. In fact, beyond virtual RAN workloads, a real-time OS kernel is crucial for enabling any time-sensitive business application on cloud-native infrastructure like Amazon EKS Anywhere, a need that is addressed by Canonical’s Real-time Ubuntu.

The enablement journey

During the panel at the Mobile World Congress, Arno Van Huysteen (Chief Technology Officer for Telco at Canonical) highlighted the collaborative work of the partner companies and the ability to create innovative solutions to deliver what customers truly need. For example, Canonical engineers enabled out-of-tree kernel drivers in an integrated system, and our approach ensured the process was as smooth as possible.

Canonical engineers provided precise guidelines to AWS engineers on how to fine-tune the Ubuntu real-time kernel for the set of necessary boot parameters and the bindings between processes and specific CPUs, to achieve superior performance in Amazon EKS Anywhere operational environments. This process is publicly documented to assist customers and partners in executing similar workloads that necessitate a real-time kernel. 
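While the precise values are workload- and hardware-specific, the general shape of this tuning is to dedicate a set of CPUs to the latency-sensitive processes via kernel boot parameters; a representative sketch with illustrative CPU ranges, not the documented EKS Anywhere values:

    # /etc/default/grub: isolate CPUs 2-5 from the scheduler and offload their housekeeping
    GRUB_CMDLINE_LINUX="isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5"

    # Regenerate the bootloader configuration and reboot for the change to take effect
    sudo update-grub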

In addition to optimising the operating system, Canonical engineers also worked with Amazon EKS Anywhere, a Kubernetes distribution, to port any necessary changes and integrations into Ubuntu Pro, Canonical’s subscription service for open source software security. With Ubuntu Pro, Ubuntu machines receive Expanded Security Maintenance (ESM) for over 25,000 software packages. 

Extensive testing was conducted across multiple environments where Amazon EKS Anywhere can operate to ensure smooth operations in various deployment scenarios. With Amazon EKS Anywhere seamlessly integrated with Ubuntu Pro images, operators can now take advantage of real-time processing capabilities, combined with the most comprehensive security and compliance support for open source software in Linux. This makes Ubuntu Pro on Amazon EKS Anywhere an excellent platform for telecom operators seeking secure systems with high performance. It will promptly facilitate Open RAN deployment for telecom operators, including NTT Docomo, who have chosen EKS Anywhere with Real-time Ubuntu.


Learn more about Canonical’s solutions for telco

To discover more about Real-time Ubuntu and its advantages for telecommunications networks and applications, visit our blog. For additional details about our telecommunications services, please visit https://ubuntu.com/telco.

11 July, 2024 09:00AM

hackergotchi for Deepin

Deepin

deepin V23 Successfully Adapted to EIC7700X !

Recently, the deepin community announced the successful adaptation of the Eswin Computing EIC7700X, achieving the stable operation of the RISC-V version of deepin V23. This move once again confirms deepin's commitment to, and strength in, the RISC-V ecosystem, and also opens the door to a brand new desktop experience for developers and users.   EIC7700X The EIC7700X is a powerful RISC-V intelligent computing System on Chip (SoC), equipped with a 64-bit out-of-order execution RISC-V processor and a proprietary high-efficiency Neural Processing Unit (NPU), offering a peak computing power of 19.95 TOPS, with the dual-die version reaching up to 39.9 TOPS. It ...Read more

11 July, 2024 02:38AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How often do you apply security patches on Linux?


Regular patching is essential for maintaining a secure environment, but there is no one-size-fits-all approach to keeping your Linux estate safe. So how do you balance the frequency of updates with operational stability? There are strategies for enabling security patching automations in a compliant and safe way, even for the most restrictive and regulated environments. Understanding Canonical’s release schedules for software updates and knowing the security patching coverage windows are essential pieces of information when defining a security patching strategy. I recently hosted a live webinar and security Q&A where I explained how to minimise your patching cadence, or alternatively minimise the time during which unpatched vulnerabilities can be exploited. In this article, I’ll summarise my main points from that webinar and explain the most vital considerations for update scheduling. 

Security patching for the Linux kernel

There are two types of kernels available in Ubuntu, and there are two ways these kernels can be packaged. The two types are the general availability (GA) kernel and the variant kernel. The two package types are Debian packages and snap packages. The GA kernel is the kernel version included on day 1 of an Ubuntu LTS release. Every Ubuntu LTS release receives a point release update every February and August, and typically there are 5 point releases. Ubuntu Server defaults to staying on the GA kernel for the lifetime of the Ubuntu Pro coverage for that release. Ubuntu Desktop defaults to upgrading the kernel from the second point release onwards, to a later version of the upstream kernel, referred to as the hardware enablement (HWE) kernel.

Security coverage for the GA kernel extends for the lifetime of the Ubuntu Pro coverage for that release. Security coverage for the HWE kernel extends for the lifetime of the HWE kernel, which is 6 months, plus another 3 months. This additional 3 months of security coverage beyond the end of life of the HWE kernel gives users a window to perform an upgrade to the next HWE kernel.
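To check which kernel an instance is actually running, a couple of standard commands help; a quick sketch (hwe-support-status ships with update-manager-core and may be absent on minimal images):

    # Show the running kernel version and flavour
    uname -r

    # List installed GA/HWE kernel metapackages
    apt list --installed 2>/dev/null | grep linux-generic

    # Report whether the hardware enablement stack is in use
    hwe-support-status --verbose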

Is a reboot required to apply a security patch?

When the kernel package is updated, the Ubuntu instance must be rebooted to load the patched kernel into memory. When the general availability kernel is installed as a snap, updates to this snap package will reboot the device. When the general availability kernel is installed as a deb package, a reboot is not automatically performed, but it must be performed to apply the security patch.

There are some other packages in Ubuntu that also require a reboot when they are updated. Any security updates to glibc, libc, CPU microcode, or the GRUB bootloader will require a reboot to take effect. Software which runs as a service will require those services to restart when security patches are applied; examples of such software include ssh, web servers, and others. Other software which doesn’t run as a service, and runs on demand, doesn’t require a system reboot or service restart for patches to take effect.
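On Ubuntu, packages that need a reboot record that fact on disk, which makes the check easy to script; a minimal sketch (the ssh service name is the Ubuntu one):

    # Check whether any updated package requires a reboot
    if [ -f /var/run/reboot-required ]; then
        echo "Reboot required by:"
        cat /var/run/reboot-required.pkgs
    fi

    # Restart a patched service without a full reboot, e.g. OpenSSH
    sudo systemctl restart ssh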

The Livepatch service will apply high and critical security patches to the running kernel, in memory. It will not upgrade the installed kernel package, so rebooting the machine and clearing its memory would result in removal of the Livepatch applied security patch. Livepatch provides 13 months of security patches to a GA kernel, and 9 months of security patches to an HWE kernel. After these 13- or 9-month windows, the kernel package must be upgraded and the Ubuntu instance must be rebooted, for continued security coverage through Livepatch.
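Livepatch is delivered through Ubuntu Pro; a minimal sketch of enabling it and inspecting the patch state of the running kernel (<your-token> is a placeholder for your own Pro token):

    # Attach the machine to Ubuntu Pro, then enable Livepatch
    sudo pro attach <your-token>
    sudo pro enable livepatch

    # Show which kernel CVEs have been live-patched
    canonical-livepatch status --verbose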

3 approaches for security patching

Canonical provides an array of tools and services, including Livepatch, Landscape, Snaps, and command line utilities such as unattended-upgrade. These tools and services can be used together, or selectively, and they provide security patching automation capabilities in Ubuntu. You have the flexibility to use these tools to achieve a variety of different security patching goals on desktop, server, and IoT devices. Given the assumption that nobody wants to run software that isn’t being secured or supported by its maintainers, you may prefer one of the following security patching approaches:

  1. Delay security patching for as long as possible, to achieve maximum procrastination.
  2. Perform security patching with the least frequency, but on a predefined regularly recurring schedule.
  3. Minimise the window of time during which systems can be exploited by a vulnerability, by reducing the time between the release of a security patch and its installation.

Regardless of which approach is used, it’s worth noting that unscheduled security maintenance windows may be required, to patch security vulnerabilities in glibc, libc, or CPU microcode.

Security Patching Automations on Linux

The best practices for scheduling security patching automations webinar dives deeper into the rationale behind the implementation details of these 3 security patching approaches.

Access the Webinar

Security patching for procrastinators

With Livepatch enabled, Ubuntu LTS instances that stay on the GA kernel need to be upgraded and rebooted every 13 months. When running on an HWE kernel, the first upgrade and reboot must happen after 13 months, in May. Another upgrade and reboot must happen after 6 months, in November. After another upgrade and reboot activity in the subsequent May and November, the HWE kernel will be promoted to the next GA kernel. The GA kernel will need an upgrade and reboot every 13 months.

Many months may pass between upgrades and reboots, so procrastinating in this way maximises the amount of time that medium-and-below kernel vulnerabilities remain unpatched.

Security patching on a recurring schedule

If a GA kernel is being used, an annual patching cadence can work. Taking the GA kernel’s 13-month Livepatch security coverage window into account, installing security patches and rebooting every May should keep the kernel and other packages on the machine in secure shape.

Assuming the HWE kernel is being used, an annual patching cadence cannot apply security patches in the same month year after year: this would result in a lapse in security patching coverage for the kernel for a period of time. Excluding the third year of an Ubuntu LTS release, when the fourth point release is published, it is possible to apply security patches once a year when using the HWE kernel.

A bi-annual security patching cadence every May and September would provide peace of mind, regardless of your choice of kernel. This security patching cadence takes Canonical’s release schedule and kernel security coverage windows into account. There is no lapse in security coverage when scheduling security maintenance windows bi-annually, every May and September.

Security patching for the smallest exploit footprint

More frequent security maintenance windows are obviously better. A very common schedule is monthly: there is no chance of running a stale kernel when upgrading and rebooting monthly. Weekly security patching and reboot cadences are recommended, and a daily security patching regimen is applauded. Canonical’s security patches are published as soon as they are ready, and it is recommended to take advantage of them as they are released. Running your workloads in high availability, enabling security patching automations, and rolling out phased upgrades and reboots for groups of machines on a daily basis provides the strongest security posture.
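As a sketch of what this automation can look like with unattended-upgrades (the file paths are the standard Ubuntu ones; the reboot window is an example value to adapt to your own maintenance policy):

    # /etc/apt/apt.conf.d/20auto-upgrades
    # Refresh package lists and run unattended-upgrade once a day
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    # /etc/apt/apt.conf.d/50unattended-upgrades
    # Reboot automatically (at 04:00) when an update requires it
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "04:00";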

Best practices for enabling security patching automations

The best practices for scheduling security patching automations webinar answers all the big questions:

  • What patching automation options are available?
  • Where do security patches come from, and where can they be distributed from?
  • How are security patches distributed, and how should they be applied?
  • How does Canonical’s rolling kernel strategy extend the security coverage window for certain kernels?
  • When should security maintenance events be scheduled?

11 July, 2024 12:01AM

Podcast Ubuntu Portugal: E307 Dégradés De Ultravioletas

A Raspberry Pi is good for a lot of things: testing Plasma and XFCE, working all day while listening to podcasts, knowing when to go outside to catch a melanoma - and much more! Miguel keeps savouring desktop environments on a Pi 5 and playing with automations and colour gradients in Home Assistant; but is it worth spending money on more gadgets? Meh. Diogo had a relapse in his number of open tabs; updated his Ubuntu right at the deadline; wrecked his Thunderbird; shamelessly promoted the Other Podcast… and is setting himself up for trouble with the Russian Federation's Роскомнадзор and the People's Republic of China. Will we see him again after he goes on holiday?…

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think it's worth well over 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will be supporting us too.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open source code is licensed under the terms of the MIT Licence. The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

11 July, 2024 12:00AM

July 10, 2024

hackergotchi for Tails

Tails

Tails report for June 2024

Highlights

  • The European summer is here, and with it are summer holidays! We took some time off for some quality rest and recreation. How we vacationed: music festivals in Milan, hiking in the Alps, the Sierra Nevada, and the High Sierra, and biking in the Pyrenees. We do love the mountains! ⛰️

  • But before we went away for some quality R&R, we continued making it easier for Tails users to recover from the most common failure modes without requiring technical expertise:

    • We finalized a design for detecting corruption of the Persistent Storage on a Tails USB stick, reporting it to users, and repairing it.

    • We made incremental progress towards warning Tails users when they have low available memory. We don't detect all the problematic cases yet but, when we do, GNOME gently notifies the user.

  • We have been working on a new user journey for backups. In June, we finished designing all interfaces and solicited feedback. The proposal was well received by 2 volunteers who have contributed code related to backups.

Releases

📢 We released Tails 6.4!

In Tails 6.4, we brought:

  • even stronger cryptographic protections, as Tails now stores a random seed on the Tails USB stick
  • fixes to make unlocking the Persistent Storage smoother
  • more reliable installation of Additional Software, due to a switch to using HTTPS addresses instead of onion addresses for the Debian and Tails APT repositories

To know more, check out the Tails 6.4 release notes and the changelog.

Metrics

Tails was started more than 775,377 times this month. That's a daily average of over 25,946 boots.

10 July, 2024 04:43PM

July 09, 2024

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.1.3-11 Released

This release of Clonezilla live (3.1.3-11) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.1.2-22

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2024/Jun/28).
  • Linux kernel was updated to 6.9.7-1.
  • Partclone was updated to 0.3.31.
  • Removed package cpufrequtils from lists of live system. It's not in the Debian repo anymore.
  • Removed thin-provisioning-tools from the packages list of Clonezilla live because it breaks dependencies.
  • Added package yq, and removed package deborphan in live system.
  • Merged pull request #31 from iamzhaohongxin/patch-1. Update zh_CN.UTF-8.
  • Language file ca_ES was updated. Thanks to René Mérou.
  • Language file de_DE was updated. Thanks to Savi G and Michael Vinzenz.
  • Package live-boot was updated to 1:20240525.drbl1.
  • Package live-config was updated to 11.0.5+drbl3.
  • Set a bigger scrollback buffer for screen in the live system, making it easier to debug.

BUG FIXES

09 July, 2024 02:46AM by Steven Shiau

July 08, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 847

Welcome to the Ubuntu Weekly Newsletter, Issue 847 for the week of June 30 – July 6, 2024. The full version of this issue is available here.

In this issue we cover:

  • Developer Membership Board Election Results
  • Ubuntu Stats
  • Hot in Support
  • ¡Teaser Trailer de UbuCon Latinoamérica 2024!
  • LoCo Events
  • Checkbox 4.0.0 stable release
  • Thunderbird beta/core24 snap available for testing
  • Upstream release of cloud-init 24.2
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


08 July, 2024 10:29PM

Ubuntu Blog: Deploy confidential computing with Intel® TDX and Ubuntu 24.04 today

When we first announced support for Intel® Trust Domain Extensions (Intel® TDX) guest and host capabilities on Ubuntu 23.10, many of you used it to build applications and datacenters with strong silicon-level security guarantees. You also provided feedback on how easy the setup process was, expressed excitement about starting your confidential computing journey with Ubuntu, and shared your plans to continue this commitment with future Ubuntu releases, which will incrementally bring more advanced silicon features.

Today, we are happy to announce the availability of the Intel-optimised build for Ubuntu 24.04 LTS, which allows you to run Intel® TDX with an Ubuntu host, and continues Ubuntu’s earlier support for the TDX guest side. With no changes required to the application layer, VM isolation with Intel® TDX greatly simplifies the porting and migration of existing workloads to a confidential computing environment.

Why confidential computing with Intel TDX

Confidential computing addresses a critical gap in data security: protecting data while it is being processed in system memory. While traditional security measures primarily secure data at rest and data in transit, data in-use faces unique challenges. These include insider threats, where malicious insiders with elevated privileges can access sensitive data during its processing, as well as malware and exploits that take advantage of vulnerabilities within the platform’s privileged system software (such as the operating system, hypervisor, and firmware). 

Intel® TDX on 4th Gen and 5th Gen Intel® Xeon Scalable Processors represents one of the most ambitious silicon realisations of the confidential computing paradigm. They introduce secure and isolated virtual machines called trust domains (TDs), designed to shield against diverse software threats, including those posed by virtual-machine managers and other VMs hosted on the same platform. Intel® TDX also enhances defences against physical access attacks on platform memory, such as cold-boot attacks and DRAM interface intrusions. To achieve this high level of security, Intel® TDX incorporates new CPU security extensions that provide three essential security features:

  1. Memory Isolation through Main Memory Encryption: CPUs equipped with confidential computing capabilities include an AES-128 hardware encryption engine within their memory controller. This engine encrypts and decrypts memory pages whenever there is a memory read or write operation. Instead of storing workload code and data in plain text in system memory, they are encrypted using a hardware-managed encryption key. This encryption and decryption process happens seamlessly within the CPU, ensuring strong memory isolation for confidential workloads.
  2. Additional CPU-Based Hardware Access Control Mechanisms: CPUs with confidential computing capabilities introduce new instructions and data structures that allow auditing of security-sensitive tasks typically carried out by privileged system software. These tasks encompass memory management and access to platform devices. For example, when reading memory pages mapped to confidential workloads, these new instructions also provide information about the last value written into the page. This feature helps prevent data corruption and replay attacks by detecting unauthorised modifications to memory pages.
  3. Remote Attestation: Enables a relying party, whether it’s the owner of the workload or a user of the services provided by the workload, to confirm that the workload is operating on an Intel® TDX-enabled platform located within a TD before sharing data with it. Remote attestation allows both workload owners and consumers to digitally verify the version of the Trusted Computing Base (TCB) they are relying on to secure their data.

What do we offer with Ubuntu 24.04?

Ensuring end-users can fully utilise these critical silicon security features requires more than just acquiring the right hardware: it demands an enabled software stack above it. Within the Linux ecosystem, upstreaming patches before they can be integrated by the downstream OS distributions is a meticulous and time-consuming process. 

Recognising the timely need for Ubuntu end-users and customers to secure their sensitive data and code at run-time, Canonical and Intel have established a strategic collaboration through which we can provide a rolling Intel-optimised Ubuntu build that is ahead of upstream, and which continuously brings you the latest Intel® TDX features as they are developed by Intel. Today, we make available an Intel-optimised build derived from Ubuntu 24.04, encompassing all the essential components required for deploying Intel® TDX confidential workloads. These Ubuntu builds support both host and guest environments, as well as remote attestation capabilities, enabling seamless deployment of confidential Intel® TDX virtual machines:

  • Host side: it includes a 6.8 kernel derived from the 24.04 generic kernel, along with critical user-space components such as libvirt and QEMU.
  • Guest side: it provides a 6.8 kernel, shim, GRUB, and TDVF, which serves as the in-guest VM firmware.
  • Attestation: this release also includes Intel® Software Guard Extensions Data Center Attestation Primitives (DCAP) on the host side, and the Intel® Trust authority CLI on the guest side. These allow users to retrieve attestation reports from the underlying hardware root of trust, and forward them to the Intel® Trust Authority service for verification.

Figure 1. End-to-end TDX software stack with Ubuntu

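Once the build is installed, a quick sanity check is to confirm that TDX is initialised on the host and that a guest really is running as a trust domain; a rough sketch (the exact log strings vary between kernel versions):

    # On the host: look for TDX module initialisation messages
    sudo dmesg | grep -i tdx

    # Inside a guest: the CPU flags should include tdx_guest
    grep -c tdx_guest /proc/cpuinfo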
Support structure

To support our customers in confidently adopting Intel® TDX, Canonical will provide security maintenance and enterprise support for the Ubuntu 24.04 Intel-optimised build throughout its lifetime. For the host side, the kernel will continue to be updated, and will be engineered to allow users to roll to the new kernel every six months. Each kernel will receive nine months of security maintenance and support. This approach of hardware enablement (HWE) kernels is commonplace to allow for support of new hardware, and each is derived from the kernel version shipping with the interim releases, e.g. Ubuntu 24.10, ensuring continuous support. Similarly, for the userspace, we will either backport patches to the existing 24.04 versions or support newer versions. 

This rolling approach carefully balances enabling customers to leverage evolving TDX features as they progress upstream, while also enabling secure deployment of TDX today.

Looking ahead

This collaboration between Canonical and Intel underscores our shared commitment to advancing confidential computing, particularly within the enterprise sector where robust support for both host and guest capabilities is paramount. 

As Intel progresses with upstreaming additional silicon features, Canonical remains dedicated to delivering optimised Ubuntu builds, ensuring a smooth adoption path for Intel® TDX by our customers.

We eagerly anticipate your deployment of the Ubuntu 24.04 Intel® TDX build and value your feedback and questions. Your insights are vital as we continue to innovate and enhance data security solutions for the future.

Additional Resources

Contact us

Understand the basics of  confidential computing 

Learn about how to secure your AI workloads with confidential computing

Use Intel TDX on Azure

Use Intel TDX on Google Cloud

08 July, 2024 01:22PM

hackergotchi for Deepin

Deepin

July 07, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: One year of freelancing

Introduction

It was exactly one year ago today that I left my day job as Engineering Manager of LXD at Canonical and went freelance. It’s been quite a busy year but things turned out better than I had hoped and I’m excited about year two!

Zabbly

Zabbly is the company I created for my freelance work. Over the year, a number of my personal projects were transferred over to being part of Zabbly, including the operation of my ASN (as399760.net), my datacenter co-location infrastructure and more.

Through Zabbly I offer a mix of by-the-hour consultation with varying prices depending on the urgency of the work (basic consultation, support, emergency support) as well as fixed-cost services, mostly related to Incus (infrastructure review, migration from LXD, remote or on-site trainings, …).

Other than Incus, Zabbly also provides up to date mainline kernel packages for Debian and Ubuntu and associated up to date ZFS packages. This is something that came out as needed for a number of projects I work on, from being able to test Incus on recent Linux kernels to avoiding Ubuntu kernel bugs on my own and NorthSec’s servers.

Zabbly is also the legal entity for donations related to my open source work, currently supporting:

And lastly, Zabbly also runs a Youtube channel covering the various projects I’m involved with.
A lot of it is currently about Incus, but there is also the occasional content on NorthSec or other side projects. The channel grew to a bit over 800 subscribers in the past 10 months or so.

Now, how well is all of that doing? Well enough that I could stop relying on my savings just a few months in and turn a profit by the end of 2023. Zabbly currently has around a dozen active customers from 7 countries across 3 continents, with sizes ranging from individuals to large governmental agencies.

2024 has also been very good so far and while I’m not back to the level of income I had while at Canonical, I also don’t have to go through 4-5 hours of meetings a day and get to actually contribute to open source again, so I’ll gladly take the (likely temporary) pay cut!

Incus

A lot of my time in the past year has been dedicated to Incus.

This wasn’t exactly what I had planned when leaving Canonical.
I was expecting LXD to keep on going as a proper Open Source project as part of the Linux Containers community. But Canonical had other plans and so things changed a fair bit over the few months following my departure.

For those not aware, the rough timeline of what happened is:

So rather than contributing to LXD while working on some other new projects, a lot of my time has instead gone into setting up the Incus project for success.

And I think I’ve been pretty successful at that as we’re seeing a monthly user base growth (based on image server interactions) of around 25%. Incus is now natively available in most Linux distributions (Alpine, Arch Linux, Debian, Gentoo, Nix, Ubuntu and Void) with more coming soon (Fedora and EPEL).

Incus has 6 maintainers, most of whom were the original LXD maintainers.
We’ve seen over 100 individual contributors since Incus was forked from LXD including around 20 students from the University of Texas in Austin who contributed to Incus as part of their virtualization class.

I’ve been acting as the release manager for Incus, also running all the infrastructure behind the project, mentoring new contributors and reviewing a number of changes while also contributing a number of new features myself, some sponsored by my customers, some just based on my personal interests.

A big milestone for Incus was its 6.0 LTS release as that made it suitable for production users.
Today we’re seeing around 40% of our users running the LTS release while the rest run the monthly releases.

On top of Incus itself, I’ve also gotten to create the Incus Deploy project, which is a collection of Ansible playbooks and Terraform modules to make it easy to deploy Incus clusters, and to contribute to both the Ansible Incus connection plugin and our Incus Terraform/OpenTofu provider.

The other Linux Containers projects

As mentioned in my recent post about the 6.0.1 LTS releases, the Linux Containers project tries to do coordinated LTS releases on our core projects. This currently includes LXC, LXCFS and Incus.

I didn’t have to do too much work myself on LXC and LXCFS, thanks to Aleksandr Mikhalitsyn from the Canonical LXD team who’s been dealing with most of the review and issues in both LXC and LXCFS alongside other long time maintainers, Serge Hallyn and Christian Brauner.

NorthSec

NorthSec is a yearly cybersecurity conference, CTF and training provider, usually happening in late May in Montreal, Canada. It’s been operating since 2013 and is now one of the largest on-site CTF events in the world along with having a pretty sizable conference too.

I’m the current VP of Infrastructure for the event and have been involved with it from the beginning, designing and running its infrastructure, first on a bunch of old donated hardware and then slowly modernizing that to the environment we have now with proper production hardware both at our datacenter and on-site during the event.

This year, other than transitioning everything from LXD to Incus, the main focus has been on upgrading the OS on our 6 physical servers and dozens of infrastructure containers and VMs from Ubuntu 20.04 LTS to Ubuntu 24.04 LTS.

At the same time, also significantly reducing the complexity of our infrastructure by operating a single unified Incus cluster, switching to OpenID Connect and OpenFGA for access control and automating even more of our yearly infrastructure with Ansible and Terraform.

Automation is really key with NorthSec as it’s a non-profit organization with a lot of staffing changes every year, around 100 year-long contributors, and then an additional 50 or so on-site volunteers!

I went over the NorthSec infrastructure in a couple of YouTube videos:

Conferences

I’ve cut down and focused my conference attendance a fair bit over this past year.
Part of it for budgetary reasons, part of it because of having so many things going on that fitting another couple of weeks of cross-country travel was difficult.

I decided to keep attending two main events: the Linux Plumbers Conference, where I co-organize the Containers and Checkpoint-Restore Micro-Conference, and FOSDEM, where I co-organize both the Containers and the Kernel devrooms.

With one event usually in September/October and the other in February, this provides two good opportunities to catch up with other developers and users, get to chat a bunch and make plans for the year.

I’m looking forward to catching up with folks at the upcoming Linux Plumbers Conference in Vienna, Austria!

What’s next

I’ve got quite a lot going on, so the remaining half of 2024 and first half of 2025 are going to be quite busy and exciting!

On the Incus front, we’ve got some exciting new features coming in, like the native OCI container support, more storage options, more virtual networking features, improved deployment tooling, full coverage of Incus features in Terraform/OpenTofu and even a small immutable OS image!

NorthSec is currently wrapping up a few last items related to its 2024 edition and then it will be time to set up the development infrastructure and get started on organizing 2025!

For conferences, as mentioned above, I’ll be in Vienna, Austria in September for Linux Plumbers and expect to be in Brussels again for FOSDEM in February.

There’s also more that I’m not quite ready to talk about, but expect some great Incus related news to come out in the next few months!

07 July, 2024 12:00PM

July 05, 2024

Ubuntu Blog: Ceph Days London 2024


Date: July 17th, 2024

Location: London, United Kingdom

In a couple of weeks, Ceph Days makes a stop in London, at Canonical’s newly opened office at More London Place. Canonical is proud to be sponsoring this community-led event alongside IBM.

If you are unfamiliar, Ceph Days are one-day conference events that bring local Ceph communities together and host a multitude of technical talks from upstream developers and operators alike. Naturally, the event will end with an opportunity to network with fellow Ceph developers and users.

Tickets are still available and more information can be found here.

Canonical led talks

We aren’t just hosting the London event; we’re also excited to have two of our own Ceph experts on the speaker list.

Integrating NVMe-oF with Ceph and Juju

Software Engineer Luciano Lo Giudice will deliver a talk on how he has developed a new Juju charm that allows users to create NVMe-oF devices backed by RBD. The charm provides a user-friendly experience for scaling and high availability of the NVMe-oF gateway.

Sunbeam and Ceph sitting in a tree

James Page, Principal Architect, will talk about OpenStack Sunbeam and Ceph. The Sunbeam project is a re-think of how to deploy and operate OpenStack clouds, including how OpenStack integrates with Ceph in the form of MicroCeph. He will discuss the motivations for this approach and the tech used under the covers.

Looking forward to seeing you

We look forward to seeing you there. For more details on Ceph Days London, the full agenda, and to sign up, see the event page here.

Learn more about Ceph on Ubuntu here.

05 July, 2024 08:45AM

July 04, 2024

Harisfazillah Jamel: Critical OpenSSH Vulnerability (CVE-2024-6387): Please Update Your Linux

 


A critical security flaw (CVE-2024-6387) has been identified in OpenSSH, a program widely used for secure remote connections. This vulnerability could allow attackers to completely compromise affected systems (remote code execution).

Who is Affected?

Only specific versions of OpenSSH (8.5p1 to 9.7p1) running on glibc-based Linux systems are vulnerable. Newer versions are not affected.

What to Do?

  1. Update OpenSSH: Check your version by running ssh -V in your terminal. If you're using a vulnerable version (8.5p1 to 9.7p1), update immediately.

  2. Temporary Workaround (Use with Caution): Disabling the login grace timeout (setting LoginGraceTime=0 in sshd_config) can mitigate the risk, but be aware it increases susceptibility to denial-of-service attacks. A sketch of this change follows the list below.

  3. Recommended Security Enhancement: Install fail2ban to prevent brute-force attacks. This tool automatically bans IPs with too many failed login attempts.
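A minimal sketch of the temporary workaround from step 2, using a drop-in file so that the main sshd_config stays untouched (Ubuntu's SSH service is named ssh; on RHEL-family systems it is sshd):

    # Mitigation only; remove the file once the patched OpenSSH is installed
    echo 'LoginGraceTime 0' | sudo tee /etc/ssh/sshd_config.d/99-cve-2024-6387.conf
    sudo systemctl restart ssh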

Optional: IP Whitelisting for Increased Security

Once you have fail2ban installed, consider allowing only specific IP addresses to access your server via SSH. This can be achieved using:

  • ufw for Ubuntu

  • firewalld for AlmaLinux or Rocky Linux
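A sketch of such whitelisting, using a documentation address range (203.0.113.0/24) that you would replace with your own trusted addresses:

    # Ubuntu (ufw): allow SSH only from the trusted range, then deny the rest
    sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
    sudo ufw deny 22/tcp

    # AlmaLinux/Rocky Linux (firewalld): the equivalent rich rule
    sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.0/24" service name="ssh" accept'
    sudo firewall-cmd --reload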

Additional Resources

About Fail2ban

Fail2ban monitors log files like /var/log/auth.log and bans IPs with excessive failed login attempts. It updates firewall rules to block connections from these IPs for a set duration. Fail2ban is pre-configured to work with common log files and can be easily customized for other logs and errors.

Installation Instructions:

  • Ubuntu: sudo apt install fail2ban

  • AlmaLinux/Rocky Linux: sudo dnf install fail2ban
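Once installed, the SSH jail can be tuned with a small override file rather than by editing the packaged defaults; a minimal sketch (the thresholds are example values to adjust to taste):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled = true
    maxretry = 5
    findtime = 10m
    bantime = 1h

Restart the service (sudo systemctl restart fail2ban) for the override to take effect.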


About DevSec Hardening Framework

The DevSec Hardening Framework is a set of tools and resources that helps automate the process of securing your server infrastructure. It addresses the challenges of manually hardening servers, which can be complex, error-prone, and time-consuming, especially when managing a large number of servers. The framework integrates with popular infrastructure automation tools like Ansible, Chef, and Puppet. It provides pre-configured modules that automatically apply secure settings to your operating systems and services such as OpenSSH, Apache and MySQL. This eliminates the need for manual configuration and reduces the risk of errors.


Prepared by LinuxMalaysia with the help of Google Gemini


5 July 2024

 

In Google Doc Format 

 

https://docs.google.com/document/d/e/2PACX-1vTSU27PLnDXWKjRJfIcjwh9B0jlSN-tnaO4_eZ_0V5C2oYOPLLblnj3jQOzCKqCwbnqGmpTIE10ZiQo/pub 



04 July, 2024 09:42PM by LinuxMalaysia (noreply@blogger.com)

hackergotchi for Finnix

Finnix

Finnix 126 released

Finnix 126 boot screen

Today marks the release of Finnix 126, the original utility live Linux distribution. Finnix 126 includes a number of fixes, new packages and new features:

  • Linux kernel 6.8 (Debian 6.8.12-1)
  • New packages: libc6-i386 (finnix/finnix#35; not directly usable but allows for running certain i386 binaries in Finnix’s amd64 userland)
  • Added 0 kernel command line option which does the same as the 0 (locale-config) utility, but during early boot and before shell prompts
  • Upstream Debian package updates
  • Many minor fixes and improvements

This is the first Finnix release to contain additional “supply chain” assurances. The release was built on a public CI platform (GitHub Actions), with the ISO (.disk/build_info) pointing to the URL of the build run which lists a SHA256 checksum of the ISO and links to the exact commit used to build it. Additionally, the build provides an attestation of the build artifacts through GitHub’s new attestation functionality.
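In practice, the checks this enables look roughly like the following (the ISO filename is illustrative, and gh attestation verify requires a recent GitHub CLI):

    # Compare the downloaded ISO against the checksum listed in the CI build run
    sha256sum finnix-126.iso

    # Verify the GitHub build attestation for the artifact
    gh attestation verify finnix-126.iso --owner finnix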

Note that this release was made a few days after the OpenSSH CVE-2024-6387 vulnerability announcement, and to be clear, Finnix 126 does include a fixed version (Debian 9.7p1-7).

Please visit finnix.org to download Finnix 126 today!


04 July, 2024 05:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: MongoDB® use cases for automotive industry

The automotive industry is a vast and complex sector that includes all companies and activities involved in manufacturing vehicles or their components (such as engines, electronic control units (ECUs) and bodies). The sector continues to grow as it undergoes a technological revolution driven by new technologies like artificial intelligence, machine learning, and big data analytics.

Managing and utilising the enormous amounts of data that modern automobiles create is a critical challenge the automotive industry faces. Innovators are tasked with producing great user experiences and need to make effective use of data to deliver value. 

MongoDB® is an ideal fit for these tasks, so it’s easy to see why it’s one of the most widely used databases for enterprises, including the automotive industry. It provides a sturdy, adaptable and trustworthy foundation for operations, and can also safeguard sensitive customer data while facilitating swift responses to rapidly evolving situations. This security and stability can be enhanced even further with Charmed MongoDB.

Let’s explore how various players in the automotive sector are making use of MongoDB.


Supply chain management

The first use case is supply chain management, wherein a database stores data on automotive parts and components through the supply chain in order to help the manufacturer or supplier manage inventory, order parts, and generally ensure efficient production processes. MongoDB can be used to store and collect data related to the inventory of automotive parts and components and can be leveraged for data analytics and reporting. This includes a significant level of detail, down to the types and quantities of parts, storage locations, order history, and more. Its flexibility allows for easy adaptation to changes in the types of parts and inventory requirements. MongoDB can also be used in demand forecasting and supplier management.

Fleet management

Fleet operators rely on databases to optimise vehicle use, reduce costs and ensure safety. These databases help sector operators manage their fleets, track vehicle locations, schedule maintenance, and monitor driver behaviour. 

MongoDB can play a considerable role in real-time vehicle tracking as it can store and manage real-time location data. In addition, it can maintain logs and reports such as maintenance reports and schedules.
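As an illustration of the real-time tracking use case, a geospatial index makes "which vehicles are near this point?" a single query; a small mongosh sketch in which the collection and field names are hypothetical:

    mongosh --eval '
      db.vehicles.createIndex({ location: "2dsphere" });
      db.vehicles.insertOne({
        vehicleId: "TRUCK-042",
        location: { type: "Point", coordinates: [-0.1276, 51.5074] },
        updatedAt: new Date()
      });
      // All vehicles within 5 km of central London
      printjson(db.vehicles.find({
        location: { $near: {
          $geometry: { type: "Point", coordinates: [-0.1276, 51.5074] },
          $maxDistance: 5000
        } }
      }).toArray());
    '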

Predictive analytics

Finally, MongoDB can be a valuable database for implementing automotive predictive maintenance systems. Predictive maintenance aims to prevent equipment failures and reduce downtime by predicting when maintenance is needed based on the condition of the equipment and historical data. MongoDB’s flexibility, scalability, and real-time data processing capabilities make it well-suited for this application. In addition, it can process real-time data at scale and can be integrated with different machine-learning applications.

Conclusion

MongoDB is ideally suited to a wide variety of use cases in the automotive industry, and it solves a number of industry challenges in supply chain management, fleet management, and predictive maintenance through its efficient data storage and management, real-time tracking, and easy scalability.  Whether you’re managing a large fleet of trucks around the clock, tightening up your supply chain to prevent downtime, or planning maintenance for a dizzying array of complex parts,  MongoDB’s flexibility, scalability, performance, and integration capabilities make it a perfect database solution.


MongoDB® for enterprise data management


Trusted by major automotive companies, Canonical delivers a full suite of open source solutions and services that help you provide security, safety and innovation for digital transformation in automotive.

Explore how MongoDB can support your projects in financial, telecommunications and automotive industries in our recent guide: Download Now


Canonical for your MongoDB® journey

Charmed MongoDB is an enhanced, open source and fully-compatible, drop-in replacement for the MongoDB Community Edition with advanced enterprise features – Canonical can support you at every stage of your MongoDB journey (from Day 0 to Day 2). Our services include design, deployment, maintenance, management and support.


Trademark Notice

“MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc.

04 July, 2024 10:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E306 a Raposa De Wall Street

Miguel is back from the hospital, but everyone fears for his health: he tried Plasma… and liked it. The medical community is stunned by the phenomenon. Meanwhile, archaeological excavations have unearthed an ancient keyboard from the year 2000 AD and the restoration process has begun. Diogo brought all the news about the new version of Firefox and espionage networks involving Russian agents; the Ubuntu-PT Community website is growing steadily; Omnivore heroically reduced the number of open tabs; Raspberry Pi entered the stock market and Steam is going full steam ahead to conquer gaming on GNU/Linux!

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

04 July, 2024 12:00AM

July 03, 2024

hackergotchi for rescatux

rescatux

Super Grub2 Disk 2.06s3-beta4 downloads







Super Grub2 Disk 2.01 rc2 Main Menu

NOTE:   The hybrid version should work on almost any machine you might have. Please download that version.

Non-scientific machine names and descriptions:

Oldie x86: Very old machines that only have 32-bit processors. Their supported architecture for boot is i386.
Oldie 64bit: Old machines, usually from 2010 or earlier. They have 64-bit processors. Their supported architectures for boot are i386 and x86_64.
UEFI 64bit: Newer machines, usually from 2011 or later. They have 64-bit processors. Their supported architecture for boot is x86_64-efi. If you enable CSM (also known as legacy boot) support on them, they also support i386 and x86_64.
UEFI 32bit: Newer machines, usually from 2011 or later. They are very rare. They have either 64-bit or 32-bit processors but initially boot in 32-bit mode. Their supported architecture for boot is i386-efi. I highly doubt you can enable CSM support on these machines.


Super Grub2 Disk 2.06s3-beta4

USB Bootable Images

Download: supergrub2-2.06s3-beta4-multiarch-USB.img.zip
Supported archs: i386, x86_64, i386-efi and x86_64-efi
Notes: Recommended. Secure Boot enabled. For modern UEFI 64-bit and 32-bit systems and also old BIOS systems. Includes an additional BOOTISOS partition so that you can carry your loopback.cfg-enabled distributions with you.

Download (CLASSIC): supergrub2-classic-2.06s3-beta4-multiarch-USB.img.zip
Supported archs: i386, x86_64, i386-efi and x86_64-efi
Notes: Secure Boot not enabled. For modern UEFI 64-bit and 32-bit systems and also old BIOS systems. Includes an additional BOOTISOS partition so that you can carry your loopback.cfg-enabled distributions with you.
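
As a sketch of how one of these images is typically written to a USB stick (assuming the zip contains an image file of the same base name, and that /dev/sdX is your USB device; check with lsblk first, as dd will overwrite everything on the target):

unzip supergrub2-2.06s3-beta4-multiarch-USB.img.zip
lsblk    # identify your USB device, e.g. /dev/sdX
sudo dd if=supergrub2-2.06s3-beta4-multiarch-USB.img of=/dev/sdX bs=4M status=progress conv=fsync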

CD-ROM Bootable Images

Download (CLASSIC): supergrub2-classic-2.06s3-beta4-multiarch-CD.iso
Supported archs: i386, x86_64, i386-efi and x86_64-efi
Notes: Modern UEFI 64-bit and 32-bit systems and also old BIOS systems.

Download: supergrub2-classic-2.06s3-beta4-i386_pc-CD.iso
Supported archs: i386 and x86_64
Notes: Old BIOS (non-UEFI) systems only.

Download: supergrub2-classic-2.06s3-beta4-x86_64_efi-CD.iso
Supported archs: x86_64-efi
Notes: Modern UEFI 64-bit systems only.

Download: supergrub2-classic-2.06s3-beta4-i386_efi-CD.iso
Supported archs: i386-efi
Notes: Modern UEFI 32-bit systems only. Also known as ia-32.

Standalone Images

Download: supergrub2-classic-2.06s3-beta4-x86_64_efi-STANDALONE.EFI
Supported archs: x86_64-efi
Notes: Modern UEFI 64-bit systems only.

Download: supergrub2-classic-2.06s3-beta4-i386_efi-STANDALONE.EFI
Supported archs: i386-efi
Notes: Modern UEFI 32-bit systems only. Also known as ia-32.

Misc

Download: Source Code (Git repo)
Supported archs: N/A
Notes: Lets you build Super Grub2 Disk on unsupported archs.

Download: super_grub2_disk_2.06s3-beta4.zip
Supported archs: i386, x86_64, i386-efi and x86_64-efi
Notes: Every binary and the source code inside a zip file. For offline use.

About other downloads: these might be built in the future if anyone complains and helps enough on our mailing list: coreboot, ieee1275, standalone coreboot and standalone ieee1275.

Hashes

To verify the downloads, you can either check the download directory page for this release or use the checksums below (a worked verification example follows the lists):

MD5SUMS

85a2b7401bb867249aba15b89f5cbcb6  ./super_grub2_disk_2.06s3-beta4.zip
060731065ba529f2c1e509cb35fb80b9  ./super_grub2_disk_2.06s3-beta4/supergrub2-2.06s3-beta4-multiarch-USB.img.zip
288e77bd6afc73d92934b4b5c72dc99b  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_pc-CD.iso
581f39ef92ba1dc9a4d0d1560b9af0b0  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-CD.iso
7c3d5c4bf64ca2020e340fc93a643946  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-CD.iso
1a3c543518ca3d900ad68a6a20e3668b  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-CD.iso
7ac9f636f2168b8f60a41e86ced6ba0c  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-STANDALONE.EFI
63ba107bc92340eda6bd3799c4f86138  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-USB.img.zip
b6f90528d23f921b154a7d3e566e27d0  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-ubuntu-sourcecode.tar.gz
678d5b0b20e1687981a4b01f81e999fc  ./super_grub2_disk_2.06s3-beta4/source_code/shimia32-debian-sourcecode.tar.gz
4f42e10e2e061d0f9887eedbc98f72cd  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-ubuntu-sourcecode.tar.gz
cc9e36bd25b26ca713fea3ff8bfb91cc  ./super_grub2_disk_2.06s3-beta4/source_code/super_grub2_disk_2.06s3-beta4_source_code.tar.gz
8968138e906457fb31b884bdf2ac4917  ./super_grub2_disk_2.06s3-beta4/source_code/grubia32-debian-sourcecode.tar.gz
ab81e32bac87a6ffd9324d51a726fd6c  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-debian-sourcecode.tar.gz
8ca38a24b62fab48b05195d4e4a1fc25  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-debian-sourcecode.tar.gz
136f9f67a5bc5c9adaafea8dde05721a  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-STANDALONE.EFI

SHA1SUMS

3006efc17c468d9585fcd1bd5edd9238c96bfaf7  ./super_grub2_disk_2.06s3-beta4.zip
1a079d2677f234d16275c8eb2338e373a6f1dc02  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-CD.iso
9451ef5fd5b246a015476382fb91b6faa8753d82  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-CD.iso
4c5a83513953f0a7585a4f1a440c4edca7f27981  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-USB.img.zip
ce4fc8b4bea8ee80c9f3ce0f6fa9618fe12b91b5  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-STANDALONE.EFI
6a3f94612501b179b21eb769fc8fde0ed00e8b47  ./super_grub2_disk_2.06s3-beta4/source_code/super_grub2_disk_2.06s3-beta4_source_code.tar.gz
f4db8c84f4e20c16571f61c7b7d2bcdc531640a1  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-debian-sourcecode.tar.gz
6a27f45bf3daf4e26a2c0f869c00e0c9f54d5341  ./super_grub2_disk_2.06s3-beta4/source_code/grubia32-debian-sourcecode.tar.gz
89fdd22839f92f4193094dfdedfe489ea3f88e48  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-debian-sourcecode.tar.gz
30af59960390b971225fb2413bc77813a9042ecd  ./super_grub2_disk_2.06s3-beta4/source_code/shimia32-debian-sourcecode.tar.gz
8fec34ccd2a0efb1e992702db08f8d3367533679  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-ubuntu-sourcecode.tar.gz
d8a659308b8361b449c0e3364f1ee7f4cc63220f  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-ubuntu-sourcecode.tar.gz
f39ce85e559a0a86f65fa70680f39ca8b807d31b  ./super_grub2_disk_2.06s3-beta4/supergrub2-2.06s3-beta4-multiarch-USB.img.zip
c84e7765c9d1b4fff8e77a79f5cddb5f0d881ae3  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-CD.iso
94733a513923503ea0cd5c9a0f4ed254f8e61c5d  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_pc-CD.iso
6dde7e408f7d25062ae34c5e5ca8a5ea59c04658  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-STANDALONE.EFI

SHA256SUMS

11070279740c43c4901c069998311df72ad3ac818232735a3e7c572f92369539  ./super_grub2_disk_2.06s3-beta4.zip
317990a9a62844776a1e8e2b7f18fd00fd306285954bd4c757766c1a61c97157  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_pc-CD.iso
bc12530efdbe89266c69c4125fbf3a3a1170540345928b7e0f4e9a941d1db1c9  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-USB.img.zip
4bae0633f77752d969eecd5b6d72a01929bf352c508663e53fef811d1ffa36b9  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-multiarch-CD.iso
1f2e3749aa12249507c0b33c08f07c21b93a486e3961f2f39e5c3c3d7d082af5  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-STANDALONE.EFI
0695b6a1579c82135a439ae0fb36d30b80e229e35eb84615ca640b6fe934e9c9  ./super_grub2_disk_2.06s3-beta4/supergrub2-2.06s3-beta4-multiarch-USB.img.zip
93c09e18b32cb956a2d28de963808848dccd273c8f199e39c8d8d8e530c6e877  ./super_grub2_disk_2.06s3-beta4/source_code/shimia32-debian-sourcecode.tar.gz
6e8b6e51ab46f47860974d7695c4a3df6eb1c30738c80d797ac2b20f1757cb7a  ./super_grub2_disk_2.06s3-beta4/source_code/grubia32-debian-sourcecode.tar.gz
bcbf765bfd6d088bdf673831e191c23c14c8cf1b2f496df2c9757153ad40e7de  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-ubuntu-sourcecode.tar.gz
d4815f564fcb6a29de1ddcb19de28f470c928d6ab35355e516b3e959e1dd51fb  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-ubuntu-sourcecode.tar.gz
b3d230a34873372785311dfa3e4d79d7975c5188b52c1ecfc9921b403a280f2e  ./super_grub2_disk_2.06s3-beta4/source_code/grubx64-debian-sourcecode.tar.gz
feb1e76ae773ce3c1eacee46ea3a7915e374d4ca609062f00502367b64dfd00c  ./super_grub2_disk_2.06s3-beta4/source_code/super_grub2_disk_2.06s3-beta4_source_code.tar.gz
5ab064ee4c4abdf6233d3bb194d5ec72d7f8d07a1bc371e35db7017ecadfda81  ./super_grub2_disk_2.06s3-beta4/source_code/shimx64-debian-sourcecode.tar.gz
c1ef669fe725a00fced5c84e90919575182a762c0373b745030b2b749647bec7  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-CD.iso
9675705f963804d269ba3f23aded56b1d1cabed04da52d7dfafb2925168fa566  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-x86_64_efi-CD.iso
d8460a97e765032aabcd96c24b8a3939e3607c7aaa2f3594fe6df18b61685771  ./super_grub2_disk_2.06s3-beta4/supergrub2-classic-2.06s3-beta4-i386_efi-STANDALONE.EFI
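
As a verification example, assuming you have saved the relevant SHA256SUMS lines to a file next to the downloaded image:

sha256sum supergrub2-2.06s3-beta4-multiarch-USB.img.zip    # compare the output with the list above
sha256sum -c SHA256SUMS --ignore-missing                   # or check every downloaded file automatically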

The post Super Grub2 Disk 2.06s3-beta4 downloads first appeared on Rescatux & Super Grub2 Disk.

03 July, 2024 10:00PM by adrian15

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What you need to know about regreSSHion: an OpenSSH server remote code execution vulnerability (CVE-2024-6387)

On 1 July 2024 we released a fix for the high-impact CVE-2024-6387 vulnerability, nicknamed regreSSHion, as part of the coordinated release date (CRD). Discovered and responsibly disclosed by Qualys, this unauthenticated, network-exploitable remote code execution flaw affects the OpenSSH server daemon (sshd) from version 8.5p1 up to, but not including, 9.8p1. Among the versions distributed and supported by Ubuntu, only the 22.04 LTS, 23.10 and 24.04 LTS releases were affected; patched packages were made available to all users on the CRD. Older security-maintained releases, including those under ESM or Legacy Support (14.04 LTS, 16.04 LTS, 18.04 LTS and 20.04 LTS), were unaffected, as they ship earlier versions of the software that do not contain the affected code. If you’re running an OpenSSH server on an affected version, our recommendation is to update as soon as possible. Read on to learn more about this CVE and how you can apply the fix.
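
To see which version of the package is installed on your system, and whether a newer candidate is available, you can query apt:

apt-cache policy openssh-server

If the installed version is older than the candidate from the security pocket, applying the upgrade described below will bring in the patched package.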

Details

This vulnerability stems from the fact that an async-signal-unsafe function is called from a signal handler, specifically the one called when LoginGraceTime expires. Hitting a race condition, one made considerably harder by Address Space Layout Randomisation (ASLR), allows a malicious actor to execute arbitrary code as root. The name given by the researchers alludes to the fact that this is essentially a regression of a previous vulnerability, tracked as CVE-2006-5051, which had been fixed in OpenSSH 4.4p1, 18 years ago. Despite this lapse, the Qualys report praises the defence-in-depth design, great track record and overall security posture of the OpenSSH project, further underlining that software security issues are a fact of life, one that needs to be handled through a strong vulnerability management policy.

It should be noted that the researchers suspect that an unrelated patch only included in the Ubuntu 23.10 and 24.04 LTS releases prevents the service from being exploitable; however, we still advise that the updated package be installed.

Who is affected

An attacker with network access to a vulnerable sshd service may be able to exploit this race condition without needing any credentials, hence the high severity rating: any SSH service accessible over the internet is a prime target for such an attack. Qualys’ researchers demonstrated a proof of concept on the i386 architecture, but amd64 (x86-64) deployments are also at risk, with the caveat that exploitation is believed to be more difficult due to the more effective use of ASLR on that architecture. While this emphasises the benefits of a defence-in-depth approach to cybersecurity, with network access control used to restrict access to sensitive services, the strong recommendation is to upgrade to the patched versions as soon as possible.

How to address CVE-2024-6387

Upgrading the openssh-server package is sufficient, as this will also restart the daemon process:

sudo apt update && sudo apt install openssh-server

Users of Ubuntu Pro can also use the pro fix command:

sudo pro fix CVE-2024-6387

It should be noted that all Ubuntu releases from 16.04 LTS onwards enable the unattended-upgrades service which automatically checks for, and installs, any unapplied security updates every 24 hours. As such, this update was automatically rolled out within 24 hours of the updates being released at the CRD.

Mitigation

As the problematic code is only reached when the LoginGraceTime signal-based timer fires, this vulnerability can be eliminated by setting this configuration option to 0 (indefinite). However, please note that this leaves sshd vulnerable to a denial of service attack instead, through the exhaustion of all MaxStartups connections; therefore, the recommendation is to upgrade to the patched version.

If you wish to continue with this mitigation, you can issue the following commands:

echo "LoginGraceTime 0" | sudo tee /etc/ssh/sshd_config.d/cve-2024-6387.conf
sudo systemctl reload ssh.service
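
You can confirm that the running daemon picked up the mitigation by dumping its effective configuration (sshd -T prints keywords in lowercase, hence the pattern):

sudo sshd -T | grep -i logingracetime

The output should read logingracetime 0 once the mitigation is active.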

References

For more information, please refer to:

https://www.qualys.com/2024/07/01/cve-2024-6387/regresshion.txt
https://ubuntu.com/security/CVE-2024-6387
https://ubuntu.com/security/notices/USN-6859-1
https://www.cve.org/CVERecord?id=CVE-2024-6387

03 July, 2024 04:03PM

hackergotchi for rescatux

rescatux

Super Grub2 Disk 2.06s3-beta4 released

2.06s3-beta4 adds many translations and new features such as BTRFS support, Linux from /boot partition, partition labels and support for booting GNU/Hurd and ReactOS.

Super Grub2 Disk 2.06s3-beta4 is here.

Super GRUB2 Disk is a live CD that helps you boot into almost any Operating System (OS), even if you cannot boot into it by normal means.

A new beta release

This new version is packed with many new features:

  • New Hungarian, Traditional Chinese, Polish and Japanese translations.
  • Debian and Ubuntu Secure Boot binaries have been updated so that they work properly on updated or recent UEFIs.
  • (Fix) Force devices to be updated after enabling native disk drivers.
  • Added BTRFS support all over Super Grub2 Disk (thanks to thermon!).
  • Fixed the use of unicode.pf2.
  • grub.cfg files are now searched for at EFI partitions.
  • diskpartchainboot.cfg: fix quoted label.
  • Operating-system-specific options: EFI, FreeBSD, FreeDOS, Linux, Mac OS X, MSDOS, Windows 98, Windows NT, Windows Vista (and newer).
  • New operating systems: GNU/Hurd, ReactOS and Linux from a /boot partition.
  • Partition labels.
  • Overall redesign.
  • Refactored unicode font file generation.

Feedback on Arch GNU/Hurd is welcome because the current implementation is based on Debian GNU/Hurd.

Feedback on SecureBoot support is welcome.

I plan to release this same code as a stable release in about a month, so if something worked for you in a previous Super Grub2 Disk version but no longer works, please report it.

You can give us feedback on the GitHub issues page so that we can fix it properly. Please specify the exact filename you are using.

Super Grub2 Disk 2.06s2-beta1 – SecureBoot quick Demo



Please remember that Super Grub2 Disk files have been renamed: filenames with ‘classic’ in them mean that they do NOT support Secure Boot.

Enjoy the beta and, once again, please give us feedback to let us know whether or not it works for you.

New “Change SecureBoot vendor” menu (accessible from the main menu)
New Operating System specific boot options

Below we cover the complete Super Grub2 Disk feature set with a demo video, where you can download it, the thank-you hall of fame, and some thoughts about Super Grub2 Disk development.

Please do not forget to read our howtos for step-by-step guides (how to make a CD-ROM or a USB, how to boot from it, etc.) on using Super Grub2 Disk and, if needed, Rescatux.

Super Grub2 Disk 2.02s4 main menu

Tour

Here is a little video tour to discover most of the Super Grub2 Disk options. The rest you will have to discover by yourself.

Features

Most of the features here will let you boot into your operating systems. The rest of the options improve the operating system autodetection in Super Grub2 Disk (enable RAID, LVM, etc.) or deal with minor aspects of the user interface (colours, language, etc.).

  • Change the UI language
  • Translated into several languages:
    • Chinese (Simplified)
    • Chinese (Traditional)
    • Finnish / Suomi
    • French / Français
    • German / Deutsch
    • Hungarian
    • Italian / Italiano
    • Japanese
    • Malay / Bahasa Melayu
    • Polish
    • Russian
    • Spanish / Español
Super Grub2 Disk 2.01 rc2 Spanish Main Menu

  • Detect and show boot methods option to detect most Operating Systems
Super Grub2 Disk 2.01 beta 3 – Everything menu making use of the grub.cfg extract entries option functionality

  • Enable all native disk drivers *experimental* to detect most Operating Systems also in special devices or filesystems
  • Boot manually
    • Operating Systems
    • grub.cfg – Extract entries
Super Grub2 Disk 2.01 beta 3 grub.cfg Extract entries option
  • grub.cfg – (GRUB2 configuration files)
  • menu.lst – (GRUB legacy configuration files)
  • core.img – (GRUB2 installation (even if mbr is overwritten))
  • Disks and Partitions (Chainload)
  • Bootable ISOs (in /boot-isos or /boot/boot-isos)
    • Extra GRUB2 functionality
      • Enable GRUB2’s LVM support
      • Enable GRUB2’s RAID support
      • Enable GRUB2’s PATA support (to work around BIOS bugs/limitations)
      • Mount encrypted volumes (LUKS and geli)
      • Enable serial terminal
    • Extra Search functionality
      • Search in floppy ON/OFF
      • Search in CDROM ON/OFF
  • List Devices / Partitions
  • Color ON /OFF
  • Exit
    • Halt the computer
    • Reboot the computer
  • Supported Operating Systems

    Excluding overly customised kernels from university students, Super Grub2 Disk can autodetect and boot almost every operating system. Some examples are listed here so that search engines can index them, and also to reassure users searching for their own special (according to them) operating system.

    • Windows
      • Windows 10
      • Windows Vista/7/8/8.1
      • Windows NT/2000/XP
      • Windows 98/ME
      • MS-DOS
      • FreeDOS
    • GNU/Linux
      • Direct Kernel with autodetected initrd
    Super Grub2 Disk – Detect any Operating System – Linux kernels detected
    • vmlinuz-*
    • linux-*
    • kernel-genkernel-*
  • Debian / Ubuntu / Mint
  • Mageia
  • Fedora / CentOS / Red Hat Enterprise Linux (RHEL)
  • openSUSE / SUSE Linux Enterprise Server (SLES)
  • Arch
  • And many, many more.
    • FreeBSD
      • FreeBSD (single)
      • FreeBSD (verbose)
      • FreeBSD (no ACPI)
      • FreeBSD (safe mode)
      • FreeBSD (Default boot loader)
    • EFI files
    • Mac OS X/Darwin 32bit or 64bit
    Super Grub2 Disk 2.00s2 rc4 Mac OS X entries (image credit: Smx)

    Support for different hardware platforms

    Before this release we only had the hybrid version, aimed at regular PCs. Now, with the newer EFI-based machines, you also have the EFI standalone versions, among others. Note that the ‘classic’ images do not support booting when Secure Boot is enabled.

    • Almost any PC thanks to the hybrid version (i386, x86_64, i386-efi, x86_64-efi) (ISO)
    • EFI x86_64 standalone version (EFI)
    • EFI i386 standalone version (EFI)
    • Additional Floppy, CD and USB in one download (ISO)
      • i386-pc
      • i386-efi
      • x86_64-efi

    Known bugs

    • Non-English translations are not complete

    Supported Media

    • Compact Disk – Read Only Memory (CD-ROM) / DVD
    • Universal Serial Bus (USB) devices
    • Floppy (1.98s1 version only)






    The download links, supported architectures and checksums for this release are the same as those listed in the “Super Grub2 Disk 2.06s3-beta4 downloads” post above, so they are not repeated here.

    Changelog (since the former 2.06s2-beta1 release)


    • Added Hungarian translation



    • Oliver Tzeng (1):

    • Added Traditional Chinese translation



    • Osamu Aoki (1):

    • Added Japanese translation



    • adrian15 (67):

    • DEVELOPMENT.md – New section: How to update Secure Boot Binaries.

    • download-x64-debian was updated to use Debian 12 (Bookworm) Grub 2.06

    • download-x64-debian: New approach based on simpler variables.

    • download-x64-debian: Update to most recent Bookworm binaries.

    • download-ia32-debian: New approach based on simpler variables.

    • download-ia32-debian: Update to most recent Bookworm binaries.

    • download-x64-ubuntu: New approach based on simpler variables.

    • download-x64-ubuntu: Update to most recent Jammy binaries.

    • Remove secureboot deprecated lines.

    • Fix SecureBoot binaries url so that we can unpack its binaries.

    • Bump version to 2.06s3-beta1.

    • Force to update devices after enabling native disk drivers.

    • Ignore mo files from sgd_locale directory.

    • osdetect.cfg: Extract bootdir variable.

    • osdetect.cfg: initrd_file loop improvement.

    • osdetect.cfg: Added linux_entry_add function.

    • osdetect.cfg: Use linux_entry_add function.

    • osdetect.cfg: Add linux recovery option.

    • cfgdetect.cfg – Multi subvol

    • cfgdetect.cfg – Multi subvol (prettify)

    • cfgextract.cfg – Fixed identation.

    • cfgextract.cfg – Multi subvol

    • cfgextract.cfg – Multi subvol (prettify)

    • osdetect.cfg – Multi subvol

    • osdetect.cfg – Multi subvol (prettify)

    • Show ISO filename before its path.

    • Make sure to install grub-common so that unicode.pf2 can be used.

    • Show boot mode.

    • Search for grub.cfg files at EFI partitions.

    • Fixed grub-install path.

    • Do not end search at EFI files.

    • osdetect.cfg: Actual Linux Kernel boot fix.

    • Fix AUTHORS and COPYING path

    • Fixed Simplified Chinese language name.

    • Bump version to 2.06s3-beta2.

    • download-x64-ubuntu: Make sure to use dualsigned image.

    • Bump version to 2.06s3-beta3.

    • Added missing forward slashes in cfgdetect and cfgextract.

    • BTRFS – Fixed subvol detection.

    • osdetect.cfg – bootdir has been renamed to bootpath.

    • Linux Filesystem detection refactor

    • osdetect.cfg – Linux entries. Remove extra space.

    • Group Linux entries by basic kernel options.

    • BTRFS – Linux kernel check fixed.

    • grubdetect.cfg – Added BTRFS support.

    • grubdetect.cfg – Change imgs order.

    • autoiso.cfg – Added BTRFS support.

    • diskpartchainboot.cfg: Fix quoted label.

    • New osdetect.cfg batch implementation.

    • osdetect-os.cfg files were added.

    • osdetect-linux.cfg: Group entries by version.

    • osdetect-linux.cfg: Ident special kernel options.

    • osdetect-osx.cfg: Ident special kernel options.

    • osdetect-freebsd.cfg: Ident special kernel options.

    • osdetect-windows-nt.cfg: Ident special boot options.

    • osdetect-windows-vista.cfg: Ident special boot options.

    • Partition labels support.

    • Use partition labels.

    • Added ReactOS boot support.

    • Added Hurd boot support.

    • Show partitions at the very end.

    • Entry found logic is based on string values.

    • Download SecureBoot scripts – Simplify their failure.

    • path_title with custom ident.

    • Boot Linux from /boot partition.

    • osdetect-reactos.cfg: Disable additional ReactOS options.

    • Bump version to 2.06s3-beta4.



    • mk-pmb (1):

    • Refactor unicode font file generation



    • tofilwiktor (1):

    • Added Polish translation

    Finally, you can check all the detailed changes in our Git commits.

    If you want to translate into your language, please check the TRANSLATION file in the source code to learn how.

    Thank you – Hall of fame

    I want to thank, in alphabetical order:

    • The upstream Grub crew. I’m subscribed to both help-grub and grub-devel and I admire the work you do there.
    • Necrosporus for his insistence on making Super Grub2 Disk smaller.

    The person who wrote this article is adrian15.

    And I cannot forget to thank bTactic, the company where I work and which hosts our site.

    Some thoughts about Super Grub2 Disk development

    Super Grub2 Disk development ideas

    I don’t think we will improve Super Grub2 Disk much more. We will try to stick to official Grub2 stable releases, unless a new feature that is not included in an official Grub2 stable release is needed to give Super Grub2 Disk additional useful functionality.

    I have added some scripts to the Super Grub2 Disk build so that writing these pieces of news is more automatic and less error-prone. Check them out in the Git repo, as you will not find them in the 2.02s8 source code.

    Old idea: I don’t know when, but I plan to readapt some scripts from os-prober. That will let us detect more operating systems. Not sure when, though. It’s not something that worries me, because it does not affect too many final users. But, well, it’s something new that I hadn’t thought about.

    Again, please send us feedback on what you think is missing in Super Grub2 Disk.

    Rescatux development

    I want to focus on Rescatux development in the next months so that we have a stable release before the end of 2017. Now I need to finish adding UEFI features (mostly finished), fix the scripts that generate Rescatux source code (difficult) and write a lot of documentation.

    (adrian15 speaking)

    Getting help on using Super Grub2 Disk

    More information about Super Grub2 Disk

    The post Super Grub2 Disk 2.06s3-beta4 released first appeared on Rescatux & Super Grub2 Disk.

    03 July, 2024 09:45AM by adrian15

    hackergotchi for Deepin

    Deepin

    deepin Store App Update Log Summary (2024-06)

    New Applications
    Application Updates
    Appendix: (1) deepin previous versions (including deepin V15): https://distrowatch.com/index.php?distribution=deepin
    Content source: deepin community. Reprinted with attribution.

    03 July, 2024 07:05AM by aida

    July 02, 2024

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Colin Watson: Free software activity in June 2024

    My Debian contributions this month were all sponsored by Freexian.

    • I switched man-db and putty to Rules-Requires-Root: no, thanks to a suggestion from Niels Thykier.
    • I moved some files in pcmciautils as part of the /usr move.
    • I upgraded libfido2 to 1.15.0.
    • I made an upstream release of multipart 0.2.5.
    • I reviewed some security-update patches to putty.
    • I packaged yubihsm-connector, yubihsm-shell, and python-yubihsm.
    • openssh:
      • I did a bit more planning for the GSS-API package split, though decided not to land it quite yet to avoid blocking other changes on NEW queue review.
      • I removed the user_readenv option from PAM configuration (#1018260), and prepared a release note.
    • Python team:
      • I packaged zope.deferredimport, needed for a new upstream version of python-persistent.
      • I fixed some incompatibilities with pytest 8: ipykernel and ipywidgets.
      • I fixed a couple of RC or soon-to-be-RC bugs in khard (#1065887 and #1069838), since I use it for my address book and wanted to get it back into testing.
      • I fixed an RC bug in python-repoze.sphinx.autointerface (#1057599).
      • I sponsored uploads of python-channels-redis (Dale Richards) and twisted (Florent ‘Skia’ Jacquet).
      • I upgraded babelfish, django-favicon-plus-reloaded, dnsdiag, flake8-builtins, flufl.lock, ipywidgets, jsonpickle, langtable, nbconvert, requests, responses, partd, pytest-mock, python-aiohttp (fixing CVE-2024-23829, CVE-2024-23334, CVE-2024-30251, and CVE-2024-27306), python-amply, python-argcomplete, python-btrees, python-cups, python-django-health-check, python-fluent-logger, python-persistent, python-plumbum, python-rpaths, python-rt, python-sniffio, python-tenacity, python-tokenize-rt, python-typing-extensions, pyupgrade, sphinx-copybutton, sphinxcontrib-autoprogram, uncertainties, zodbpickle, zope.configuration, zope.proxy, and zope.security to new upstream versions.

    You can support my work directly via Liberapay.

    02 July, 2024 12:02PM

    Ubuntu Blog: Introducing Firefighting Support

    New service provides enhanced cloud support from Canonical’s experts 

    Canonical’s Managed Solutions team is proud to announce Firefighting Support, a new service for organisations that manage their infrastructure by themselves but need experts on call for troubleshooting needs.

    Firefighting Support provides managed-service-level support to customers who graduate away from fully managed services or are under security regulations too stringent to grant environment access to a third party. The service, priced per node per year, enables you to:

    • Receive support from Canonical Managed Solutions engineers in high severity situations, in the shape of a video call, within one hour of the incident alert.
    • Get embedded Ubuntu Pro + Support with all its benefits: all Firefighting Support packages include 24/7 Support for the supported products of choice at no additional fee.
    • Unlock non-operational packages and access consulting sessions with our engineers, who can guide you on topics that would not otherwise be covered by a managed service.
    • Get more peace of mind, with the knowledge that both the Canonical Support and the Canonical Managed Solutions teams collaborate to troubleshoot and sustain your tech stack.

    Ideal to transition to self-managed infrastructure

    In the current market, options for customers looking to transition out of a managed service are limited almost entirely to enterprise support, which often involves a completely different team of engineers that applies standard protocols for every incident. Firefighting Support proposes a different take for post-managed life: the same team that has managed the environment to the point of handover continues to provide enhanced support in high-severity situations. Because the team is already highly familiar with both the stack’s architecture and the customer’s business needs, the support continues to be highly personalised and makes the transition to self-management much smoother. 

    A good choice for highly-regulated environments 

    There are several situations where managed services are simply not a viable option. This is often due to security protocols, which can be internal (in the case of highly-confidential infrastructures) or external (in the case of government entities, which fall under stringent national regulations). Firefighting Support is a feasible and compliant solution that provides engineers with the same in-depth knowledge of the supported environment, but does not require them to connect to it. This enables customers to receive a managed-level support while simultaneously adhering to any security requirements they may have.

    To learn more about Firefighting Support, get in touch with our team.

    02 July, 2024 11:30AM

    hackergotchi for Deepin

    Deepin

    deepin Community Monthly Report for June 2024

    Community Data Overview

    Community Products

    deepin V23 RC2 Release: On June 14, 2024, the deepin V23 RC2 version was officially released. The latest update not only includes multiple performance optimizations but also essentially completes the selection of basic software packages, keeping pace with mainstream domestic and international distributions. It ensures that deepin V23 is comprehensively enhanced in security, stability, hardware and software compatibility, and performance in various scenarios.

    Deepin Home: In June 2024, Deepin Home received a total of 353 items of user feedback on bugs and requirements: there were 246 bug reports and 57 ...Read more

    02 July, 2024 07:32AM by aida

    July 01, 2024

    hackergotchi for Ubuntu

    Ubuntu

    Ubuntu Weekly Newsletter Issue 846

    Welcome to the Ubuntu Weekly Newsletter, Issue 846 for the week of June 23 – 29, 2024. The full version of this issue is available here.

    In this issue we cover:

    • UbuCon North America 2024 postponed
    • Ubuntu Stats
    • Hot in Support
    • LoCo Events
    • FOSSCOMM 2024: Call for papers
    • Event Report – OpenSouthCode 2024
    • Canonical offers 12 year LTS for any open source Docker image
    • Ubuntu Cloud News
    • Canonical News
    • In the Press
    • In the Blogosphere
    • Featured Audio and Video
    • Meeting Reports
    • Upcoming Meetings and Events
    • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
    • And much more!

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


    01 July, 2024 10:56PM by guiverc