July 26, 2024

hackergotchi for VyOS

VyOS

VyOS Project July 2024 Update

Hello, Community!

This summer is anything but a slow news season, but if you need a break from world events, here's a VyOS development update for you — a purely technical read. Our development efforts in June brought a reorganized (and much faster) config syntax migration subsystem, improvements in QoS and reverse proxy, support for raw firewall tables, and more.

26 July, 2024 08:26PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: MongoDB® use cases for the telecommunications industry

A trusted database is fundamental to the smooth and secure operation of telecommunications services, from network management and customer service to compliance and fraud prevention.

MongoDB® is one of the most widely used databases (DB Engines, 2024) for enterprises, including those in the telecommunications industry. It provides a sturdy, adaptable and trustworthy foundation, safeguarding sensitive customer data while facilitating swift responses to rapidly evolving situations.

With that in mind, let’s take a look at the key use cases for MongoDB in the telco sector and the advantages that this solution brings to the table. 


Subscribers and billing

The telco sector faces several technology challenges, including storing and processing large amounts of data collected from multiple sources. Operators collect data on how clients utilise their services and engage with third parties, as well as data related to the billing and charging applied to the services they receive. The difficulty comes from sorting through this vast amount of data to find insightful information, in order to deduce customer behaviour and usage trends and then generate reliable predictions on service demand, data volume, and resource consumption. The size and speed at which data must be processed in telecommunications are too challenging for traditional relational databases to handle: if database operations are not agile and reliable, the result is system slowdown and low service quality. This is what makes MongoDB a strong database fit across multiple use cases.

MongoDB’s scalability and high availability features ensure that billing systems remain operational with minimal downtime, which is critical for accurate and timely billing. Additionally, MongoDB’s querying capabilities enable operators to swiftly retrieve and analyse customer data, including billing, leading to more informed decision-making and improved customer support.
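As a rough illustration of the kind of query involved, here is a minimal pymongo sketch. The collection and field names (billing_records, subscriber_id, service, amount) are our own assumptions for the example, not details from the post.

    # Minimal sketch: total charges per service for one subscriber, last 30 days.
    # Collection and field names below are hypothetical.
    from datetime import datetime, timedelta
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    billing = client["telco"]["billing_records"]

    pipeline = [
        {"$match": {
            "subscriber_id": "SUB-12345",
            "timestamp": {"$gte": datetime.utcnow() - timedelta(days=30)},
        }},
        {"$group": {"_id": "$service", "total_charged": {"$sum": "$amount"}}},
        {"$sort": {"total_charged": -1}},
    ]
    for row in billing.aggregate(pipeline):
        print(row)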

Internet of Things (IoT)

The use of MongoDB as a database for storing, retrieving, and processing IoT data from communication networks is increasingly common. This powerful database allows for real-time processing of IoT data, including alerts and sensor events. The data stored in MongoDB can be used for predictive analytics, specifically for monitoring the health and performance of network equipment like base stations, routers, and switches. By leveraging MongoDB, large volumes of sensor data can be stored and processed, enabling predictive maintenance algorithms to identify potential equipment failures before they occur. Continuous collection and analysis of sensor data in MongoDB helps predict maintenance needs, ultimately reducing downtime and maintenance costs.
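To make the predictive-maintenance idea concrete, a deliberately simple rule might look like the sketch below; the collection, field names and the 75 degree threshold are illustrative assumptions rather than anything from the article.

    # Illustrative rule of thumb: flag devices whose average reported temperature
    # exceeds a threshold. Collection and field names are hypothetical.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    readings = client["telco"]["sensor_readings"]

    pipeline = [
        {"$group": {"_id": "$device_id", "avg_temp": {"$avg": "$temperature_c"}}},
        {"$match": {"avg_temp": {"$gt": 75}}},
    ]
    for device in readings.aggregate(pipeline):
        print(f"schedule inspection for {device['_id']} (avg {device['avg_temp']:.1f} C)")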

Network and inventory management

A suitable database is integral to telco network management because it provides a structured and efficient way to store, manage, and analyse the vast amounts of data generated. MongoDB can store network performance data, including equipment status, network logs, and real-time telemetry data. This allows for monitoring and analysis of network performance, troubleshooting, and proactive maintenance, as it can provide a near real-time data feed.

Telco companies often manage extensive inventories of network equipment and service resources. This is why use cases such as service provisioning and inventory management can be streamlined with a database technology like MongoDB. It can help track the allocation and availability of these resources through its storage, retrieval and scaling capabilities.

Take the next step

Explore how MongoDB can support your projects in the financial, telecommunications and automotive industries in our recent guide, “MongoDB® for enterprise data management”. Download it now.


Canonical for your MongoDB journey

Charmed MongoDB is an enhanced, open source, fully-compatible drop-in replacement for MongoDB Community Edition with advanced enterprise features. Canonical can support you at every stage of your MongoDB journey (from Day 0 to Day 2). Our services include:

Canonical open source solutions are building blocks that provide a unified approach to meet any current or future telco use case using multiple database, AI, cloud and edge technologies.

Further reading

Trademark Notice

“MongoDB” is a trademark or registered trademark of MongoDB Inc. Other trademarks are property of their respective owners. Charmed MongoDB is not sponsored, endorsed, or affiliated with MongoDB, Inc.

26 July, 2024 12:07PM

hackergotchi for Pardus

Pardus

Participant Applications for the Mustafa Akgül Özgür Yazılım 2024 Summer Camp Are Now Open!

The Mustafa Akgül Özgür Yazılım (Free Software) 2024 Summer Camp once again eagerly awaits everyone interested in the world of free software and open source! Held from 24 to 31 August 2024, the camp will give participants the opportunity to develop their knowledge and skills through both theoretical and practical training.

Applications for the camp, which will take place at the Gölköy Campus of Bolu Abant İzzet Baysal University, remain open until 4 August 2024.

Camp Programme and Content

Throughout the camp, training sessions delivered by expert instructors on the topics below aim to increase participants' technical knowledge and competence.

As in previous years, our Pardus trainer Şenol Aldıbaş and Pardus developers Yusuf Düzgün and Osman Ünalan will be among the instructors.

Participation and Applications

Applications opened on 24 July 2024 and will close on 4 August.

You can find detailed information about applications and submit yours at https://kamp.linux.org.tr/2024-yaz/.

Why Should You Attend?

Organised by the Linux Users Association (Linux Kullanıcıları Derneği, https://www.lkd.org.tr/) and Bolu Abant İzzet Baysal University (https://www.ibu.edu.tr), this year's event is being held at the university's Gölköy Campus.

The Mustafa Akgül Özgür Yazılım 2024 Summer Camp offers the chance to become part of the free software community, learn new technologies, and meet people with similar interests. During the camp, participants will gain theoretical knowledge and reinforce it with hands-on practice. With the participation of our Pardus trainer Şenol Aldıbaş and Pardus developers Yusuf Düzgün and Osman Ünalan, they will also have the opportunity to learn first-hand about the Pardus development process and share experiences.

Don't miss this unique experience; apply now!

26 July, 2024 11:03AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical to present keynote session at Kubecon China 2024

We are excited to announce that, on the 21st of August 2024, product managers Andreea Munteanu (AI) and Adrian Matei (Managed Services) will represent Canonical in a keynote session at Kubecon China, at the Kerry Hotel in Hong Kong. Canonical has been a regular presence at Kubecon events over the years, and we are excited to be joining Kubecon China for the first time.

The session, Tackling operational time-to-market decelerators in AI/ML projects, will dive deep into the requirements and factors that enable operational excellence in AI ventures, from infrastructure provisioning to monitoring and incident recovery. We will focus on what an open source AI stack should look like from a dual perspective: 

  • On the component side, we will explore tools like Kubeflow and MLFlow, examining their integrability in the infrastructure stack and their usability in various AI projects. 
  • On the operations side, we will provide a holistic analysis of AI infrastructure operations, from the lowest to the highest levels of the stack. The session will define operational excellence in machine learning, then present the benefits and challenges of achieving it. Lastly, we will lay out various pathways to operational excellence suitable for enterprises of multiple sizes. 

This topic is particularly important in the context of the recent operational challenges that the industry is facing. With the fast-paced innovation that surrounds the market, and the plethora of new tools and methods that emerge every day, sustaining an operationally efficient stack is key to long-term resilience and success. Furthermore, operational excellence is essential to achieving legal and security compliance, as well as to involvement in governmental projects. 
For these reasons and more, we invite you to attend the keynote. Adrian and Andreea will be joined by several Canonical APAC representatives, who are excited to meet and discuss any requirements or questions you may have. Registrations for Kubecon China are open and accessible here. Our team looks forward to meeting you there.

26 July, 2024 10:30AM

hackergotchi for Deepin

Deepin

Common Questions and Answers for deepin Users - Part 1

Hello everyone! Based on the common issues encountered during the download, installation, and usage of deepin, we have compiled this Q&A for your reference. We hope it helps. Feel free to check it out, interact, and discuss. You can also leave more specific questions in the comments or on the deepin forum, and we will continuously update this post to better serve everyone.

Q1: What are the recommended system requirements for installation? The recommended configuration is as follows: Processor: 2.0GHz multi-core or higher frequency processor; Memory: 8GB or more of physical memory; Hard Disk: 64GB ...Read more

26 July, 2024 05:25AM by aida

July 25, 2024

hackergotchi for Purism PureOS

Purism PureOS

Avoiding a Monoculture with Secure Diverse Technology

The Perils of Big IT Dependence: The Recent Global OS Outage In the world of technology, diversity is not just a virtue, but a necessity. At Purism, a company dedicated to creating secure, privacy-respecting, and freedom-preserving digital solutions, we are deeply concerned about the growing trend of reliance on just a handful of operating systems […]

The post Avoiding a Monoculture with Secure Diverse Technology appeared first on Purism.

25 July, 2024 04:28PM by Randy Siegel

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Meet us in Sydney and let’s talk about how you can navigate your AI journey

Date: August 27, 2024

Venue: The Fullerton Hotel Sydney

Time: 13:00 – 18:00

AI has officially taken off. Today, thousands of exciting projects are being taken to production in all industries, while a report by Deloitte found that use of gen AI by employees at Australian workplaces rose to 38% in 2023. Despite the wide and fast adoption of AI, businesses across the region are struggling to keep up, trying to educate employees about the risks of AI and introducing secure applications provided by experienced technology vendors. If you’re struggling with implementing AI in production, upskilling your workforce, or choosing trusted, secure solutions that respect compliance requirements, we’re here to help. Our upcoming AI workshop in Sydney will help organisations navigate this challenging AI journey and accelerate their time to market.


Canonical’s promise to offer secure open source software goes beyond Ubuntu and covers your entire stack, including AI tooling. We provide solutions that enable organisations to run their ML workloads at all scales, from workstations to cloud and edge devices. Canonical’s 20 years of experience in open source carries over to the AI industry, and it is why we help organisations assess their AI readiness and build their ML infrastructure.

AI readiness assessment

AI has huge potential to optimise costs, reduce manual tasks and enable innovation within any department. Organisations face a technological transformation that challenges them to restructure their entire infrastructure and reshape many of their processes. From optimising compute power to introducing new tooling that enables machine learning operations (MLOps), they need to go through an entire journey before seeing any return on investment. 

Canonical’s expertise helped the University of Tasmania unlock real-time space tracking with AI/ML supercomputing.

Read more about it


Any AI initiative starts with initial experimentation on a workstation, which does not present a lot of problems. As projects progress, there is a need for more sophisticated tooling that can be used to automate ML workloads, optimise ML models and deploy them to edge devices. During the workshop, we will guide organisations in assessing their AI readiness and discuss how to build their architecture, depending on the use case or existing infrastructure.

GenAI with open source tooling

The Linux Foundation reported in 2023 that almost half of organisations prefer open source tooling for their GenAI projects. Open source lowers the barrier to entry, fosters collaboration and enables practitioners to experiment quickly. Despite this, enterprises still struggle because of security concerns, the scalability of ML tooling and the costs associated with any GenAI initiative.

During the executive AI workshop in Sydney, I will talk about how open source enables GenAI projects. I will focus on the key considerations and challenges that organisations face when moving GenAI projects from experimentation to production, as well as the common pitfalls the industry faces. The talk will highlight open source tools such as OpenSearch and Kubeflow that are used to productise GenAI initiatives, and the security risks they address. By the end, you will be better equipped to run your GenAI projects in production using secure open source tooling. 

Future of AI in Asia-Pacific

Halfway through the event, we will take the points above and do a deep dive into the topic, focusing on the Asia-Pacific region. Together with industry leaders from Dell Technologies and Firmusgrid Supercloud Pty Ltd, we will discuss the challenges that companies face when it comes to AI, the industries that are ahead of the curve, and the next big things that Australia will embrace.

This event is by invitation only and tailored specifically for executive and senior management levels. Invitations will be sent out one month prior to the event to ensure a curated and impactful experience. Please register your interest to attend this exclusive workshop and join us in shaping the future of AI.

25 July, 2024 03:29PM

Ubuntu Blog: How do you select the best enterprise data storage solution for your business?


The choices you make around IT infrastructure have a great impact on both business cost and performance, across areas as diverse as operations, finance, data analysis and marketing. Given the importance of data across all of these areas and, frankly, across your business as a whole, making the right decision when choosing a new storage system is critical. In this blog post we will take a look at some of the factors to consider in order to balance cost effectiveness with performance.

Performance

There are multiple dimensions to storage performance. First, let’s consider the most simplistic metrics:

IOPS – Input/Output Operations per Second, i.e. the number of operations that can be processed in a one-second period.

Response time – The time taken for an IO operation to be processed and safely stored in a storage system and an acknowledgement sent to the requesting application. 

Bandwidth – The measure of the volume of data that can be transferred in a single second.

Things become more complex when we consider the size of each IO, which affects the total amount of bandwidth utilised. Transferring 4KB takes less time than transferring 1MB, and therefore impacts the response time of an IO request. Let’s look at two examples:

Databases

A database will typically use small IO sizes, with each operation typically only updating a database table, so the total amount of bandwidth utilised will be low. However, the response time is critically important for this use case: the faster the database receives an acknowledgment that data has been safely written, the faster the next transaction can be processed.  

Streaming

When editing multiple 4k video streams, a video editing application needs access to all of that data in totality, so here the response time of each IO request is less important than just transferring the entire video file as fast as possible, by utilising all available bandwidth to the storage system.
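To tie these two examples back to the metrics above, bandwidth is roughly IOPS multiplied by IO size. The small Python sketch below uses made-up but plausible numbers purely to illustrate the relationship:

    # Back-of-the-envelope: bandwidth (MB/s) = IOPS x IO size (KB) / 1024.
    def bandwidth_mb_s(iops: int, io_size_kb: int) -> float:
        return iops * io_size_kb / 1024

    # Database-style workload: many small IOs, modest bandwidth.
    print(bandwidth_mb_s(iops=20_000, io_size_kb=4))     # ~78 MB/s
    # Streaming-style workload: fewer, larger IOs, far more bandwidth.
    print(bandwidth_mb_s(iops=2_000, io_size_kb=1024))   # ~2000 MB/s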

Scalability

All organisations face the prospect of data growth at some point in their existence. Across the world, exabytes of new data are created every day, and while very few organisations have to deal with data growing on that kind of scale, their storage system should be expandable without disruption to existing workloads.

In some systems this is achieved by adding more and more disk shelves (scale up), which allows your capacity to grow, but this doesn’t add additional performance to the controllers of the system. In more modern scale out storage systems, when adding additional capacity you also add additional compute, so you get the best of both worlds: more capacity and more performance!

Reliability

The primary purpose of a storage system is to safely store data. If an application cannot consistently retrieve data, then the storage system is next to useless.  To protect data, modern storage systems use technologies like mirroring, parity or erasure coding to ensure that the loss of a disk or SSD doesn’t cause data loss.  Storage systems also have multiple controllers and multiple client connections to ensure high availability, should any of those components fail.  Scale out storage systems can provide even greater reliability as the software components that make up the cluster are distributed across many nodes, which allows the cluster to survive multiple hardware failures.

Flexibility

A storage system must be able to accommodate many different workloads, each with their own requirements. Some of them might be high performance, whilst others might be archival. Having the ability to migrate data between these different classes of storage pool is important, as it can free up expensive fast storage for other applications. 

While linked to capacity growth and consumption, having the ability to scale from a small storage system to a large one without compromising performance is very important. Having to migrate data is always a challenge and can cause application outages; needing to do so just to move to a larger capacity storage system should be a problem of the past!

It is also important to be able to shrink a cluster if an organisation no longer requires the total available capacity of the storage system.  This is where scale out systems have an additional advantage over proprietary scale up systems, as they are built from general purpose hardware, which can be reused for other applications as necessary.

Featureset

When comparing multiple solutions, it is important to focus on the features that are important to you. Which protocols (block, file or object) do you need to use, and can the system support all of them? Do you need local replication like snaps and clones? And if yes, how many of these are the systems capable of creating and managing? Do you expect to need remote replication, or compliance features like data at rest encryption or object versioning?

Working with application owners in your organisation can help narrow down which features are really important versus choosing a solution based on hero-numbers or extreme limits sometimes shared by vendors.

Cost efficiency

In each of these areas, it’s possible to make decisions that cause the cost of the storage system to increase, which is why it is important to match the needs of a use case to the capabilities of the system. For example, we could build a storage system with all flash disks, but is that necessary for archival class storage that is accessed infrequently? Similarly, when thinking about the available features, do you need remote replication, and is there an extra licence cost for that feature?

Being pragmatic and understanding the things you don’t need is just as important as understanding those that you do!

Open source options for Enterprise Storage

Balancing all of these needs – performance, scalability, flexibility and cost – can require compromise and a solid understanding of what you’re looking to achieve across these areas.

Proprietary storage arrays often entail significant costs, paid upfront, for both support and future upgrades, and in some cases upgrades can be difficult and time consuming, especially if you have to migrate from a smaller system in order to expand further. Public cloud solutions are cheap and flexible to begin with, but once you have significant amounts of data stored, they are no longer the most economically efficient approach (if you’re interested in going into more detail on this, read our white paper on that subject here!). 

Open source storage systems such as Ceph are ready for enterprise deployment, and can provide an economically advantageous answer to all of the needs described in this blog post. Canonical Ceph is a storage solution for all scales and all workloads, from the edge to large scale enterprise-wide deployments, and for all protocols (block, file and object). 

Diverse use cases with different performance, capacity, and protocol needs can all be managed by a single scale out cluster. Ceph’s ability to scale horizontally using commodity hardware means that growth can be incremental, and tuned to meet either performance or capacity needs.

Architectural overview of a Ceph cluster

Learn more


Download our whitepaper – Software-defined storage for enterprises, to learn more about:

  • The budget challenges businesses face when scaling their storage
  • How open-source software-defined storage provides a viable alternative to legacy appliance-based storage systems
  • How to use Ceph to future proof for:
    • Reliability
    • Scalability
    • Flexibility
  • How to scale for growth, but maintain cost efficiency
  • How to reduce data silos with consolidation into a single multi-protocol storage cluster
  • How to prepare for disaster situations with local and remote data replication
  • How managed services can provide an “as-a-service experience” while reducing cost

Additional resources

25 July, 2024 08:42AM

hackergotchi for GreenboneOS

GreenboneOS

NIS2: What you need to know

Get your NIS2 implementation on track!

The deadline for the implementation of NIS2 is approaching – by October 17, 2024, stricter cybersecurity measures are to be transposed into law in Germany via the NIS2 Implementation Act. Other member states will develop their own legislation based on EU Directive 2022/2555. We have taken a close look at this directive to give you the most important pointers and signposts for the entry into force of NIS2 in this short video. In the video, you will find out whether your company is affected, what measures you should definitely take, which cybersecurity topics you need to pay particular attention to, who you can consult in this regard and what the consequences of non-compliance are.

Learn about the Cyber Resilience Act, which provides a solid framework to strengthen your organization’s resilience against cyberattacks. The ENISA Common Criteria will help you assess the security of your IT products and systems and take a risk-minimizing approach right from the development stage. Also prioritize the introduction of an information security management system (ISMS), for example by pursuing ISO 27001 certification for your company. Seek advice about IT baseline protection from specialists recommended by the BSI or your local responsible office.

In addition to the BSI as a point of contact for matters relating to NIS2, we are happy to assist you and offer certified solutions in the areas of vulnerability management and penetration testing. By taking a proactive approach, you can identify security gaps in your systems at an early stage and secure them before they can be used for an attack. Our vulnerability management solution automatically scans your system for weaknesses and reports back to you regularly. During penetration testing, a human tester attempts to penetrate your system to give you final assurance about the attack surface of your systems.

You should also make it a habit to stay up to date with regular cybersecurity training and establish a lively exchange with other NIS2 companies. This is the only way for NIS2 to lead to a sustainable increase in the level of cyber security in Europe.

To track down the office responsible for you, follow the respective link for your state.

Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden

25 July, 2024 07:00AM by Greenbone AG

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E309 Ananás Na Chapa

Due to unforeseen circumstances and a pineapple-frying heatwave, there will be no new podcast episode this week. Still, we bring you a lovely Christmas message and nostalgic memories of the 1980s.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

25 July, 2024 12:00AM

July 24, 2024

hackergotchi for Volumio

Volumio

Volumio Rivo Shines: A Bold Triumph In Son-video’s Performance Test

We offer a French translation of the same article for our Francophone Volumio-philes

Son-Vidéo went above and beyond in their evaluation of the Volumio Rivo. Their rigorous testing covered various music genres and audio setups, ensuring a comprehensive review not only of the Rivo’s hardware but also of how it interacts with Volumio OS. Their meticulous attention to detail is something we appreciate at Volumio, where we put the same care into crafting our streamers. We’ve pulled out some key highlights of their evaluation to show how the Rivo not only withstood the test but shone through with its characteristic soundscape, connectivity options, and design.

 

Key Highlights

Outstanding Sound Quality:

The Volumio Rivo is celebrated for its exceptional sound quality, setting a new standard in audio reproduction. With remarkable clarity and nuanced detail, the Rivo’s soundscape exceeds expectations across diverse music genres. Whether rendering delicate classical compositions or delivering the punch of rock anthems, its fidelity and tonal accuracy are unmatched. This attention to sonic detail ensures a captivating and immersive listening experience. Compared to the Eversolo DMP-A6 Master edition, the Rivo resonates with both audiophiles seeking precision and casual listeners appreciating the richness of their favorite tracks.

Elegant Design:

Son-Vidéo

Another thing Son-Video praised the Rivo for was its elegant and practical design. Crafted with a sleek chassis that not only enhances its visual appeal but also provides robust protection against electromagnetic interference, the Rivo merges form with function seamlessly. Its minimalist aesthetics complement any audio setup, making it a sophisticated addition to modern living spaces.

Versatile Connectivity:

The review points out that a hallmark feature of the Rivo is its versatile connectivity options, catering to a wide range of audio preferences and setups. Whether streaming wirelessly from online services or playing locally stored files via USB or micro SD card, the Rivo offers unparalleled flexibility. It effortlessly integrates with various audio systems, supporting high-resolution outputs including USB-A, coaxial, optical, AES/EBU (up to 24-bit/192 kHz), and HDMI for seamless connectivity to home theater systems. This versatility extends to its ability to connect with external devices such as CD players for playback and ripping, making the Volumio Rivo a versatile centerpiece for audio enthusiasts seeking uncompromising performance and adaptability.

The necessity of an external power supply:

Son-Vidéo

Though not all the points were positive, the review indicates that the Rivo benefits from advanced power filtering and a separate transformer to ensure a stable current. Son-Vidéo does note that the included mains transformer may not match the quality of the internal components. Thankfully, we anticipated this concern for our demanding audiophiles and created the Lineo5, which provides a stable current flow optimized for the Rivo and Primo.

 

Son-Vidéo’s testing process was rigorous. They didn’t just test the basics but explored every aspect of the Rivo, using different musical genres and setups to push the device to its limits. The detail of their evaluation adds significant weight to their positive assessment. We’re glad that the Rivo not only withstood the test but also received praise! 

 

With this review, we’re confident in recommending the Rivo as an option for anyone looking to enhance their music listening experience.

 

For an in-depth look at the review, visit Son-Vidéo.com.

The post Volumio Rivo Shines: A Bold Triumph In Son-video’s Performance Test appeared first on Volumio.

24 July, 2024 07:58AM by Alia Elsaady

July 23, 2024

hackergotchi for Purism PureOS

Purism PureOS

Private Cellular Networking and Secure Client Devices

Private cellular networking and secure client devices provide a holistic, turnkey solution leveraging the N79 band for government or enterprise networks. 5G The advent of 5G technology has revolutionized the world of wireless communication. Among the various aspects of 5G, the N79 band and private 5G networks have emerged as significant components. This post explores […]

The post Private Cellular Networking and Secure Client Devices appeared first on Purism.

23 July, 2024 06:36PM by Randy Siegel

hackergotchi for Volumio

Volumio

Volumio Rivo Shines: A Remarkable Triumph in Son-Vidéo’s Performance Test

Son-Vidéo went above and beyond in its evaluation of the Volumio Rivo. Their rigorous tests covered different musical genres and audio configurations, guaranteeing a complete examination not only of the Rivo’s hardware but also of how it interacts with Volumio OS. Their meticulous attention to detail is something we appreciate at Volumio, where we put the same care into crafting our streamers. We present a few highlights of their evaluation, to show how the Rivo not only withstood the test but also stood out with its characteristic soundscape, connectivity options and design.

Highlights

Exceptional sound quality:

The Volumio Rivo is renowned for its exceptional sound quality, setting a new standard in audio reproduction. With remarkable clarity and nuanced detail, the Rivo’s soundscape exceeds expectations across a variety of musical genres. Whether rendering delicate classical compositions or giving punch to rock anthems, its fidelity and tonal accuracy are unmatched. This attention to sonic detail guarantees a captivating and immersive listening experience which, compared with an Eversolo DMP-A6 Master, will certainly resonate with both audiophiles seeking precision and casual listeners appreciating the richness of their favourite tracks.

Elegant design:

volumio rivo

Son-Vidéo also praised the Rivo for its elegant and practical design. With a sleek chassis that not only enhances its visual appeal but also provides solid protection against electromagnetic interference, the Rivo blends form and function seamlessly. Its minimalist aesthetic complements any audio setup, making it a sophisticated addition to modern living spaces.

Versatile connectivity:

volumio rivo

The review points out that one of the Rivo’s hallmark features is the versatility of its connectivity options, catering to a wide range of audio preferences and setups. Whether streaming wirelessly from online services or playing locally stored files via USB or micro SD card, the Rivo offers unrivalled flexibility. It integrates effortlessly with various audio systems and supports high-resolution outputs including USB-A, coaxial, optical, AES/EBU (up to 24-bit/192 kHz) and HDMI, for seamless connectivity to home theatre systems. This versatility extends to its ability to connect to external devices such as CD players for playback and ripping, making the Volumio Rivo a versatile centrepiece for audio enthusiasts seeking uncompromising performance and adaptability.

The need for an external power supply:

volumio rivo

Although not every point was positive, the review notes that the Rivo benefits from advanced power filtering and a separate transformer to ensure a stable current. Son-Vidéo does note that the supplied mains transformer may not match the quality of the internal components. Fortunately, we anticipated this concern for our demanding audiophiles and created the Lineo5, which provides a stable current flow optimised for the Rivo and Primo.

Son-Vidéo’s testing process was rigorous. They did not just test the basics, but explored every aspect of the Rivo, using different musical genres and configurations to push the device to its limits. The detail of their evaluation gives weight to their positive assessment of the Rivo. We are delighted that the Rivo withstood the test and received praise!

This review strengthens our confidence in recommending the Rivo to anyone looking to enhance their music listening experience.

For an in-depth look at the review, visit Son-Vidéo.com.

The post Rivo de Volumio brille : Un triomphe remarqué dans le test de performance par Son-Vidéo appeared first on Volumio.

23 July, 2024 10:31AM by Alia Elsaady

hackergotchi for GreenboneOS

GreenboneOS

How CSAF 2.0 Advances Automated Vulnerability Management

IT security teams don’t necessarily need to know what CSAF is, but on the other hand, familiarity with what’s happening “under the hood” of a vulnerability management platform can give context to how next-gen vulnerability management is evolving, and the advantages of automated vulnerability management. In this article, we take an introductory journey through CSAF 2.0, what it is, and how it seeks to benefit enterprise vulnerability management. 

Greenbone AG is an official partner of the German Federal Office for Information Security (BSI) to integrate technologies that leverage the CSAF 2.0 standard for automated cybersecurity advisories.

What is CSAF?

The Common Security Advisory Framework (CSAF) 2.0 is a standardized, machine-readable vulnerability advisory format. CSAF 2.0 enables the upstream cybersecurity intelligence community, including software and hardware vendors, governments, and independent researchers to provide information about vulnerabilities. Downstream, CSAF allows vulnerability information consumers to aggregate security advisories from a decentralized group of providers and automate risk assessment with more reliable information and less resource overhead.

By providing a standardized machine readable format, CSAF represents an evolution towards “next-gen” automated vulnerability management which can reduce the burden on IT security teams facing an ever increasing number of CVE disclosures, and improve risk-based decision making in the face of an “ad-hoc” approach to vulnerability intelligence sharing.

CSAF 2.0 is the replacement for the Common Vulnerability Reporting Framework (CVRF) v1.2 and extends its predecessor’s capabilities to offer greater flexibility.

Here are the key takeaways:

  • CSAF is an international open standard for machine readable vulnerability advisory documents that uses the JSON markup language.
  • CSAF aggregation is a decentralized model of distributing vulnerability information.
  • CSAF 2.0 is designed to enable next-gen automated enterprise vulnerability management.

The Traditional Process of Vulnerability Management

The traditional process of vulnerability management is difficult for large organizations with complex IT environments. The number of CVEs published each patch cycle has been increasing at an unmanageable pace [1][2]. In a traditional vulnerability management process, IT security teams collect vulnerability information manually via Internet searches. In this way, the process involves extensive manual effort to collect, analyze, and organize information from a variety of sources and ad-hoc document formats.

These sources typically include:

  • Vulnerability tracking databases such as NIST NVD
  • Product vendor security advisories
  • National and international CERT advisories
  • CVE numbering authority (CNA) assessments
  • Independent security research
  • Security intelligence platforms
  • Exploit code databases

The ultimate goal of conducting a well-informed risk assessment can be confounded during this process in several ways. Advisories, even those provided by the product vendor themselves, are often incomplete and come in a variety of non-standardized formats. This lack of cohesion makes data-driven decision making difficult and increases the probability of error.

Let’s briefly review the existing vulnerability information pipeline from both the creator and consumer perspectives:

The Vulnerability Disclosure Process

Common Vulnerabilities and Exposures (CVE) records published in the National Vulnerability Database (NVD) of NIST (the National Institute of Standards and Technology) represent the world’s most centralized global repository of vulnerability information. Here is an overview of how the vulnerability disclosure process works:

  1. Product vendors become aware of a security vulnerability from their own security testing or from independent security researchers, triggering an internal vulnerability disclosure policy into action. In other cases, independent security researchers may interact directly with a CVE Numbering Authority (CNA) to publish the vulnerability without prior consultation with the product vendor.
  2. Vulnerability aggregators such as NIST NVD and national CERTs create unique tracking IDs (such as a CVE ID) and add the disclosed vulnerability to a centralized database where product users and vulnerability management platforms such as Greenbone can become aware and track progress.
  3. Various stakeholders such as the product vendor, NIST NVD and independent researchers publish advisories that may or may not include remediation information, expected dates for official patches, a list of affected products, CVSS impact assessment and severity ratings, Common Platform Enumeration (CPE) or Common Weakness Enumeration (CWE).
  4. Other cyber-threat intelligence providers such as CISA’s Known Exploited Vulnerabilities (KEV) and First.org’s Exploit Prediction Scoring System (EPSS) provide additional risk context.

The Vulnerability Management Process

Product users are responsible for ingesting vulnerability information and applying it to mitigate the risk of exploitation. Here is an overview of the traditional enterprise vulnerability management process:

  1. Product users need to manually search CVE databases and monitor security advisories that pertain to their software and hardware assets, or utilize a vulnerability management platform such as Greenbone which automatically aggregates the available ad-hoc threat advisories.
  2. Product users must match the available information to their IT asset inventory. This typically involves maintaining an asset inventory and conducting manual matching, or using a vulnerability scanning product to automate the process of building an asset inventory and executing vulnerability tests.
  3. IT security teams prioritize the discovered vulnerabilities according to the contextual risk presented to critical IT systems, business operations, and in some cases public safety.
  4. Remediation tasks are assigned according to the final risk assessment and available resources.

What is Wrong with Traditional Vulnerability Management?

Traditional or manual vulnerability management processes are operationally complex and lack efficiency. Aside from the operational difficulties of implementing software patches, the lack of accessible and reliable information bogs down efforts to effectively triage and remediate vulnerabilities. Using CVSS alone to assess risk has also been criticized [1][2] for lacking sufficient context to satisfy robust risk-based decision making. Although vulnerability management platforms such as Greenbone greatly reduce the burden on IT security teams, the overall process is still often plagued by time-consuming manual aggregation of ad-hoc vulnerability advisories that can often result in incomplete information.

Especially in the face of an ever increasing number of vulnerabilities, aggregating ad-hoc security information risks being too slow and introduces more human error, increasing vulnerability exposure time and confounding risk-based vulnerability prioritization.

Lack of Standardization Results in Ad-hoc Intelligence

The current vulnerability disclosure process lacks a formal method of distinguishing between reliable vendor provided information, and information provided by arbitrary independent security researchers such as Partner CNAs. In fact, the official CVE website itself promotes the low requirements for becoming a CNA. This results in a large number of CVEs being issued without detailed context, forcing extensive manual enrichment downstream.

Which information is included depends on the CNA’s discretion and there is no way to classify the reliability of the information. As a simple example of the problem, the affected products in an ad-hoc advisory are often provided using a wide range of descriptors that need to be manually interpreted. For example:

  • Version 8.0.0 – 8.0.1
  • Version 8.1.5 and later
  • Version <= 8.1.5
  • Versions prior to 8.1.5
  • All versions < V8.1.5
  • 0, V8.1, V8.1.1, V8.1.2, V8.1.3, V8.1.4, V8.1.5

Scalability

Because vendors, assessors (CNAs), and aggregators utilize various distribution methods and formats for their advisories, the challenge of efficiently tracking and managing vulnerabilities becomes operationally complex and difficult to scale. Furthermore, the increasing rate of vulnerability disclosure exacerbates manual processes, overwhelms security teams, and increases the risk of error or delay in remediation efforts.

Difficult to Assess Risk Context

NIST SP 800-40r4 “Guide to Enterprise Patch Management Planning” Section 3 advises the application of enterprise level vulnerability metrics. Because risk ultimately depends on each vulnerability’s context – factors such as affected systems, potential impact, and exploitability – the current environment of ad-hoc security intelligence presents a significant barrier to robust risk-based vulnerability management.

How Does CSAF 2.0 Solve These Problems?

CSAF documents are essential cyber threat advisories designed to optimize the vulnerability information supply chain. Instead of manually aggregating ad-hoc vulnerability data, product users can automatically aggregate machine-readable CSAF advisories from trusted sources into an Advisory Management System that combines core vulnerability management functions of asset matching and risk assessment. In this way, security content automation with CSAF aims to address the challenges of traditional vulnerability management by providing more reliable and efficient security intelligence, creating the potential for next-gen vulnerability management.

Here are some specific ways that CSAF 2.0 solves the problems of traditional vulnerability management:

More Reliable Security Information

CSAF 2.0 remedies the crux of ad-hoc security intelligence by standardizing several aspects of a vulnerability disclosure. For example, the affected version specifier fields allow standardized data such as Version Range Specifier (vers), Common Platform Enumeration (CPE), Package URL specification, CycloneDX SBOM as well as the product’s common name, serial number, model number, SKU or file hash to identify affected product versions.

In addition to standardizing product versions, CSAF 2.0 also supports Vulnerability Exploitability eXchange (VEX) for product vendors, trusted CSAF providers, or independent security researchers to explicitly declare product remediation status. VEX provides product users with recommendations for remedial actions.

The explicit VEX status declarations are:

  • Not affected: No remediation is required regarding a vulnerability.
  • Affected: Actions are recommended to remediate or address a vulnerability.
  • Fixed: Represents that these product versions contain a fix for a vulnerability.
  • Under Investigation: It is not yet known whether these product versions are affected by a vulnerability. An update will be provided in a later release.
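As a rough sketch of how these declarations become machine-actionable, the snippet below walks a simplified advisory structure and lists the product IDs that still require action. The dictionary layout loosely mirrors CSAF 2.0's product_status object, but it is an assumption made for illustration, not a schema-complete document.

    # Simplified, hypothetical advisory fragment grouped by VEX status.
    advisory = {
        "vulnerabilities": [
            {
                "cve": "CVE-2024-0000",
                "product_status": {
                    "known_affected": ["CSAFPID-0001"],
                    "fixed": ["CSAFPID-0002"],
                    "known_not_affected": ["CSAFPID-0003"],
                    "under_investigation": ["CSAFPID-0004"],
                },
            }
        ]
    }

    for vuln in advisory["vulnerabilities"]:
        status = vuln.get("product_status", {})
        needs_action = status.get("known_affected", []) + status.get("under_investigation", [])
        print(vuln["cve"], "still needs attention for:", needs_action)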

More Effective Use of Resources

CSAF enables several upstream and downstream optimizations to the traditional vulnerability management process. The OASIS CSAF 2.0 documentation includes descriptions of several compliance goals that enable cybersecurity administrators to automate their security operations for more efficient use of resources.

Here are some compliance targets referenced in the CSAF 2.0 documentation that support more effective use of resources above and beyond the traditional vulnerability management process:

  • Advisory Management System: A software system that consumes data and produces CSAF 2.0 compliant advisory documents. This allows CSAF producing teams to assess the quality of data being ingested at a point in time, verify, convert, and publish it as a valid CSAF 2.0 security advisory. This allows CSAF producers to optimize the efficiency of their information pipeline while verifying accurate advisories are published.
  • CSAF Management System: A program that can manage CSAF documents and is able to display their details as required by CSAF viewer. At the most fundamental level, this allows both upstream producers and downstream consumers of security advisories to view their content in a human readable format.
  • CSAF Asset Matching System / SBOM Matching System: A program that integrates with a database of IT assets including Software Bill of Materials (SBOM) and can match assets to any CSAF advisories. An asset matching system serves to provide a CSAF consuming organization with visibility into their IT infrastructure, identify where vulnerable products exist, and optimally provide automated risk assessment and remediation information.
  • Engineering System: A software analysis environment within which analysis tools execute. An engineering system might include a build system, a source control system, a result management system, a bug tracking system, a test execution system and so on.

Decentralized Cybersecurity Information

A recent outage of the NIST National Vulnerability Database (NVD) CVE enrichment process demonstrates how reliance on a single source of vulnerability information can be risky. CSAF is decentralized, allowing downstream vulnerability consumers to source and integrate information from a variety of sources. This decentralized model of intelligence sharing is more resilient to an outage by one information provider, while sharing the burden of vulnerability enrichment more effectively distributes the workload across a wider set of stakeholders.

Enterprise IT product vendors such as RedHat and Cisco have already created their own CSAF and VEX feeds, while government cybersecurity agencies and national CERT programs such as the German Federal Office for Information Security (BSI) and the US Cybersecurity & Infrastructure Security Agency (CISA) have also developed CSAF 2.0 sharing capabilities.

The decentralized model also allows for multiple stakeholders to weigh in on a particular vulnerability providing downstream consumers with more context about a vulnerability. In other words, an information gap in one advisory may be filled by an alternative producer that provides the most accurate assessment or specialized analysis.

Improved Risk Assessment and Vulnerability Prioritization

Overall, the benefits of CSAF 2.0 contribute to more accurate and efficient risk assessment, prioritization and remediation efforts. Product vendors can directly publish reliable VEX advisories giving cybersecurity decision makers more timely and trustworthy remediation information. Also, the aggregate severity (aggregate_severity) object in CSAF 2.0 acts as a vehicle to convey reliable urgency and criticality information for a group of vulnerabilities, enabling a more unified risk analysis, and more data driven prioritization of remediation efforts, reducing the exposure time of critical vulnerabilities.

Summary

Traditional vulnerability management processes are plagued by lack of standardization resulting in reliability and scalability issues and increasing the difficulty of assessing risk context and the likelihood of error.

The Common Security Advisory Framework (CSAF) 2.0 seeks to revolutionize the existing process of vulnerability management by enabling more reliable, automated vulnerability intelligence gathering. By providing a standardized machine-readable format for sharing cybersecurity vulnerability information, and decentralizing its source, CSAF 2.0 empowers organizations to harness more reliable security information to achieve more accurate, efficient, and consistent vulnerability management operations.

Greenbone AG is an official partner of the German Federal Office for Information Security (BSI) to integrate technologies that leverage the CSAF 2.0 standard for automated cybersecurity advisories.

23 July, 2024 07:00AM by Joseph Lee

July 22, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 849

Welcome to the Ubuntu Weekly Newsletter, Issue 849 for the week of July 14 – 20, 2024. The full version of this issue is available here.

In this issue we cover:

  • Upcoming APT cryptography changes for 24.04.1
  • Ubuntu 23.10 (Mantic Minotaur) reached End of Life on July 11, 2024
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Oracular Oriole 24.10 Wallpaper Competition
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


22 July, 2024 11:02PM

Faizul "Piju" 9M2PJU: Understanding Permission Setting and Security on FreeBSD vs. Linux

Introduction

When managing Unix-like operating systems, understanding permission settings and security practices is crucial for maintaining system integrity and protecting data. FreeBSD and Linux, two popular Unix-like systems, offer distinct approaches to permission settings and security. This article delves into these differences, providing a comprehensive comparison to help system administrators and users navigate these systems effectively.

1. Overview of FreeBSD and Linux

FreeBSD is a Unix-like operating system derived from the Berkeley Software Distribution (BSD), renowned for its stability, performance, and advanced networking features. It is widely used in servers, network appliances, and embedded systems.

Linux, on the other hand, is a free and open-source operating system kernel created by Linus Torvalds. It is the foundation of numerous distributions (distros) like Ubuntu, Fedora, and CentOS. Linux is known for its flexibility, broad hardware support, and extensive community-driven development.

2. File System Hierarchy

Both FreeBSD and Linux follow the Unix file system hierarchy but with slight variations. Understanding these differences is key to grasping permission settings on each system.

  • FreeBSD: Follows the traditional BSD layout documented in hier(7), which resembles the FHS but has its own nuances. The /usr directory contains user programs and data, while /var holds variable data like logs and databases. FreeBSD also utilizes /usr/local for locally installed software.
  • Linux: Generally adheres to the FHS. Important directories include /bin for essential binaries, /etc for configuration files, /home for user directories, and /var for variable files.

3. Permissions and Ownership

Both systems use a similar model for file permissions but have some differences in implementation and additional features.

3.1 Basic File Permissions

  • FreeBSD:
    • Owner: The user who owns the file.
    • Group: A group of users with shared permissions.
    • Others: All other users.
    • Permissions are represented as read (r), write (w), and execute (x) for each category. Commands to manage permissions:
      • ls -l: Lists files with permissions.
      • chmod: Changes file permissions.
      • chown: Changes file ownership.
      • chgrp: Changes group ownership.
  • Linux:
    • Similar to FreeBSD, Linux file permissions are also divided into owner, group, and others.
    • Commands are the same: ls -l, chmod, chown, chgrp. A short example of these commands follows this list.
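
A quick, illustrative example of these shared commands, which works the same way on FreeBSD and Linux (the file, user and group names are made up):

# Show current permissions and ownership
ls -l report.txt

# Give the owner read/write, the group read-only, and others no access
chmod 640 report.txt
chmod u=rw,g=r,o= report.txt     # the same change in symbolic form

# Hand the file over to another user and group
chown bob report.txt
chgrp developers report.txt      # or combined: chown bob:developers report.txt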

3.2 Special Permissions

  • FreeBSD:
    • Setuid: Allows users to execute a file with the file owner’s permissions.
    • Setgid: When applied to a directory, new files inherit the directory’s group.
    • Sticky Bit: When set on a directory, ensures that only a file’s owner (or root) can delete or rename files within it.
  • Linux:
    • Setuid: Allows a user to execute a file with the permissions of the file owner.
    • Setgid: When set on a directory, files created within inherit the directory’s group.
    • Sticky Bit: Similar to FreeBSD, it restricts file deletion to the file owner. An example of setting these bits follows this list.
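
Both systems set these special bits with chmod; the octal and symbolic forms below are equivalent, and the paths are purely illustrative:

# Setuid on an executable: it runs with its owner's privileges
chmod u+s /usr/local/bin/mytool      # symbolic form
chmod 4755 /usr/local/bin/mytool     # octal equivalent

# Setgid on a shared directory: new files inherit the directory's group
chmod g+s /srv/shared                # or: chmod 2775 /srv/shared

# Sticky bit on a world-writable directory: only a file's owner may remove it
chmod +t /srv/scratch                # or: chmod 1777 /srv/scratch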

4. Extended Attributes and ACLs

4.1 FreeBSD:

FreeBSD supports Extended File Attributes (EAs) and Access Control Lists (ACLs) to provide more granular permission control.

  • Extended Attributes: Used to store metadata beyond standard attributes. Managed with setextattr, getextattr and lsextattr.
  • Access Control Lists (ACLs): Allow setting permissions for multiple users and groups. Managed with setfacl and getfacl.

4.2 Linux:

Linux also supports Extended Attributes and ACLs.

  • Extended Attributes: Managed with the setfattr and getfattr utilities, which wrap the setxattr and getxattr system calls.
  • Access Control Lists (ACLs): Managed with setfacl and getfacl. A combined example for both systems follows this list.
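
As a hedged sketch combining both mechanisms (file and user names are invented; the FreeBSD ACL commands assume POSIX.1e ACLs on UFS, while NFSv4 ACLs on ZFS use a different entry format):

# FreeBSD extended attributes
setextattr user comment "quarterly draft" report.txt
getextattr user comment report.txt
lsextattr user report.txt

# Linux extended attributes (attr package)
setfattr -n user.comment -v "quarterly draft" report.txt
getfattr -n user.comment report.txt

# POSIX ACLs: the same commands on both systems
setfacl -m u:carol:r report.txt      # grant user carol read access
getfacl report.txt                   # list the ACL entries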

5. Security Models and Practices

5.1 FreeBSD Security Model:

FreeBSD includes several features for enhanced security:

  • Jails: Provide a form of operating system-level virtualization. Each jail has its own filesystem, network configuration, and process space, which helps in isolating applications and services.
  • TrustedBSD Extensions: Enhance FreeBSD’s security by adding Mandatory Access Control (MAC) frameworks, which include fine-grained policies for file and process management.
  • Capsicum: A lightweight, capability-based security framework that allows developers to restrict the capabilities of running processes, minimizing the impact of potential vulnerabilities.

5.2 Linux Security Model:

Linux employs a range of security modules and practices:

  • SELinux (Security-Enhanced Linux): A set of kernel-level security enhancements that provide mandatory access controls. It defines policies that restrict how processes can interact with files and other processes.
  • AppArmor: A security module that restricts programs’ capabilities with per-program profiles. Unlike SELinux, it uses path-based policies.
  • Namespaces and cgroups: Used for containerization, allowing process isolation and resource control. These are the basis for technologies like Docker and Kubernetes.

6. System Configuration and Management

6.1 FreeBSD Configuration:

FreeBSD uses configuration files located in /etc and other directories for system management. The rc.conf file is central for system startup and service configuration. The sysctl command is used for kernel parameter adjustments.

6.2 Linux Configuration:

Linux configurations are distributed across various directories like /etc for system-wide settings and /proc for kernel parameters. Systemd is the most common init system, managing services and their dependencies. The sysctl command is also used in Linux for kernel parameter adjustments.
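
The sysctl workflow is nearly identical on both systems; only the parameter names and persistence files differ. A small, hedged illustration:

# Read a single kernel parameter
sysctl kern.hostname             # FreeBSD
sysctl kernel.hostname           # Linux

# Change a parameter at runtime (here: enable IP forwarding)
sysctl net.inet.ip.forwarding=1      # FreeBSD
sysctl -w net.ipv4.ip_forward=1      # Linux

# Persist the change across reboots
echo 'net.inet.ip.forwarding=1' >> /etc/sysctl.conf     # FreeBSD
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf      # Linux (or a drop-in under /etc/sysctl.d/)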

7. User Management

7.1 FreeBSD:

FreeBSD manages users and groups through /etc/passwd, /etc/group, and /etc/master.passwd. User and group management commands include adduser, rmuser, and pw (for example pw useradd and pw groupadd).

7.2 Linux:

Linux also uses /etc/passwd and /etc/group for user management. User and group management commands include useradd, usermod, groupadd, and passwd.
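
To illustrate the difference in tooling, here is a hedged example of creating the same group and user on each system (the names are made up):

# FreeBSD: pw handles users and groups non-interactively; adduser is the interactive alternative
pw groupadd developers
pw useradd carol -g developers -m -s /bin/sh
passwd carol

# Linux: separate utilities per task
groupadd developers
useradd -m -g developers -s /bin/bash carol
passwd carol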

8. Network Security

8.1 FreeBSD:

FreeBSD offers robust network security features, including:

  • IPFW: A firewall and packet filtering system integrated into the kernel.
  • PF (Packet Filter): A powerful and flexible packet filter that provides firewall functionality and network address translation (NAT).

8.2 Linux:

Linux provides several options for network security:

  • iptables: The traditional firewall utility for configuring packet filtering rules.
  • nftables: The successor to iptables, offering a more streamlined and flexible approach to packet filtering and NAT.
  • firewalld: A front-end for iptables and nftables, providing dynamic firewall management. A minimal PF and nftables rule comparison follows this list.
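
As a minimal illustration of the different rule syntaxes, the sketch below allows inbound SSH while dropping other inbound traffic; the interface name and rule set are assumptions, not a production policy:

# FreeBSD, /etc/pf.conf (load with: pfctl -f /etc/pf.conf)
block in all
pass in on em0 proto tcp from any to any port 22 keep state

# Linux, nftables commands expressing the same idea
nft add table inet filter
nft 'add chain inet filter input { type filter hook input priority 0; policy drop; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input tcp dport 22 accept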

9. Backup and Recovery

9.1 FreeBSD:

FreeBSD supports several backup and recovery tools:

  • dump/restore: Traditional utilities for file system backups.
  • rsync: For incremental backups and synchronization.
  • zfs snapshots: ZFS filesystem features allow creating snapshots for backup and recovery.

9.2 Linux:

Linux offers a range of backup and recovery tools:

  • tar: A traditional tool for archiving files.
  • rsync: For incremental backups and synchronization.
  • LVM snapshots: Logical Volume Manager features provide snapshot capabilities. A brief example of these backup tools follows this list.
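
A brief, hedged illustration of these tools in action (dataset, volume and host names are invented):

# FreeBSD/ZFS: take an instant snapshot, then replicate it to another host
zfs snapshot tank/home@2024-07-22
zfs send tank/home@2024-07-22 | ssh backup01 zfs receive backup/home

# Linux/LVM: snapshot a logical volume, mount it, then archive it
lvcreate --size 5G --snapshot --name home-snap /dev/vg0/home
mount /dev/vg0/home-snap /mnt/home-snap
tar -czf /backups/home-snap.tar.gz -C /mnt/home-snap .

# Either system: incremental synchronisation with rsync
rsync -a --delete /home/ backup01:/backups/home/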

10. Conclusion

Both FreeBSD and Linux offer robust permission settings and security features, each with its strengths and specific implementations. FreeBSD provides a comprehensive suite of security features, including jails and Capsicum, while Linux offers a variety of security modules like SELinux and AppArmor. Understanding these differences is crucial for system administrators to effectively manage and secure their systems. By leveraging the unique features of each operating system, administrators can enhance their systems’ security and maintain a robust and reliable computing environment.

The post Understanding Permission Setting and Security on FreeBSD vs. Linux appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

22 July, 2024 02:09PM

hackergotchi for GreenboneOS

GreenboneOS

“Only 62 minutes”: From security provider to security problem

“Your company can be ruined in just 62 minutes”: This is how the security provider CrowdStrike advertises. Now the US manufacturer has itself caused an estimated multi-billion-dollar loss due to a faulty product update – at breakneck speed.

On 19 July at 04:09 (UTC), the security specialist CrowdStrike distributed a driver update for its Falcon software for Windows PCs and servers. Just 159 minutes later, at 06:48 UTC, Google Compute Engine reported the problem, which “only” affected certain Windows computers and servers running CrowdStrike Falcon software.

Almost five per cent of global air traffic was unceremoniously paralysed as a result, and 5,000 flights had to be cancelled. Supermarkets from Germany to New Zealand had to close because the checkout systems failed. A third of all Japanese McDonald’s branches closed their doors at short notice. Among the US authorities affected were the Department of Homeland Security, NASA, the Federal Trade Commission, the National Nuclear Security Administration and the Department of Justice. In the UK, even most doctors’ surgeries were affected.

The problem

The incident points to a burning problem: the centralisation of services and the increasing networking of the IT systems behind them make us vulnerable. If one service provider in the digital supply chain is affected, the entire chain can break, leading to large-scale outages. As a result, the Microsoft Azure cloud was also affected, with thousands of virtual servers unsuccessfully attempting to restart. Prominent people affected reacted quite clearly: Elon Musk, for example, wants to ban CrowdStrike products from all his systems.

More alarming, however, is the fact that security software is being used in areas for which it is not intended. Although the manufacturer advertises quite drastically about the threat posed by third parties, it accepts no responsibility for the problems that its own products can cause and their consequential damage. CrowdStrike expressly advises against using the solutions in critical areas in its terms and conditions. It literally states – and in capital letters: “THE OFFERINGS AND CROWDSTRIKE TOOLS ARE NOT FAULT-TOLERANT AND ARE NOT DESIGNED OR INTENDED FOR USE IN ANY HAZARDOUS ENVIRONMENT.”

The question of liability

Not suitable for critical infrastructures, but often used there: How can this happen? Negligent errors with major damage, but no liability on the part of the manufacturer: How can this be?

In the context of open source, it is often incorrectly argued that the question of liability in the event of malfunctions and risks is unresolved, even though most manufacturers who place open source on the market with their products do provide a warranty.

We can do a lot to make things better by tackling the problems caused by poor quality and dependence on individual large manufacturers. Of course, an open source supply chain is viewed critically, and that’s a good thing. But it has clear advantages over a proprietary supply chain, and the incident is a striking example of this. With appropriate toolchains, it is easy to prevent an open source company from rolling out a scheduled update in which basic components simply do not work, and in practice this is what happens.

The consequences

So what can we learn from this disaster and what are the next steps to take? Here are some suggestions:

  1. Improve quality: The best lever to put pressure on manufacturers is to increase the motivation for quality via stricter liability. The Cyber Resilience Act (CRA) offers initial approaches here.
  2. Safety first: In this case, this rule relates primarily to the technical approach to product development. Deeply intervening in customer systems is controversial in terms of security. Many customers reject this, but those affected obviously do not (yet); they have now suffered the damage. There are alternatives, which are also based on open source.
  3. Use software only as intended: If a manufacturer advises against use in a critical environment, then this is not just a phrase in the general terms and conditions, but a reason for exclusion.
  4. Centralisation with a sense of proportion: There are advantages and disadvantages to centralising the digital supply chain that need to be weighed up against each other. When dependency meets a lack of trustworthiness, risks and damage arise. User authorities and companies then stand helplessly in the queue, without alternatives and without their own sovereignty.

22 July, 2024 11:25AM by Elmar Geese

hackergotchi for Deepin

Deepin

The Story Behind Mesa LLVMpipe ORCJIT Upstreaming

Recently, the Mesa open-source graphics driver merged the Merge Request (MR) for the llvmpipe ORCJIT backend, adding support for the riscv64 architecture.   What is LLVMpipe? LLVMpipe is a software renderer in the Mesa driver that does not use GPU hardware. Instead, it uses LLVM's JIT compiler to dynamically convert the graphics-related code to be rendered into rasterized data for display. It offers better performance compared to softpipe. Closed-source drivers have long been a major obstacle to the desktop ecosystem of the riscv64 architecture, causing most riscv64 development boards' built-in GPUs to be partially or completely unusable. Consequently, desktop distributions ...Read more

22 July, 2024 02:38AM by aida

July 19, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What is Open RAN?

You may have heard of the term Open Radio Access Networks (RAN) widely used in the telecom industry in recent years. In this blog, we are going to explain what Open RAN is, why it represents an important technology transformation, and how it will impact the telecom ecosystem. It is the first part of a series of blogs that will discuss this popular topic.

Mobile telecommunication networks

Understanding the importance of Open RAN requires some insight into the foundation it’s deployed on, so let’s start with a quick definition of a mobile network. In simplest terms, mobile networks are a connectivity solution that provides wireless communication capabilities to devices. This could be your phone, tablet, or laptop, but also any type of machines, devices and IoT products with communications capabilities.

Figure 1. Simple illustration of a mobile network.

A mobile network is important not just because it provides voice calls and Internet connectivity to your phone or laptop, but also various types of data services to a vast range of devices, machines, people and businesses. Whether those devices are on the move or not, the network is designed to provide them with seamless connectivity. This is possible thanks to the radio access network (RAN) infrastructure deployed over a wide area, such as an entire country, and a core network located centrally at a data centre acting as the “brain” of the entire mobile network. The core network performs control operations on user data traffic to ensure that services that users have subscribed to receive the agreed quality of service (QoS) levels.

Radio access networks

Now let’s dive in a little bit more into RAN, which is simply a collection of radio towers and data processing elements. At a high level, RAN is the bridge between mobile devices and the core network, providing information exchange between a user’s mobile device and data services hosted elsewhere. It delivers user requests for services in the uplink to the mobile network as well as content uploaded by users, and content downloaded by user devices, such as video streams.

In the “uplink” from mobile devices to data networks, radio waves carry the information and signals sent by devices to the network. These radio signals are received by radio equipment hosted on radio towers, and then converted into digital signals in the RAN. Information in these signals is relayed to the mobile core network in data packets. The core then forwards these packets to services running over the Internet or on data networks. 

In the opposite direction, which is the “downlink” from data networks to mobile devices, the reverse process takes place: data from services on the Internet or other data networks are processed by the core network and the RAN, and then delivered to mobile devices.

Disaggregated RAN

Traditionally, a RAN is built with appliance-like purpose-built hardware. Part of the RAN is installed on radio towers as radio units and the rest is deployed at data centres to perform central data processing operations. The RAN hardware at data centres runs the entire telecommunications software stack of the RAN as a single processing unit, performing all processing except for the lowest level radio frequency (RF) operations carried out at radio units. In LTE/4G, the processing unit of a RAN is called a baseband unit (BBU), and in 5G, it is called a gNodeB.

Figure 2. Disaggregation of RAN: from traditional RAN running the complete protocol stack into one where the stack is disaggregated and run across edge sites.

The latest technology transition in the RAN is to disaggregate the software stack of a gNodeB into separate components. One can think of this transition as similar to the migration in modern software systems from monolithic software architectures to more composable ones consisting of microservices, each of which performs a set of correlated services. The idea behind a disaggregated RAN is to achieve more modularity, offering several benefits to telecom operators and the telco ecosystem overall.

Benefits of a disaggregated RAN

The first benefit of a disaggregated RAN is that it allows parts of the software stack to be relocated from the central cloud towards the radio towers and deployed at edge clouds. This makes it possible for those parts of the software stack that need quick interaction with radios to be located closer to the radios, giving them a shorter time to send and receive data.

If an operator were to deploy a complete gNodeB radio stack replicated at each and every radio site, it would be an extremely costly and inefficient deployment strategy. It would also be impractical to maintain and operate such a large number of gNodeBs over a network of thousands of radio sites located at remote, hard-to-reach locations dispersed over a wide geographical area.

Instead of a complete gNodeB software stack, a disaggregated RAN allows only the parts of the stack that can be pushed towards the edge to be deployed at edge clouds and shared among multiple closely located radio sites. This deployment structure strikes a balance between achieving higher performance by running some of the radio stack at the edge, and lowering CAPEX and OPEX by sharing common parts of the stack across groups of radio sites in a hierarchical architecture.

Another benefit is that because a disaggregated RAN stack is implemented as a set of separate units, multiple vendors can offer their innovative solutions specifically for different parts of the software stack. This incubates the RAN technology ecosystem with new players entering the market and generates more competition. By creating a larger marketplace to source equipment from, operators can reduce their CAPEX in RAN deployments, thanks to higher competition leading to lower equipment prices. 

Finally, separate parts of the radio stack can be updated and upgraded independently without having to touch the entire software stack each time a specific part of it has to change. This modularity lowers the OPEX in equipment updates and upgrades, and makes it possible to undertake more granular operations carried out at different parts of the RAN independent from others.

Open RAN

The benefits of a disaggregated RAN can only be realised if the industry works in harmony to deliver the different components that, as a whole, build a complete RAN. This is achievable through standardisation, where components are fully interoperable. Just as in any system where separate parts may be sourced from different vendors, agreeing on standard interfaces between disaggregated RAN components is the vital cornerstone of interoperability and smooth system integration.

Figure 3. Simplified diagram of Open RAN architecture.

Open RAN was born with a simple objective: achieve disaggregated RAN with open standard interfaces. The Third Generation Partnership Project (3GPP), which is the industry standardisation body that publishes mobile telecommunication network standards, defined different options as to how the radio software stack can be disaggregated into separate components. This was then further pioneered by the O-RAN Alliance, defining standard and open interfaces for the disaggregated RAN components.

With open standard interfaces, it is now possible for vendors to implement different parts of the radio stack as either hardware or software, with the assurance that products from different vendors can talk to each other as building blocks of a complete RAN system.

Summary

Operators look for new ways to achieve cost-efficiency in their infrastructure, where the RAN accounts for a large portion of their deployment and operational costs. Open RAN offers a new architecture targeted at delivering the CAPEX and OPEX reductions that operators seek. With a disaggregated architecture, Open RAN incubates a larger RAN vendor ecosystem, lowering equipment costs in the long term by bringing more competition to the market with more innovative products offered for different parts of the radio software protocol stack. A disaggregated RAN will also bring new capabilities to deploy, update, upgrade, and maintain RANs more effectively, lowering OPEX. The flexibility of deploying parts of the RAN at shared edge cloud locations brings performance benefits to modern edge computing services that require quick interactions.

In the next part of this blog series, we will talk about how network functions virtualisation and cloud-native software operation principles complement and further enhance the benefits of Open RAN for operators and the telecom industry.

Contact us

Get in touch with us for your telco deployment needs and your transition to open source in mobile networking. Canonical provides a full stack for your telecom infrastructure. To learn more about our telco solutions, visit our webpage at ubuntu.com/telco.

Further reading

What is a telco cloud?

Bringing automation to telco edge clouds at scale

Fast and reliable telco edge clouds with Intel FlexRAN and Real-time Ubuntu for 5G URLLC scenarios

19 July, 2024 09:30AM

Ubuntu Blog: Charmed OpenSearch Beta is here. Try it out now!

Canonical’s Data and AI portfolio is growing with a new data and analytics tool. We proudly announce that Charmed OpenSearch version 2.14 is now available in Beta. OpenSearch® is a comprehensive search engine and analytics suite that thousands of organisations use for a wide range of use cases across search, security, and AI/ML.

From today, data engineers, scientists, analysts, machine learning engineers, and AI enthusiasts can take the Charmed OpenSearch beta for a test drive and share their feedback directly with us through our beta programme.

Join our beta program

What is OpenSearch®?

OpenSearch is a thriving open source project and community with over 600 million project downloads, over 300 GitHub contributors and over 9,100 GitHub stars. It is used for multiple use cases, such as search, observability, event management, security analytics, visualisation, AI, machine learning and more. Let’s take a brief look at how companies are taking advantage of OpenSearch’s robust engine capabilities.

Search

Using OpenSearch as a search engine can be a powerful solution for various applications, ranging from e-commerce sites to large-scale enterprise data searches. One example would be customers needing to quickly find products in a large inventory with various data attributes like brand, price, and multiple categories. You can use OpenSearch for full-text search, faceted search with filters, autocomplete, suggestions, and personalisation.

Observability

Using OpenSearch as an observability tool is an excellent choice for monitoring, troubleshooting, and gaining insights into complex systems. You can use metrics, logs and traces for OpenSearch to understand a system’s health, performance and reliability.

Security Analytics

You can leverage OpenSearch to build a robust Security Information and Event Management (SIEM) system. This system enables real-time analysis of security alerts generated by hardware and software, network infrastructure, and applications. It can be used for data and log aggregation, alerting, threat detection, compliance reporting, etc.

Visualisation

OpenSearch has a Dashboards feature that can be used to create and customise visualisation tools to monitor and display data insights in real time. These visualisations can take the form of charts, graphs, data filters, and customised panels.

Machine Learning

OpenSearch’s machine learning capabilities can be used for advanced data analysis, anomaly detection, predictive analytics, and improving search relevance. It can also be used as a vector database for model embeddings, and is a good tool for Retrieval Augmented Generation (RAG) in large language model (LLM) projects.

What is Charmed OpenSearch?

OpenSearch is an all-in-one search engine, analytics and machine learning tool for multiple use cases. However, working with OpenSearch in production-grade use cases that process vast amounts of data and extensive data infrastructure can be challenging. Automating the deployment, provisioning, management, and orchestration of production OpenSearch data clusters can also be highly complex.

What if there was an easier way to get more out of OpenSearch through an operator – called Charmed OpenSearch? An operator is an application containing code that takes over automated application management tasks. Picture it as your technological virtuoso, orchestrating a grand performance that includes high availability, automated deployment of single and multiple clusters, robust security measures like transport layer security (TLS), initial user management, plug-ins and extension features, automated upgrades, observability of OpenSearch clusters, and even backup and restore operations. Charmed OpenSearch builds on the upstream OpenSearch project with this automation and can be deployed on any private, public or hybrid cloud.

With a primary mission of simplifying the OpenSearch experience, Charmed OpenSearch is your backstage pass to a world where OpenSearch isn’t just a search and analytics suite – it’s a seamlessly operated search engine and analytics powerhouse.

Try Charmed OpenSearch Beta today.

Are you a data engineer, analyst, scientist, or machine learning enthusiast interested in trying OpenSearch? Charmed OpenSearch can be:

  • Deployed on your local machine
  • Used for multiple use cases: observability, SIEM, visualisation and GenAI
  • Improved with your feedback.

To get started, you must run Ubuntu OS, meet the minimum system requirements, and be familiar with OpenSearch concepts.
Simple deployment steps for Charmed OpenSearch  in your Ubuntu VM:

juju deploy opensearch --channel 2/beta
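
If you would like a slightly fuller picture, the sketch below assumes a Juju controller has already been bootstrapped (for example on LXD); the model name is arbitrary and only the deploy command above comes from the official steps:

juju add-model opensearch-demo
juju deploy opensearch --channel 2/beta
juju status --watch 2s     # follow the deployment until the unit settles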

Learn to use Charmed OpenSearch:

Access the tutorial here

Share your feedback

Charmed OpenSearch is an open-source project that is growing because of the care, time and feedback that our community gives. This beta release is no exception, so if you have any feedback or questions, please feel free to contact us.

Give us your feedback

Further Reading

Trademark Notice

OpenSearch is a registered trademark of Amazon Web Services. Other trademarks are the property of their respective owners. Charmed OpenSearch is not sponsored, endorsed, or affiliated with Amazon Web Services.


19 July, 2024 08:49AM

hackergotchi for Purism PureOS

Purism PureOS

Abside and Purism Partner to Deliver Secure Mobile Solution for U.S. Government and NATO Countries

Acton, MA, USA – July 19th, 2024 – Abside, a leading provider of US made secure networking solutions, and Purism, a secure computing and US phone manufacturer, today announced a collaboration to deliver a secure mobile solution for the U.S. government and NATO countries. This collaboration brings together Abside’s N79 5G private network solution, designed […]

The post Abside and Purism Partner to Deliver Secure Mobile Solution for U.S. Government and NATO Countries appeared first on Purism.

19 July, 2024 12:09AM by Purism

Purpose-Built Smartphones for Government vs. Commercial Off-The-Shelf Devices

The rise of the smartphone as both an indispensable consumer electronic device and an omnipresent tool for performing tasks at work has led to an inevitable question: Should enterprise and government centralized IT organizations deploy Commercial Off-the-Shelf (COTS) devices such as the iPhone, Google Pixel, Samsung Galaxy and attempt to secure them, or should organizations […]

The post Purpose-Built Smartphones for Government vs. Commercial Off-The-Shelf Devices appeared first on Purism.

19 July, 2024 12:03AM by Randy Siegel

July 18, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Let’s meet at AI4 and talk about open source and AI tooling

Date: 12 – 14 August 2024

Booth: 426

Autumn is a season full of events for the AI/ML industry. We are starting early this year, before the summer ends, and will be in Las Vegas to attend AI4 2024. This is North America’s largest industry event, and it attracts some of the biggest names in the AI/ML space to share deep discussions about initial exploration for AI, machine learning operations (MLOps) and AI at the edge. Join Canonical at our very first AI4, where you’ll be able to get in-person recommendations for innovating at speed with open source AI.

Canonical is the publisher of Ubuntu, the leading Linux distribution that has been around for 20 years. Our mission to provide secure open source software extends well into the machine learning space. We have been providing solutions since the earliest days of AI/ML, for example our official distribution of Kubeflow, an end-to-end MLOps platform that helps organisations productise their ML workloads. Nowadays, Canonical’s MLOps portfolio includes a suite of tools that help you run the entire machine learning lifecycle at all scales, from AI workstations to the cloud to edge devices. Let’s meet to talk more about it!

Engaging with industry leaders, open source users, and organisations looking to scale their ML projects is a priority for us. We’re excited to connect with attendees at AI4 to meet, share a cup of coffee, and give you our insights and lessons in this vibrant ecosystem.

Innovate at speed with open source AI

At Canonical, ever since we launched our cloud-native apps portfolio, our aim has been to enable organisations to run their Data & AI projects with one integrated stack, on any CNCF-conformant Kubernetes distribution and in any environment, whether it is on-prem or on any major public cloud. As you might know already, we enable you to run your projects at all scales:

  • Data science stack for beginners: Many people are trying to upskill in data science and machine learning these days. However, beginners spend more time setting up their environment than developing their capabilities in this field. Data science stack (DSS) is an easy-to-deploy solution that can run on any workstation. It gives access to an environment to quickly get started with data science or machine learning. You can read more about our Data Science Stack here.
  • Charmed Kubeflow for ML at scale: Kubeflow is an end-to-end MLOps platform used to develop and deploy models at scale. It is a cloud-native application that runs on any CNCF-conformant Kubernetes, including MicroK8s, AKS, EKS or GKE. It integrates with leading open source tools such as MLflow, Spark or OpenSearch. Try it out now!
  • Ubuntu Core and KServe for Edge AI: Devices are often a vulnerable point, and running a secure OS is crucial in order to protect all the artefacts, regardless of the architecture. Open source tools such as KServe enable AI practitioners to deploy their models on any edge device.

During AI4, we have prepared a series of demos to show you how open source tooling can help you with your data & AI projects. You should join our booth if you:

  • Have questions about AI, MLOps, Data and the role of open source
  • Need help with defining your MLOps architecture
  • Are looking for secure open source software for your Data & AI initiatives
  • Would like to learn more about Canonical and our solutions

Get your infrastructure ready for GenAI

In 2023, the Linux Foundation published a report which found that almost half of surveyed organisations prefer open source tooling for their GenAI projects. Despite this rapid adoption, enterprises are still facing challenges related to security, transparency, accessibility and costs. While initial experimentation seems easy thanks to the large number of solutions available on the market, taking GenAI projects to production obliges organisations to upgrade their AI infrastructure and ensure data and model protection.

Join my talk, “GenAI beyond the hype: from experimentation to production” at AI4 on Wednesday, August 14, 2024 from 11:35 AM. During the presentation, I will guide you through how to move GenAI projects beyond experimentation using open source tooling such as Kubeflow or OpenSearch. We will explore the key considerations and common pitfalls, as well as challenges that organisations face when starting a new initiative. Finally, we will analyse ready-made ML models and scenarios to determine when they are more suitable than building your own solution.

At the end of it, you will be better equipped to run your GenAI projects in production, using secure open source tooling. 

Join us at Booth 426 

If you are attending AI4 2024 in Las Vegas, US between 12- 14 August, make sure to visit booth 426. Our team of open source experts will be available throughout the day to answer all your questions about AI/ML and beyond.

You can already book a meeting with our team member [SDR name] using the link below.

18 July, 2024 12:14PM

hackergotchi for Deepin

Deepin

New Progress! deepin M1 Project Updated to deepin RC2 Version

Last July, we successfully made deepin initially compatible with Apple M1. This year, as deepin V23 beta enters the RC2 version, the deepin M1 project naturally follows with updates. This adaptation work not only upgrades the system environment version but also updates some underlying system component versions, optimizes the packaging process of various project modules, and adds timers that build content weekly, in part so that developers can experience it firsthand. Now, let's dive into the specific updates in this release. 《deepin adapts to Apple M1, what have we experienced? (Part 1)》 《deepin adapts to Apple M1, what have we experienced? (Part ...Read more

18 July, 2024 05:57AM by aida

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.1.3-16 Released

This release of Clonezilla live (3.1.3-16) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.1.3-11

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2024/Jul/15).
  • Linux kernel was updated to 6.9.9-1.
  • Partclone was updated to 0.3.32.
  • ocs-resize-part: add option "-f" for fatresize in batch mode.
  • Replaced the command to clean super blocks of fakeraid. Now mdadm is used instead of dmraid, since dmraid is no longer maintained.

BUG FIXES

18 July, 2024 12:35AM by Steven Shiau

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: E308 Trocas & Baldrocas

This week Miguel has been playing around with Kubuntu, failed automations in Home Assistant, and cloned a disk with dd, with unexpected consequences. Diogo is reading an exciting new book (Privacidade 404), which he recommends to everyone; we recanted mistakes made in previous episodes; we painstakingly dissected Mozilla's advertising tracking in Firefox; we ripped into Apple and Google; and we even resurrected the much-missed Cândida Branca Flor, thanks to a glaring security flaw in OpenSSH.

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT Licence. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing to allow other types of use; contact us for validation and authorisation.

18 July, 2024 12:00AM

July 17, 2024

hackergotchi for GreenboneOS

GreenboneOS

June 2024 Threat Tracking: Cybersecurity On The Edge

Before this year, 3,000 CVEs (Common Vulnerabilities and Exposures) had never been published in a single month. 2024 has been a string of record-breaking months for vulnerability disclosure; over 5,000 CVEs were published in May 2024. While June offered a brief respite from the storm, some may be questioning whether delivering a secure software product is simply impossible. Even vendors with the most capital and market share – Apple, Google, Microsoft – and vendors of enterprise-grade network and security appliances – Cisco, Citrix, Fortinet, Ivanti, Juniper, PaloAlto – have all brought perpetually insecure products to market. What insurmountable hurdles could be preventing stronger application security? Are secure software products truly an impossibility?

One possible truth is: being first to market with new features is considered paramount to gaining competitive edge, stealing priority from security. Other suggestions are more conspiratorial. The Cyber Resilience Act [1][2], set to be enforced in late 2027, may create more accountability, but is still a long way down the road. Cyber defenders need to stay vigilant, implement cybersecurity best practices, be proactive about detecting security gaps, and remediate them in a timely fashion; easy to say, but a monstrous feat indeed.

In this month’s edition of Greenbone’s Threat Tracking blog post, we will review the culprits behind a recent trend – increased exploitation of edge network devices.

Edge Devices Are Hot Targets For Cyber Attack

Cyber threat actors are increasingly exploiting vulnerabilities in network perimeter services and devices. The network perimeter refers to the boundary that separates an organization’s internal network from external networks such as the internet, and it is typically home to critical security infrastructure such as VPNs, firewalls, and edge computing services. This cluster of services on the network perimeter is often called the Demilitarized Zone, or DMZ. Perimeter services serve as an ideal initial access point into a network, making them a high-value target for cyber attacks.

Greenbone’s Threat Tracker posts have previously covered numerous edge culprits including Citrix Netscaler (CitrixBleed), Cisco XE, Fortinet’s FortiOS, Ivanti ConnectSecure, PaloAlto PAN-OS and Juniper Junos. Let’s review new threats that emerged this past month, June 2024.

Chinese APT Campaign Attacking FortiGate Systems

CVE-2022-42475 (CVSS 9.8 Critical), a severe remote code execution vulnerability impacting FortiGate network security appliances, has been implicated by the Dutch Military Intelligence and Security Service (MIVD) in a new cyber espionage campaign targeting Western governments, international organizations, and the defense industry. The MIVD disclosed details including attribution to a Chinese state hacking group. The attacks installed a new variant of an advanced stealthy malware called CoatHanger, specifically designed for FortiOS, which persists even after reboots and firmware updates. According to CISA, CVE-2022-42475 was previously used by nation-state threat actors in a late-2023 campaign. More than 20,000 FortiGate VPN instances have been infected in the most recent campaign.

One obvious takeaway here is that an ounce of prevention is worth a pound of cure. These initial access attacks leveraged a vulnerability that was over a year old, and thus were preventable. Cybersecurity best practices dictate that organizations should deploy regular vulnerability scanning and take action to mitigate discovered threats. The Greenbone Enterprise feed includes detection for CVE-2022-42475.

P2Pinfect Is Ransoming And Mining Unpatched Redis Servers

P2Pinfect, a peer-to-peer (P2P) worm targeting Redis servers, has recently been modified to deploy ransomware and cryptocurrency miners, as observed by Cado Security. First detected in July 2023, P2Pinfect is a sophisticated Rust-based malware with worm capabilities, meaning that recent attacks exploiting CVE-2022-0543 (CVSS 10 Critical) against unpatched Redis servers can automatically spread to other vulnerable servers.

Since CVE-2022-0543 was published in February 2022, organizations employing compliant vulnerability management should already be impervious to the recent P2Pinfect ransomware attacks. Within days of CVE-2022-0543 being published, Greenbone issued multiple Vulnerability Tests (VTs) [1][2][3][4][5] to the Community Edition feed that identify vulnerable Redis instances. This means that all Greenbone users globally can be alerted and protect themselves if this vulnerability exists in their infrastructure.

Check Point Quantum Security Gateways Actively Exploited

The Canadian Centre for Cyber Security issued an alert due to observed active exploitation of CVE-2024-24919 (CVSS 8.6 High), which has also been added to CISA’s catalog of known exploited vulnerabilities (KEV). Both entities have urged all affected organizations to patch their systems immediately. The vulnerability may allow an attacker to access information on public facing Check Point Gateways with IPSec VPN, Remote Access VPN, or Mobile Access enabled and can also allow lateral movement via unauthorized domain admin privileges on a victim’s network.

This issue affects several product lines from Check Point, including CloudGuard Network, Quantum Scalable Chassis, Quantum Security Gateways, and Quantum Spark Appliances. Check Point has issued instructions for applying a hotfix to mitigate CVE-2024-24919. “Hotfixes” are software updates issued outside of the vendor’s scheduled update cycle to specifically address an urgent issue.

CVE-2024-24919 was just released on May 30th, 2024, but very quickly became part of an attack campaign, further highlighting a trend of diminishing Time To Exploit (TTE). Greenbone added active check and passive banner detection vulnerability tests (VTs) to identify CVE-2024-24919 within days of its publication, allowing defenders to swiftly take proactive security measures.

Critical Patches Issued For Juniper Networks Products

In a hot month for Juniper Networks, the company released a security bulletin (JSA82681) addressing multiple vulnerabilities in Juniper Secure Analytics optional applications, and another new critical bug was disclosed: CVE-2024-2973. On top of these issues, Juniper’s Session Smart Router (SSR) was outed for having known default credentials [CWE-1392] for its remote SSH login. CVE-2024-2973 (CVSS 10 Critical) is an authentication bypass vulnerability in Session Smart Router (SSR), Session Smart Conductor, and WAN Assurance Router products running in high-availability redundant configurations, and it allows an attacker to take full control of an affected device.

The Greenbone Enterprise vulnerability test feed provides detection for CVE-2024-2973, and remediation information is provided by Juniper in their security advisory (JSA83126). Finally, Greenbone includes an active check to detect insecure configuration of Session Smart Router (SSR) by verifying whether it is possible to log in via SSH with known default credentials.

Progress Telerik Report Server Actively Exploited

Last month we discussed how one of Greenbone’s own security researchers identified and participated in the responsible disclosure of CVE-2024-4837, impacting Progress Software’s Telerik Report Server. This month, another vulnerability in the same product was added to CISA’s actively exploited catalog. Also published in May 2024, CVE-2024-4358 (CVSS 9.8 Critical) is an Authentication Bypass by Spoofing vulnerability [CWE-290] that allows an attacker to obtain unauthorized access. Additional information, including temporary mitigation workaround instructions, is available from the vendor’s official security advisory.

Also in June 2024, Progress Software’s MOVEit Transfer enterprise file transfer tool was again in the hot seat with a new critical severity vulnerability, CVE-2024-5806, rated CVSS 9.1 Critical. MOVEit was responsible for some of the biggest data breaches of 2023, affecting over 2,000 organizations.

Greenbone issued active check and version detection vulnerability tests (VTs) to detect CVE-2024-4358 within days of its publication, and a VT to detect CVE-2024-5806 within hours, allowing defenders to swiftly mitigate.

Summary

Even tech giants struggle to deliver software free from vulnerabilities, underscoring the need for vigilance in securing enterprise IT infrastructure – threats demand continuous visibility and swift action. The global landscape is rife with attacks against perimeter network services and devices, as attackers large and small, sophisticated and opportunistic, seek to gain a foothold on an organization’s network.

17 July, 2024 12:34PM by Joseph Lee

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2024.07~dev0 with CLI Installer’s home encryption and Midori

This is an update of the Sparky semi-rolling iso images (MinimalGUI and MinimalCLI only) of the Debian testing line, which provides 2 notable changes: 1. Sparky CLI Installer with home partition encryption: The Sparky CLI Installer got a new option which lets you encrypt and secure your separate home partition. If you choose this option, Plymouth will be disabled, even if it is installed…

Source

17 July, 2024 10:43AM by pavroo

hackergotchi for ARMBIAN

ARMBIAN

Armbian Leaflet #27

Dear Armbians,

This week, we bring you a roundup of exciting developments and updates from the Armbian community. From kernel enhancements to device-specific improvements, there’s plenty to dive into. Plus, we announce the winners of the Radxa Rock 5 ITX giveaway! Read on for all the details.

Kernel and Device Tree Updates:

  • sunxi-6.1: Fixed a kernel loading issue by reverting commit 75317a0.
  • linux-rk35xx-vendor.config: Added support for RTW89_8852be module.
  • linux-rk35xx-vendor.config: Updated kernel configuration settings.
  • arm64: dts: rockchip: Introduced support for radxa-e52c board.

Documentation and Configuration Enhancements:

  • firstlogin: Implemented quoting for values to handle spaces (#6942).
  • Desktops: Corrected missing packages in desktop environments.
  • wifi: Included rtl8852bs driver to support entire device families.
  • minor fixes (#446): Addressed various minor issues for documentation improvements.
  • networking: Enhanced networking documentation introducing NetPlan as common configuration point.
  • Add DNS and route to network config (#442): Improved network configuration guide by adding DNS and route setup instructions.

Device-Specific Updates:

  • rockchip-rk3588 / edge: Removed redundant patch and updated to latest release.
  • mainline-kernel: Updated to version 6.10-rc7 for improved compatibility and features.
  • thinkpad-x13s: Updated to jhovold’s work-in-progress branch for sc8280xp-6.10-rc7.
  • odroidm1: Ensured function names consistency and updated to u-boot v2024.07.
  • u-boot: Embedded armbian artifact version into CONFIG_LOCALVERSION for local configuration.
  • Bananapi M5: Upgraded u-boot to final release version v2024.07.
  • radxa-e52c: Added comprehensive support for the new radxa-e52c board integration.

Read more: https://github.com/armbian/build/releases

Radxa Rock 5 ITX Giveaway Winners:

Congratulations to the winners!

  • 1st prize: Alexandre B., France
  • 2nd prize: Jan-Hendrik W, Germany
  • 3rd prize: Steve H., USA

We wish them happy hacking with their new Radxa Rock 5 ITX boards.

That wraps up this edition of the Armbian newsletter. Stay tuned for more updates, tutorials, and community highlights in the coming weeks. Remember to join our forums and follow us on social media for the latest news and discussions.

Thank you for being part of the Armbian community!

The post Armbian Leaflet #27 first appeared on Armbian.

17 July, 2024 10:15AM by Didier Joomun

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Charmed PostgreSQL enters General Availability

An enterprise-grade PostgreSQL you can rely on

Jul 17, 2024: Today Canonical announced the release of Charmed PostgreSQL, an enterprise solution that helps you secure and automate the deployment, maintenance and upgrades of your PostgreSQL databases across private and public clouds.

PostgreSQL is an open source database management system. It has been successfully used for more than 3 decades across all IT sectors. Its maturity and its vibrant community consistently make it a first-choice DBMS among developers.

Canonical’s Charmed PostgreSQL builds on this foundation with additional capabilities to cover all your enterprise needs:

  • Up to 10 years of security maintenance and support 
  • Advanced automation to simplify the management of your PostgreSQL fleet
  • Canonical-managed PostgreSQL service to reduce the burden on your team
  • Simple and predictable pricing model

Up to 10 years of security maintenance and support

Security and support for Charmed PostgreSQL can be purchased with an Ubuntu Pro subscription. An Ubuntu Pro subscription provides up to 10 years of security maintenance not only for the PostgreSQL server but also for some of the most popular extensions such as PostGIS, pgVector and pgAudit. You can also opt for 24/7 or weekday support. Conveniently, the subscription is priced per node, not per app, so users can benefit from a complete portfolio of data solutions with a predictable pricing model.

Automate your PostgreSQL operations

When you opt for Canonical’s Charmed PostgreSQL, you benefit from our expert-crafted operator that lets you run PostgreSQL on Azure, AWS and Google Cloud, whether on top of VMs or their managed Kubernetes offerings. You can also run PostgreSQL in your private data centre on top of MAAS, MicroCloud, OpenStack, Kubernetes or VMware.

When using our operator, your PostgreSQL deployments will be:

  • Highly available with built-in replication and automatic failover.
  • Reliable with expert-crafted and self-healing state-machine based automation.
  • Upgraded automatically during your maintenance windows and with minimum downtime.
  • Disaster recovery ready with built-in backup, restore and cluster-to-cluster replication capabilities to ensure business continuity.
  • Hybrid and multi-cloud ready with automation that supports deployment within and across private and public clouds.
  • Shipped with a holistic observability and alerting solution based on Prometheus, Loki and Grafana.

Self-managed or Canonical managed, your choice

Canonical adapts to your needs and constraints. You can opt for self-managed PostgreSQL operations while relying on our support, consultancy and training to ramp-up your team and back them up when needed. You can also offload management tasks related to your PostgreSQL servers to our experts for more peace of mind.

Try Charmed PostgreSQL today

Our PostgreSQL operator is open-source. You can get started with Charmed PostgreSQL using the following documentation:
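
As a rough illustration of what getting started can look like, the sketch below assumes a bootstrapped Juju controller on a machine cloud such as LXD; the model name, channel and unit count are illustrative, so follow the documentation for the exact steps:

juju add-model postgresql-demo
juju deploy postgresql --channel 14/stable -n 3     # three units for high availability
juju status --relations                             # check deployment progress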

Contact us

For all your PostgreSQL inquiries, please fill in this form.

Further reading

17 July, 2024 09:28AM

The Fridge: Ubuntu 23.10 (Mantic Minotaur) reached End of Life on July 11, 2024

This is a follow-up to the End of Life warning sent earlier to confirm that as of July 11, 2024, Ubuntu 23.10 is no longer supported. No more package updates will be accepted to 23.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

Additionally, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 23.10.

The supported upgrade path from Ubuntu 23.10 is to Ubuntu 24.04 LTS.
Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/NobleUpgrades

Ubuntu 24.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004, Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Tue Jul 16 17:56:46 UTC 2024 by Graham Inggs on behalf of the Ubuntu Release Team

17 July, 2024 01:43AM

July 16, 2024

hackergotchi for Pardus

Pardus

Pardus 23.2 Released

Pardus 23.2, which continues to be developed by TÜBİTAK ULAKBİM, has been released. Pardus 23.2 is the second interim release of the Pardus 23 family.

You can download the newest Pardus right now, and you can review the release notes for detailed information about this release.

16 July, 2024 02:06PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The guide to cloud storage security for public sector

Cloud storage solutions can provide public sector organisations with a high degree of flexibility when it comes to their storage needs, whether in the public cloud or in their own private clouds. In our previous blog post we looked at the economic differences between these two approaches.

In this blog we will explore some of the security best practices when using cloud storage, so that you can ensure that sensitive data remains securely stored and compliance objectives are met. The points we cover will be relevant to both on-premise storage and storage solutions in a public cloud.

Risks associated with storing data

In the public sector, it is very common to handle sensitive datasets, such as Personally Identifiable Information (PII) about citizens, medical information, or digital evidence for crime investigation purposes.

It is important to ensure that these datasets are only ever accessible to users with the correct permissions, and that whenever they are transferred, this is done across a network that cannot be eavesdropped upon. Similarly, whenever stored “at rest”, the data should also be encrypted in case hardware is lost or stolen. Furthermore, being able to create point-in-time snapshots of datasets can ensure that even accidental changes do not cause the destruction of important data.

Cloud storage best practices

Access control mechanisms exist in most IT systems, and storage is no different. On-premise cloud storage solutions like Ceph and public cloud storage systems like S3 can integrate with organisation-wide authorisation systems like LDAP. This allows an organisation to centrally control access to storage resources and easily add or remove permissions when needed.

When using storage resources over external network connections, it is imperative to ensure that those communications are secure and that there is no possibility of a third party being able to intercept any information that has been transmitted. That goes for internal communications too: it is possible that a malicious actor could gain access to an internal network that previously may have been considered secure, so ensuring internal communication is always encrypted is paramount. Cloud storage systems are able to enforce the use of encrypted communications and reject insecure connections.
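As an illustration, on S3-compatible object storage one common way to enforce encrypted transport is a bucket policy that denies any request made without TLS. A minimal sketch using the AWS CLI (the bucket name is an example; Ceph’s RADOS Gateway exposes a compatible bucket-policy API, although supported condition keys vary):

cat > enforce-tls.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::example-sensitive-bucket",
      "arn:aws:s3:::example-sensitive-bucket/*"
    ],
    "Condition": { "Bool": { "aws:SecureTransport": "false" } }
  }]
}
EOF

# reject any request that arrives over plain HTTP
aws s3api put-bucket-policy --bucket example-sensitive-bucket --policy file://enforce-tls.json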

Sometimes it is necessary to prove that a dataset has not changed since it was stored: digital evidence used in a criminal trial, for example, must be accompanied by guarantees that there has been no tampering. Cloud storage systems address this with snapshots of a block volume or filesystem, or with object versioning, so that the original data can always be recalled. The same capability is also useful as a defence against ransomware attacks, allowing an organisation to roll back to a known good state.
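For object storage, enabling versioning is typically a single call. A minimal sketch with the AWS CLI (the bucket name is an example; on Ceph, block-volume snapshots such as "rbd snap create <pool>/<image>@<snapname>" serve a similar purpose):

# keep prior versions of every object so the original data can always be recalled
aws s3api put-bucket-versioning \
  --bucket example-evidence-bucket \
  --versioning-configuration Status=Enabled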

Once data has reached a storage system, there is another aspect to consider: what happens if the hardware used in that system is lost, recycled or stolen? Imagine a disk fails and needs to be sent back for warranty purposes – what if the data stored on it could be read? Could that lead to a breach of data security? Most modern storage systems allow for data to be encrypted before it is written to disk, so that data cannot be read by unauthorised parties.
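Default at-rest encryption can likewise be switched on per bucket. A minimal sketch (the bucket name and the choice of SSE-KMS are examples):

# encrypt all new objects at rest by default
aws s3api put-bucket-encryption \
  --bucket example-sensitive-bucket \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'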

Learn more

Both on-premise storage solutions (like Ceph) and public clouds have features that reduce the chances of unauthorised access or changes to the sensitive data stored in them. 

But which option is right for your organisation? Our recent whitepaper shows that an on-premise or cloud-adjacent approach can deliver significant savings while still providing the high availability and performance found in a public cloud. Find out more below:


Additional resources

16 July, 2024 08:32AM

hackergotchi for Deepin

Deepin

Linglong Project Upgraded, Linyaps Officially Launched!

On July 13th, at the deepin Meetup in Shanghai, we officially announced the brand-new name of our project - Linyaps (hereinafter referred to as "Linglong"). Additionally, we shared the news that the project signed a donation agreement with the Open Atom Open Source Foundation on May 24, 2024. Linglong has now become an official incubator project of the foundation.   Linyaps: A New Independent Package Management Toolset In the development of the Linux open-source software ecosystem, the software ecosystem has faced numerous challenges, especially regarding software compatibility and security issues. Packaging and distributing applications across different operating systems not only ...Read more

16 July, 2024 03:45AM by aida

hackergotchi for Purism PureOS

Purism PureOS

Purism Announcing MiMi Robot Crowdfunding Campaign

Crowdfund the Ideal Robotics Future We at Purism want to revolutionize robotics and are seeking your support to fund this research and development. We believe robotics, AI, and technology as a whole need a new direction from what Big Tech is building. We consistently deliver on revolutionary technology, and with your support will invest to […]

The post Purism Announcing MiMi Robot Crowdfunding Campaign appeared first on Purism.

16 July, 2024 03:00AM by Purism

hackergotchi for Qubes

Qubes

XSAs released on 2024-07-16

The Xen Project has released one or more Xen security advisories (XSAs). The security of Qubes OS is affected.

XSAs that DO affect the security of Qubes OS

The following XSAs do affect the security of Qubes OS:

  • XSA-458 (see QSB-103 below)

XSAs that DO NOT affect the security of Qubes OS

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • XSA-459
    • Qubes OS does not use Xapi.

About this announcement

Qubes OS uses the Xen hypervisor as part of its architecture. When the Xen Project publicly discloses a vulnerability in the Xen hypervisor, they issue a notice called a Xen security advisory (XSA). Vulnerabilities in the Xen hypervisor sometimes have security implications for Qubes OS. When they do, we issue a notice called a Qubes security bulletin (QSB). (QSBs are also issued for non-Xen vulnerabilities.) However, QSBs can provide only positive confirmation that certain XSAs do affect the security of Qubes OS. QSBs cannot provide negative confirmation that other XSAs do not affect the security of Qubes OS. Therefore, we also maintain an XSA tracker, which is a comprehensive list of all XSAs publicly disclosed to date, including whether each one affects the security of Qubes OS. When new XSAs are published, we add them to the XSA tracker and publish a notice like this one in order to inform Qubes users that a new batch of XSAs has been released and whether each one affects the security of Qubes OS.

16 July, 2024 12:00AM

QSB-103: Double unlock in x86 guest IRQ handling (XSA-458)

We have published Qubes Security Bulletin (QSB) 103: Double unlock in x86 guest IRQ handling (XSA-458). The text of this QSB and its accompanying cryptographic signatures are reproduced below, followed by a general explanation of this announcement and authentication instructions.

Qubes Security Bulletin 103


             ---===[ Qubes Security Bulletin 103 ]===---

                             2024-07-16

          Double unlock in x86 guest IRQ handling (XSA-458)

User action
------------

Continue to update normally [1] in order to receive the security updates
described in the "Patching" section below. No other user action is
required in response to this QSB.

Summary
--------

On 2024-07-16, the Xen Project published XSA-458, "double unlock in x86
guest IRQ handling" [3]:
| An optional feature of PCI MSI called "Multiple Message" allows a
| device to use multiple consecutive interrupt vectors.  Unlike for
| MSI-X, the setting up of these consecutive vectors needs to happen all
| in one go.  In this handling an error path could be taken in different
| situations, with or without a particular lock held.  This error path
| wrongly releases the lock even when it is not currently held.

Impact
-------

An attacker who compromises a qube with an attached PCI device that has
multi-vector MSI capability (e.g., sys-net or sys-usb in the default
Qubes OS configuration) can attempt to exploit this vulnerability in
order to compromise Qubes OS.

Affected systems
-----------------

Both Qubes OS 4.1 and 4.2 are affected.

Patching
---------

The following packages contain security updates that address the
vulnerabilities described in this bulletin:

  For Qubes 4.1, in dom0:
  - Xen packages, version 4.14.6-10

  For Qubes 4.2, in dom0:
  - Xen packages, version 4.17.4-4

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [2] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [1]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.

Credits
--------

See the original Xen Security Advisory.

References
-----------

[1] https://www.qubes-os.org/doc/how-to-update/
[2] https://www.qubes-os.org/doc/testing/
[3] https://xenbits.xen.org/xsa/advisory-458.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

Source: qsb-103-2024.txt

Marek Marczykowski-Górecki’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEELRdx/k12ftx2sIn61lWk8hgw4GoFAmaWVqUACgkQ1lWk8hgw
4GpsVRAAl7tx8Ur4I658gDws+17f3JC9pNCk5Fzo2OFCc49gAxtcuvIiMcjYrgji
YLqtOIvTI5VjizJvtelfP3xcNQT3eGmg9uknvAHBnfcjLEUgU1mnk4R2+mmSfjvn
Im9pJK2kUdJsVi38oTUY7DepIsg/ExM4GLZj9JK2s6CQX5xCUFOaJemp3Vyt4d4t
HvZCss4dt51zI9NNv/CV0QTI+1inE1X2l+8BROquEjtF16Vxj0yOWnE0xvn5or4H
thgOy1A7PPD0Shv992sKUF8atq1EXD1KEMwX9mOm8eIc6tCWQcaAg401TfZ4KV8J
2uHLPhiZQ6TC6PMBxv63W4DxKsK/hub/DZhpCFJrHGSVUKNrueQTh0mlfH4lJLnp
GZ3M6haJL9vLXGJp/erhhp/lZn/4Ho1EYrLJ4JM8kINnw5/Le9iSC555GPNkyqU2
x4kvkpFU6Ab4heKSijnM2L8sb3aP3NcrsBE3dSLxFuIOvpdIuQLIUz1ILH/AaMuK
JdbNMpiZGUtPV7xrHWf7di/sUmwOzCxfSpl8dLO1+tyoVCkBdAVs/UO4D2n6V3Tu
e/9ob6DP9Qj/FiD8O44qO326SGzxEIiDOHA9FX4qK090CxYAkJCG6UDLgk7eJWyB
FMlDC5X0XDXAclKDkPYkNQNCn0i1yQK0jQFz/d+JeuOJBBqsWs4=
=fupv
-----END PGP SIGNATURE-----

Source: qsb-103-2024.txt.sig.marmarek

Simon Gaiser (aka HW42)’s PGP signature

-----BEGIN PGP SIGNATURE-----

iQIzBAABCgAdFiEE6hjn8EDEHdrv6aoPSsGN4REuFJAFAmaWRBMACgkQSsGN4REu
FJDU0w//adiXcXjwXcL19qA53FUAZNbxnuQaV2W7pVBqVQwMj32aQRXb0TzWLzRP
cJtGHG3l4Ft1YCd7+m8cVXD3H5onb0ScyYkyNPdIZWLdA3uWEjWq2/9POHnVh4Ly
RU4BKFwnZuA3WPpMSnskgfJO/F6nIvNI12aOULDN+zxOUm8HJvigdbVOVi8hSshd
KTeox2GsoEmF++/LT9nMmuQp/pL5Tki1czmYZaEIk4muaV9Omb4bbY7Dxkya8R76
2zrbykR4tdWFUbMJhYepGiVilyU8JUd0FmbjZVVStxUJ9LhrI9n7VmEJlxpeULFg
ZsT3xw2f4Kox+d3gPwlhs2ndQ9z4SbpLOLfXTAz/cNda94XDDINMRiAnYERHs5fU
J0AU0eGrXRd6rCl4XlbKgZw6SOOkHSgTv2yXW1Z0wezTz+70nLvsWRJfEvUrN1tD
1KuP61xjHcAIV8/sIPEbPGAN7S3SMkdrubjK9OygPGXwycnuzPoujifFsINAyDNO
n5rpAPBK1L77rlBxaSG0ITicIZ650fDfl9pbHJ1UNdWAxYuys2r7d4nwcg9cY6zg
WAN6PIPV6y4bK1tFpO+mzcWFCXZ2PiqTJvPYZcuv4Ok4/PEdTLtjIdRl6RiG/fLY
anQB494vrm39hMQWZJeYu60JQFBM7ObFf+yPgF3TQSu64GEkYUg=
=ARv2
-----END PGP SIGNATURE-----

Source: qsb-103-2024.txt.sig.simon

What is the purpose of this announcement?

The purpose of this announcement is to inform the Qubes community that a new Qubes security bulletin (QSB) has been published.

What is a Qubes security bulletin (QSB)?

A Qubes security bulletin (QSB) is a security announcement issued by the Qubes security team. A QSB typically provides a summary and impact analysis of one or more recently-discovered software vulnerabilities, including details about patching to address them. For a list of all QSBs, see Qubes security bulletins (QSBs).

Why should I care about QSBs?

QSBs tell you what actions you must take in order to protect yourself from recently-discovered security vulnerabilities. In most cases, security vulnerabilities are addressed by updating normally. However, in some cases, special user action is required. In all cases, the required actions are detailed in QSBs.

What are the PGP signatures that accompany QSBs?

A PGP signature is a cryptographic digital signature made in accordance with the OpenPGP standard. PGP signatures can be cryptographically verified with programs like GNU Privacy Guard (GPG). The Qubes security team cryptographically signs all QSBs so that Qubes users have a reliable way to check whether QSBs are genuine. The only way to be certain that a QSB is authentic is by verifying its PGP signatures.

Why should I care whether a QSB is authentic?

A forged QSB could deceive you into taking actions that adversely affect the security of your Qubes OS system, such as installing malware or making configuration changes that render your system vulnerable to attack. Falsified QSBs could sow fear, uncertainty, and doubt about the security of Qubes OS or the status of the Qubes OS Project.

How do I verify the PGP signatures on a QSB?

The following command-line instructions assume a Linux system with git and gpg installed. (For Windows and Mac options, see OpenPGP software.)

  1. Obtain the Qubes Master Signing Key (QMSK), e.g.:

    $ gpg --fetch-keys https://keys.qubes-os.org/keys/qubes-master-signing-key.asc
    gpg: directory '/home/user/.gnupg' created
    gpg: keybox '/home/user/.gnupg/pubring.kbx' created
    gpg: requesting key from 'https://keys.qubes-os.org/keys/qubes-master-signing-key.asc'
    gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
    gpg: key DDFA1A3E36879494: public key "Qubes Master Signing Key" imported
    gpg: Total number processed: 1
    gpg:               imported: 1
    

    (For more ways to obtain the QMSK, see How to import and authenticate the Qubes Master Signing Key.)

  2. View the fingerprint of the PGP key you just imported. (Note: gpg> indicates a prompt inside of the GnuPG program. Type what appears after it when prompted.)

    $ gpg --edit-key 0x427F11FD0FAA4B080123F01CDDFA1A3E36879494
    gpg (GnuPG) 2.2.27; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
       
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    gpg> fpr
    pub   rsa4096/DDFA1A3E36879494 2010-04-01 Qubes Master Signing Key
     Primary key fingerprint: 427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494
    
  3. Important: At this point, you still don’t know whether the key you just imported is the genuine QMSK or a forgery. In order for this entire procedure to provide meaningful security benefits, you must authenticate the QMSK out-of-band. Do not skip this step! The standard method is to obtain the QMSK fingerprint from multiple independent sources in several different ways and check to see whether they match the key you just imported. For more information, see How to import and authenticate the Qubes Master Signing Key.

    Tip: After you have authenticated the QMSK out-of-band to your satisfaction, record the QMSK fingerprint in a safe place (or several) so that you don’t have to repeat this step in the future.

  4. Once you are satisfied that you have the genuine QMSK, set its trust level to 5 (“ultimate”), then quit GnuPG with q.

    gpg> trust
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: unknown       validity: unknown
    [ unknown] (1). Qubes Master Signing Key
       
    Please decide how far you trust this user to correctly verify other users' keys
    (by looking at passports, checking fingerprints from different sources, etc.)
       
      1 = I don't know or won't say
      2 = I do NOT trust
      3 = I trust marginally
      4 = I trust fully
      5 = I trust ultimately
      m = back to the main menu
       
    Your decision? 5
    Do you really want to set this key to ultimate trust? (y/N) y
       
    pub  rsa4096/DDFA1A3E36879494
         created: 2010-04-01  expires: never       usage: SC
         trust: ultimate      validity: unknown
    [ unknown] (1). Qubes Master Signing Key
    Please note that the shown key validity is not necessarily correct
    unless you restart the program.
       
    gpg> q
    
  5. Use Git to clone the qubes-secpack repo.

    $ git clone https://github.com/QubesOS/qubes-secpack.git
    Cloning into 'qubes-secpack'...
    remote: Enumerating objects: 4065, done.
    remote: Counting objects: 100% (1474/1474), done.
    remote: Compressing objects: 100% (742/742), done.
    remote: Total 4065 (delta 743), reused 1413 (delta 731), pack-reused 2591
    Receiving objects: 100% (4065/4065), 1.64 MiB | 2.53 MiB/s, done.
    Resolving deltas: 100% (1910/1910), done.
    
  6. Import the included PGP keys. (See our PGP key policies for important information about these keys.)

    $ gpg --import qubes-secpack/keys/*/*
    gpg: key 063938BA42CFA724: public key "Marek Marczykowski-Górecki (Qubes OS signing key)" imported
    gpg: qubes-secpack/keys/core-devs/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 8C05216CE09C093C: 1 signature not checked due to a missing key
    gpg: key 8C05216CE09C093C: public key "HW42 (Qubes Signing Key)" imported
    gpg: key DA0434BC706E1FCF: public key "Simon Gaiser (Qubes OS signing key)" imported
    gpg: key 8CE137352A019A17: 2 signatures not checked due to missing keys
    gpg: key 8CE137352A019A17: public key "Andrew David Wong (Qubes Documentation Signing Key)" imported
    gpg: key AAA743B42FBC07A9: public key "Brennan Novak (Qubes Website & Documentation Signing)" imported
    gpg: key B6A0BB95CA74A5C3: public key "Joanna Rutkowska (Qubes Documentation Signing Key)" imported
    gpg: key F32894BE9684938A: public key "Marek Marczykowski-Górecki (Qubes Documentation Signing Key)" imported
    gpg: key 6E7A27B909DAFB92: public key "Hakisho Nukama (Qubes Documentation Signing Key)" imported
    gpg: key 485C7504F27D0A72: 1 signature not checked due to a missing key
    gpg: key 485C7504F27D0A72: public key "Sven Semmler (Qubes Documentation Signing Key)" imported
    gpg: key BB52274595B71262: public key "unman (Qubes Documentation Signing Key)" imported
    gpg: key DC2F3678D272F2A8: 1 signature not checked due to a missing key
    gpg: key DC2F3678D272F2A8: public key "Wojtek Porczyk (Qubes OS documentation signing key)" imported
    gpg: key FD64F4F9E9720C4D: 1 signature not checked due to a missing key
    gpg: key FD64F4F9E9720C4D: public key "Zrubi (Qubes Documentation Signing Key)" imported
    gpg: key DDFA1A3E36879494: "Qubes Master Signing Key" not changed
    gpg: key 1848792F9E2795E9: public key "Qubes OS Release 4 Signing Key" imported
    gpg: qubes-secpack/keys/release-keys/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key D655A4F21830E06A: public key "Marek Marczykowski-Górecki (Qubes security pack)" imported
    gpg: key ACC2602F3F48CB21: public key "Qubes OS Security Team" imported
    gpg: qubes-secpack/keys/security-team/retired: read error: Is a directory
    gpg: no valid OpenPGP data found.
    gpg: key 4AC18DE1112E1490: public key "Simon Gaiser (Qubes Security Pack signing key)" imported
    gpg: Total number processed: 17
    gpg:               imported: 16
    gpg:              unchanged: 1
    gpg: marginals needed: 3  completes needed: 1  trust model: pgp
    gpg: depth: 0  valid:   1  signed:   6  trust: 0-, 0q, 0n, 0m, 0f, 1u
    gpg: depth: 1  valid:   6  signed:   0  trust: 6-, 0q, 0n, 0m, 0f, 0u
    
  7. Verify signed Git tags.

    $ cd qubes-secpack/
    $ git tag -v `git describe`
    object 266e14a6fae57c9a91362c9ac784d3a891f4d351
    type commit
    tag marmarek_sec_266e14a6
    tagger Marek Marczykowski-Górecki 1677757924 +0100
       
    Tag for commit 266e14a6fae57c9a91362c9ac784d3a891f4d351
    gpg: Signature made Thu 02 Mar 2023 03:52:04 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    

The exact output will differ, but the final line should always start with gpg: Good signature from... followed by an appropriate key. The [full] indicates full trust, which this key inherits by virtue of being validly signed by the QMSK.

  8. Verify PGP signatures, e.g.:

    $ cd QSBs/
    $ gpg --verify qsb-087-2022.txt.sig.marmarek qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 04:05:51 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify qsb-087-2022.txt.sig.simon qsb-087-2022.txt
    gpg: Signature made Wed 23 Nov 2022 03:50:42 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    $ cd ../canaries/
    $ gpg --verify canary-034-2023.txt.sig.marmarek canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 03:51:48 AM PST
    gpg:                using RSA key 2D1771FE4D767EDC76B089FAD655A4F21830E06A
    gpg: Good signature from "Marek Marczykowski-Górecki (Qubes security pack)" [full]
    $ gpg --verify canary-034-2023.txt.sig.simon canary-034-2023.txt
    gpg: Signature made Thu 02 Mar 2023 01:47:52 AM PST
    gpg:                using RSA key EA18E7F040C41DDAEFE9AA0F4AC18DE1112E1490
    gpg: Good signature from "Simon Gaiser (Qubes Security Pack signing key)" [full]
    

    Again, the exact output will differ, but the final line of output from each gpg --verify command should always start with gpg: Good signature from... followed by an appropriate key.

For this announcement (QSB-103), the commands are:

$ gpg --verify qsb-103-2024.txt.sig.marmarek qsb-103-2024.txt
$ gpg --verify qsb-103-2024.txt.sig.simon qsb-103-2024.txt

You can also verify the signatures directly from this announcement in addition to or instead of verifying the files from the qubes-secpack. Simply copy and paste the QSB-103 text into a plain text file and do the same for both signature files. Then, perform the same authentication steps as listed above, substituting the filenames above with the names of the files you just created.

16 July, 2024 12:00AM

July 15, 2024

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 848

Welcome to the Ubuntu Weekly Newsletter, Issue 848 for the week of July 7 -13, 2024. The full version of this issue is available here.

In this issue we cover:

  • Ubuntu 23.10 Reached End of Life: Here’s What You Can Do!
  • Ubuntu Stats
  • Hot in Support
  • LoCo Events
  • Call for Volunteers: Core Dev Office Hours @ Ubuntu Summit 2024
  • LXD 6.1 has been released
  • The 2024.07.08 SRU Cycle started
  • Ubuntu & Flavour Member alias update
  • Ubuntu Desktop’s 24.10 Dev Cycle – Part 3: July Update
  • RISC-V Summit Munich 2024
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


15 July, 2024 10:44PM by guiverc

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Deploying and scaling Apache Spark on Amazon EKS

Introduction

Apache Spark, a framework for parallel distributed data processing, has become a popular choice for building streaming applications, data lakehouses and big data extract-transform-load (ETL) processing. It is horizontally scalable, fault-tolerant, and performs well at high scale. Historically, however, managing and scaling Spark jobs running on Apache Hadoop clusters could be challenging and time-consuming for many reasons, not least the need to provision physical systems and to configure the Kerberos security protocol that Hadoop uses. But there is a new kid in town – Kubernetes – as an alternative to Apache Hadoop. Kubernetes is an open-source platform for deployment and management of nearly any type of containerized application. In this article we’ll walk through the process of deploying Apache Spark on Amazon EKS with Canonical’s Charmed Spark solution.

Kubernetes provides a robust foundation platform for Spark based data processing jobs and applications. Versus Hadoop, it offers more flexible security and networking models, and a ubiquitous platform that can co-host auxiliary applications that complement your Spark workloads – like Apache Kafka or MongoDB. Best of all, most of the key capabilities of Hadoop YARN are also available to Kubernetes – such as gang scheduling – through Kubernetes extensions like Volcano.

You can launch Spark jobs on a Kubernetes cluster directly from the Spark command line tooling, without the need for any extras, but there are some helpful extra components that can be deployed to Kubernetes with an operator. An operator is a piece of software that “operates” the component for you – taking care of deployment, configuration and other tasks associated with the component’s lifecycle.

Without further ado, let’s learn how to deploy Spark on Amazon Elastic Kubernetes Service (Amazon EKS) using Juju charms from Canonical. Juju is an open-source orchestration engine for software operators that helps customers simplify working with sophisticated, distributed applications like Spark on Kubernetes and on cloud servers.

To get a Spark cluster environment up and ready on EKS, we’ll use the spark-client and juju snaps. Snaps are applications bundled with their dependencies, able to work across a wide range of Linux distributions without modifications. It is a hardened software packaging format with an enhanced security posture. You can learn more about snaps at snapcraft.io.

Solution overview

The following diagram shows the solution that we will implement in this post.

Diagram illustrating the architecture of an Apache Spark lake house running on an Amazon EKS cluster.

In this post, you will learn how to provision the resources depicted in the diagram from your Ubuntu workstation. These resources are:

  • A Virtual Private Cloud (VPC)
  • An Amazon Elastic Kubernetes Service (Amazon EKS) Cluster with one node group using two spot instance pools
  • Amazon EKS add-ons: CoreDNS, kube-proxy, and the EBS CSI driver
  • A Cluster Autoscaler
  • Canonical Observability Stack deployed to the EKS cluster
  • Prometheus Push Gateway deployed to the EKS cluster
  • Spark History Server deployed to the EKS cluster
  • Traefik deployed to the EKS cluster
  • An Amazon EC2 edge node with the spark-client and juju snaps installed
  • An S3 bucket for data storage
  • An S3 bucket for job log storage

Walkthrough

Prerequisites

Ensure that you are running an Ubuntu workstation, have an AWS account, a profile with administrator permissions configured and the following tools installed locally:

  • Ubuntu 22.04 LTS
  • AWS Command Line Interface (AWS CLI)
  • kubectl snap
  • eksctl
  • spark-client snap
  • juju snap

Deploy infrastructure

You will need to set up your AWS credentials profile locally before running AWS CLI commands. Run the following commands to deploy the environment and EKS cluster. The deployment should take approximately 20 minutes.

snap install aws-cli --classic
snap install juju
snap install kubectl

aws configure
# enter the necessary details when prompted

wget https://github.com/eksctl-io/eksctl/releases/download/v0.173.0/eksctl_Linux_amd64.tar.gz
tar xzf eksctl_Linux_amd64.tar.gz
cp eksctl $HOME/.local/bin

cat > cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
    name: spark-cluster
    region: us-east-1
    version: "1.29"
iam:
  withOIDC: true

addons:
- name: aws-ebs-csi-driver
  wellKnownPolicies:
    ebsCSIController: true

nodeGroups:
    - name: ng-1
      minSize: 2
      maxSize: 5
      iam:
        withAddonPolicies:
          autoScaler: true
        attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
      instancesDistribution:
        maxPrice: 0.15
        instanceTypes: ["m5.xlarge", "m5.large"]
        onDemandBaseCapacity: 0
        onDemandPercentageAboveBaseCapacity: 50
        spotInstancePools: 2
EOF

eksctl create cluster --ssh-access -f cluster.yaml

Verify the deployment

List Amazon EKS nodes

The following command will update the kubeconfig on your local machine and allow you to interact with your Amazon EKS Cluster using kubectl to validate the deployment.

aws eks --region us-east-1 update-kubeconfig --name spark-cluster

Check if the deployment has created two nodes.

kubectl get nodes

# Output should look like below
NAME  STATUS  ROLES AGE  VERSION
ip-10-1-0-100.us-west-2.compute.internal   Ready    <none>   62m   v1.27.7-eks-e71965b
ip-10-1-1-101.us-west-2.compute.internal   Ready    <none>   27m   v1.27.7-eks-e71965b

Configure Spark History Server

Once the cluster has been created, you will need to adapt the kubeconfig configuration file so that the spark-client tooling can use it.

TOKEN=$(aws eks get-token --region us-east-1 --cluster-name spark-cluster --output json | jq -r '.status.token')

# replace the token line in the kubeconfig with the freshly issued token (requires the jq package)
sed -i "s/^    token: .*$/    token: $TOKEN/" $HOME/.kube/config

The following commands create buckets on S3 for spark’s data and logs.

aws s3api create-bucket --bucket spark-on-eks-data --region us-east-1
aws s3api create-bucket --bucket spark-on-eks-logs --region us-east-1

The next step is to configure Juju so that we can deploy the Spark History Server. Run the following commands:

cat $HOME/.kube/config | juju add-k8s eks-cloud

juju add-model spark eks-cloud
juju deploy spark-history-server-k8s --channel=3.4/stable
juju deploy s3-integrator
juju deploy traefik-k8s --trust
juju deploy prometheus-pushgateway-k8s --channel=edge
juju deploy prometheus-scrape-config-k8s scrape-interval-config

juju config s3-integrator bucket="spark-on-eks-logs" path="spark-events"
juju run s3-integrator/leader sync-s3-credentials access-key=${AWS_ACCESS_KEY_ID} secret-key=${AWS_SECRET_ACCESS_KEY}
juju integrate s3-integrator spark-history-server-k8s
juju integrate traefik-k8s spark-history-server-k8s

Configure monitoring

We can integrate our Spark jobs with our monitoring stack. Run the following commands to deploy the monitoring stack and integrate the Prometheus Pushgateway.

juju add-model observability eks-cloud

curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/storage-small-overlay.yaml -O

juju deploy cos-lite \
  --trust \
  --overlay ./storage-small-overlay.yaml

juju deploy cos-configuration-k8s --config git_repo=https://github.com/canonical/charmed-spark-rock --config git_branch=dashboard \
  --config git_depth=1 --config grafana_dashboards_path=dashboards/prod/grafana/
juju-wait

juju integrate cos-configuration-k8s grafana

juju switch spark
juju consume admin/observability.prometheus prometheus-metrics
juju integrate prometheus-pushgateway-k8s prometheus-metrics
juju integrate scrape-interval-config prometheus-pushgateway-k8s
juju integrate scrape-interval-config:metrics-endpoint prometheus-metrics

PROMETHEUS_GATEWAY_IP=$(juju status --format=yaml | yq ".applications.prometheus-pushgateway-k8s.address")

Create and run a sample Spark job

Spark jobs are data processing applications that you develop using either Python or Scala. Spark jobs distribute data processing across multiple Spark executors, enabling parallel, distributed processing so that jobs complete faster.

We’ll start an interactive session that launches Spark on the cluster and allows us to write a processing job in real time. First we’ll set some configuration for our spark jobs.

cat > spark.conf <<EOF
spark.eventLog.enabled=true
spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
spark.hadoop.fs.s3a.connection.ssl.enabled=true
spark.hadoop.fs.s3a.path.style.access=true
spark.hadoop.fs.s3a.access.key=${AWS_ACCESS_KEY_ID}
spark.hadoop.fs.s3a.secret.key=${AWS_SECRET_ACCESS_KEY}
spark.eventLog.dir=s3a://spark-on-eks-logs/spark-events/ 
spark.history.fs.logDirectory=s3a://spark-on-eks-logs/spark-events/
spark.driver.log.persistToDfs.enabled=true
spark.driver.log.dfsDir=s3a://spark-on-eks-logs/spark-events/
spark.metrics.conf.driver.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
spark.metrics.conf.driver.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
spark.metrics.conf.driver.sink.prometheus.enable-dropwizard-collector=true
spark.metrics.conf.driver.sink.prometheus.period=1
spark.metrics.conf.driver.sink.prometheus.metrics-name-capture-regex=([a-zA-Z0-9]*_[a-zA-Z0-9]*_[a-zA-Z0-9]*_)(.+)
spark.metrics.conf.driver.sink.prometheus.metrics-name-replacement=\$2
spark.metrics.conf.executor.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
spark.metrics.conf.executor.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
spark.metrics.conf.executor.sink.prometheus.enable-dropwizard-collector=true
spark.metrics.conf.executor.sink.prometheus.period=1
spark.metrics.conf.executor.sink.prometheus.metrics-name-capture-regex=([a-zA-Z0-9]*_[a-zA-Z0-9]*_[a-zA-Z0-9]*_)(.+)
spark.metrics.conf.executor.sink.prometheus.metrics-name-replacement=\$2
EOF

spark-client.service-account-registry create --username spark --namespace spark --primary --properties-file spark.conf --kubeconfig $HOME/.kube/config

Start a Spark shell

To start an interactive pyspark shell, you can run the following command. This will enable you to interactively run commands from your Ubuntu workstation, which will be executed in a spark session running on the EKS cluster. In order for this to work, the cluster nodes need to be able to route IP traffic to the Spark “driver” running on your workstation. To enable routing between your EKS worker nodes and your Ubuntu workstation, we will use sshuttle.

sudo apt install sshuttle
eks_node=$(kubectl get nodes -o wide | tail -n 1 | awk '{print $7}')
sshuttle --dns -NHr ec2-user@${eks_node} 0.0.0.0/0

Now open another terminal and start a pyspark shell:

spark-client.pyspark --username spark --namespace spark

You should see output similar to the following:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.4.2
      /_/

Using Python version 3.10.12 (main, Nov 20 2023 15:14:05)
Spark context Web UI available at http://10.1.0.1:4040
Spark context available as 'sc' (master = k8s://https://10.1.0.15:16443, app id = spark-83a5f8365dda47d29a60cac2d4fa5a09).
SparkSession available as 'spark'.
>>> 

Write a Spark job

From the interactive pyspark shell, we can write a simple demonstration job that will be processed in a parallel, distributed manner on the EKS cluster. Enter the following commands:

lines = """Canonical's Charmed Data Platform solution for Apache Spark runs Spark jobs on your Kubernetes cluster.
You can get started right away with MicroK8s - the mightiest tiny Kubernetes distro around! 
The spark-client snap simplifies the setup process to get you running Spark jobs against your Kubernetes cluster. 
Spark on Kubernetes is a complex environment with many moving parts.
Sometimes, small mistakes can take a lot of time to debug and figure out.
"""

def count_vowels(text: str) -> int:
  count = 0
  for char in text:
    if char.lower() in "aeiou":
      count += 1
  return count

from operator import add
spark.sparkContext.parallelize(lines.splitlines(), 2).map(count_vowels).reduce(add)

To exit the pyspark shell, type quit().

Access Spark History Server

To access the Spark History Server, we’ll use a Juju command to get the URL for the service, which you can copy and paste into your browser:

juju run traefik-k8s/leader -m spark show-proxied-endpoints

# you should see output like
Running operation 53 with 1 task
  - task 54 on unit-traefik-k8s-0

Waiting for task 54...
proxied-endpoints: '{"spark-history-server-k8s": {"url": "https://10.1.0.186/spark-model-spark-history-server-k8s"}}'

You should see a URL in the response which you can use in order to connect to the Spark History Server.

Scaling your Spark cluster

Being able to scale a Spark cluster is useful because scaling out by adding more capacity allows the cluster to run more Spark executors in parallel, so large jobs complete faster. Furthermore, more jobs can run concurrently.

Spark is designed to be scalable. If you need more capacity at certain times of the day or week, you can scale out by adding nodes to the underlying Kubernetes cluster or scale in by removing nodes. Since data is persisted externally to the Spark cluster in S3, there is limited risk of data loss. This flexibility allows you to adapt your system to meet changing demands and ensure optimal performance and cost efficiency.

To run a spark job with dynamic resource scaling, use the additional configuration parameters shown below.

spark-client.spark-submit \
…
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.shuffleTracking.enabled=true \
--conf spark.dynamicAllocation.shuffleTracking.timeout=120 \
--conf spark.dynamicAllocation.minExecutors=10 \
--conf spark.dynamicAllocation.maxExecutors=40 \
--conf spark.kubernetes.allocation.batch.size=10 \
--conf spark.dynamicAllocation.executorAllocationRatio=1 \
--conf spark.dynamicAllocation.schedulerBacklogTimeout=1 \
…

The EKS cluster is already configured to support auto scaling of the node group, so that as demand for resources from Spark jobs increases, additional EKS worker nodes are brought online.
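The node group in cluster.yaml is created with the autoscaler IAM add-on policy, but the Kubernetes Cluster Autoscaler itself still needs to be running in the cluster. A minimal sketch of deploying it with Helm (the chart values shown are assumptions to adapt to your environment, and Helm itself is an extra prerequisite):

helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=spark-cluster \
  --set awsRegion=us-east-1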

View Spark job stats in Grafana

The solution installs Canonical Observability Stack (COS), which includes Prometheus and Grafana, and comes with ready to use Grafana dashboards. You can fetch the secret for login as well as the URL to the Grafana Dashboard by running the following command:

juju switch observability
juju run grafana/leader get-admin-password

Enter admin as username and the password from the previous command.

Open Spark dashboard

Navigate to the Spark dashboard. You should be able to see metrics from long running Spark jobs.


Conclusion

In this post, we saw how to deploy Spark on Amazon EKS with autoscaling. Additionally, we explored the benefits of using Juju charms to rapidly deploy and manage a complete Spark solution. If you would like to learn more about Charmed Spark – Canonical’s supported solution for Apache Spark, then you can visit the Charmed Spark product page, contact the commercial team, or chat with the engineers on Matrix.

15 July, 2024 03:51PM

hackergotchi for Tails

Tails

Tails 6.5

Changes and updates

Fixed problems

  • Fix preparation for first use often breaking legacy BIOS boot and creation of Persistent Storage. (#20451)

  • Fix language of Tor Browser when started from Tor Connection. (#20318)

  • Fix connection via mobile broadband, LTE, and PPPoE DSL. (#20291, #20433)

For more details, read our changelog.

Known issues

  • It is impossible to connect using the default Tor bridges already included in Tails. (#20467)

    If you habitually use default bridges, try to connect without bridges: this is just as safe. If this fails, configure a custom bridge.

Get Tails 6.5

To upgrade your Tails USB stick and keep your Persistent Storage

  • Automatic upgrades are available from Tails 6.0 or later to 6.5.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails 6.5 on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 6.5 directly:

15 July, 2024 12:34PM

hackergotchi for Purism PureOS

Purism PureOS

Your Phone is Giving Away More Than You Ever Bargained For

The year 2007 represented a watershed moment for the modern smartphone industry as we know it. This was the year that Apple introduced its first iPhone device and Google announced Android via its participation in The Open Handset Alliance (which notably leveraged the Linux kernel and open-source code to create its mobile OS). Fast forward […]

The post Your Phone is Giving Away More Than You Ever Bargained For appeared first on Purism.

15 July, 2024 12:05AM by Randy Siegel

Federal Government Mobility Veteran Randy Siegel Joins Purism

Purism is pleased to announce that long-time mobility industry insider Randy Siegel has joined the company to direct our strategic government business development efforts. “Adding Randy to lead our secure mobile offering is a natural fit to expand our governmental focused Liberty Phone manufactured at our facility on US Soil. We are extremely excited that […]

The post Federal Government Mobility Veteran Randy Siegel Joins Purism appeared first on Purism.

15 July, 2024 12:00AM by Purism

July 14, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Salih Emin: uCareSystem 24.07.14: Improved System Restart Detection

uCareSystem has had the ability to detect if a system reboot is needed after applying maintenance tasks for some time now. With the new release, it will also show you the list of packages that requested the reboot. Additionally, the new release has squashed some annoying bugs. Restart? Why though? uCareSystem has had […]

14 July, 2024 05:03PM

July 13, 2024

hackergotchi for Whonix

Whonix

Whonix 17.2.0.1 - All Platforms - Point Release!

Download

(What is a point release?)


Upgrade

Alternatively, an in-place release upgrade is possible using the Whonix repository.


This release would not have been possible without the numerous supporters of Whonix!


Please Donate!


Please Contribute!


Major Changes


Full difference of all changes

https://github.com/Whonix/derivative-maker/compare/17.1.3.1-developers-only…17.2.0.1-developers-only

1 post - 1 participant

Read full topic

13 July, 2024 10:26PM by Patrick

hackergotchi for Qubes

Qubes

Qubes OS 4.2.2 has been released!

We’re pleased to announce the stable release of Qubes OS 4.2.2! This patch release aims to consolidate all the security patches, bug fixes, and other updates that have occurred since the previous stable release. Our goal is to provide a secure and convenient way for users to install (or reinstall) the latest stable Qubes release with an up-to-date ISO. The ISO and associated verification files are available on the downloads page.

What’s new in Qubes 4.2.2?

For more information about the changes included in this version, see the Qubes OS 4.2 release notes and the full list of issues completed since the previous stable release.

Copying and moving files between qubes is less restrictive

Qubes 4.2.2 includes a fix for #8332: File-copy qrexec service is overly restrictive. As explained in the issue comments, we introduced a change in Qubes 4.2.0 that caused inter-qube file-copy/move actions to reject filenames containing, e.g., non-Latin characters and certain symbols. The rationale for this change was to mitigate the security risks associated with unusual unicode characters and invalid encoding in filenames, which some software might handle in an unsafe manner and which might cause confusion for users. Such a change represents a trade-off between security and usability.

After the change went live, we received several user reports indicating more severe usability problems than we had anticipated. Moreover, these problems were prompting users to resort to dangerous workarounds (such as packing files into an archive format prior to copying) that carry far more risk than the original risk posed by the unrestricted filenames. In addition, we realized that this was a backward-incompatible change that should not have been introduced in a minor release in the first place.

Therefore, we have decided, for the time being, to restore the original (pre-4.2) behavior by introducing a new allow-all-names argument for the qubes.Filecopy service. By default, qvm-copy and similar tools will use this less restrictive service (qubes.Filecopy +allow-all-names) whenever they detect any files that would have been blocked by the more restrictive service (qubes.Filecopy +). If no such files are detected, they will use the more restrictive service.

Users who wish to opt for the more restrictive 4.2.0 and 4.2.1 behavior can do so by modifying their RPC policy rules. To switch a single rule to the more restrictive behavior, change * in the argument column to + (i.e., change “any argument” to “only empty”). To use the more restrictive behavior globally, add the following “deny” rule before all other relevant rules:

qubes.Filecopy    +allow-all-names    @anyvm    @anyvm    deny
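For example, in dom0 this rule can be added to a user policy file under /etc/qubes/policy.d/, which is evaluated before the default policy files. A minimal sketch (30-user.policy is the conventional user policy file; if you already have other qubes.Filecopy rules there, place this line above them):

echo 'qubes.Filecopy  +allow-all-names  @anyvm  @anyvm  deny' | sudo tee -a /etc/qubes/policy.d/30-user.policy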

For more information, see RPC policies and Qube configuration interface.

How to get Qubes 4.2.2

You have a few different options, depending on your situation:

  • If you’d like to install Qubes OS for the first time or perform a clean reinstallation on an existing system, there’s never been a better time to do so! Simply download the Qubes 4.2.2 ISO and follow our installation guide.

  • If you’re currently on Qubes 4.1, learn how to upgrade to Qubes 4.2.

  • If you’re currently on Qubes 4.2 (including 4.2.0, 4.2.1, and 4.2.2-rc1), update normally (which includes upgrading any EOL templates you might have) in order to make your system essentially equivalent to the stable Qubes 4.2.2 release. No reinstallation or other special action is required.

In all cases, we strongly recommend making a full backup beforehand.

Reminder: new signing key for Qubes 4.2

As a reminder for those upgrading from Qubes 4.1 and earlier, we published the following special announcement in Qubes Canary 032 on 2022-09-14:

We plan to create a new Release Signing Key (RSK) for Qubes OS 4.2. Normally, we have only one RSK for each major release. However, for the 4.2 release, we will be using Qubes Builder version 2, which is a complete rewrite of the Qubes Builder. Out of an abundance of caution, we would like to isolate the build processes of the current stable 4.1 release and the upcoming 4.2 release from each other at the cryptographic level in order to minimize the risk of a vulnerability in one affecting the other. We are including this notice as a canary special announcement since introducing a new RSK for a minor release is an exception to our usual RSK management policy.

As always, we encourage you to authenticate this canary by verifying its PGP signatures. Specific instructions are also included in the canary announcement.

As with all Qubes signing keys, we also encourage you to authenticate the Qubes OS Release 4.2 Signing Key, which is available in the Qubes Security Pack (qubes-secpack) as well as on the downloads page.

What is a patch release?

The Qubes OS Project uses the semantic versioning standard. Version numbers are written as <major>.<minor>.<patch>. Hence, we refer to releases that increment the third number as “patch releases.” A patch release does not designate a separate, new major or minor release of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.2) inclusive of all updates up to a certain point. (See supported releases for a comprehensive list of major and minor releases.) Installing the initial Qubes 4.2.0 release and fully updating it results in essentially the same system as installing Qubes 4.2.2. You can learn more about how Qubes release versioning works in the version scheme documentation.

13 July, 2024 12:00AM

July 12, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: Announcing Incus 6.3

This release includes the long awaited OCI/Docker image support!
With this, users who previously ran Docker alongside Incus, or Docker inside of an Incus container, just to run some pretty simple software that’s only distributed as OCI images, can now do it directly in Incus.

In addition to the OCI container support, this release also comes with:

  • Baseline CPU definition within clusters
  • Filesystem support for io.bus and io.cache
  • Improvements to incus top
  • CPU flags in server resources
  • Unified image support in incus-simplestreams
  • Completion of libovsdb transition

The full announcement and changelog can be found here.
And for those who prefer videos, here’s the release overview video:

You can take the latest release of Incus up for a spin through our online demo service at: https://linuxcontainers.org/incus/try-it/

And as always, my company is offering commercial support on Incus, ranging from by-the-hour support contracts to one-off services on things like initial migration from LXD, review of your deployment to squeeze the most out of Incus or even feature sponsorship. You’ll find all details of that here: https://zabbly.com/incus

Donations towards my work on this and other open source projects is also always appreciated, you can find me on Github Sponsors, Patreon and Ko-fi.

Enjoy!

12 July, 2024 05:05PM

Simos Xenitellis: Running OCI images (i.e. Docker) directly in Incus

One of the cool new features in Incus 6.3 is the ability to run OCI images (such as those for Docker) directly in Incus.

You can certainly install the Docker packages in an Incus instance but that would put you in a situation of running a container in a container. Why not let Docker be a first-class citizen in the Incus ecosystem?

Note that this feature is new to Incus, which means that if you encounter issues, please discuss and report them.

Launching the docker.io nginx OCI container image in Incus.


Background

In Incus you typically run system containers, which are containers that have been setup to resemble a virtual machine (VM). That is, you launch a system container in Incus, and this system container keeps running until you stop it. Just like with VMs.

In contrast, with Docker you are running application containers. You launch the Docker container with some configuration to perform a task, the task is performed, and the container stops. The task might also be something long-lived, like a Web server. In that case, the application container will have a longer lifetime. With application containers you are thinking primarily about tasks. You stop the task and the container is gone.

Prerequisites

You need to install Incus 6.3. If you are using Debian or Ubuntu, you would select the stable repository of Incus.

$ incus version
Client version: 6.3
Server version: 6.3
$

Adding the Docker repository to Incus

The container images from Docker follow the Open Container Image (OCI) format. There is also a special way to access those images through the Docker Hub Container Image Repository, which is distinctive from the other ways supported by Incus.

We will be adding (once only) a remote for the Docker repository. A remote is a configuration to access images from a particular repository of such images. Let’s see what we already have. We run incus remote list, which invokes the list command of the remote functionality (incus remote). There are two remotes: images, which is the standard repository of container and virtual machine images for Incus, and local, which is the remote of the local Incus installation. Every installation of Incus has such a default remote.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

The Docker repository has the URL https://docker.io and it is accessed through a different protocol. It is called oci.

Therefore, to add the Docker repository, we need to run incus remote add with the appropriate parameters. The URL is https://docker.io and the --protocol is oci.

$ incus remote add docker https://docker.io --protocol=oci
$

Let’s list again the available Incus remotes. The docker remote has been added successfully.

$ incus remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
|      NAME       |                URL                 |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| docker          | https://docker.io                  | oci           | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | incus         | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
$ 

If we ever want to remove a remote called myremote, we would incus remote remove myremote.

Launching a Docker image in Incus

When you launch (install and run) a container in Incus, you use incus launch with the appropriate parameters. In this first example, we launch the image hello-world, which is one of the Docker official images. When run, it prints some text and then stops. In this case we used the parameter --console in order to see the text output. Finally, we use the --ephemeral parameter, which automatically deletes the container as soon as it stops. Ephemeral (εφήμερο) is a Greek word, meaning that it lasts only a brief time. Both of these additional parameters are optional but helpful in this specific case.

$ incus launch docker:hello-world --console --ephemeral
Launching the instance
Instance name is: best-feature                       

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

$ 

Note that we did not specify a name for the container. Astonishingly, Incus randomly selected the name best-feature with some editorial help. Since this is an ephemeral container and the specific image hello-world is short-lived, it is gone in a flash. Indeed, if you then run incus list, the Docker container is not found because it has been auto-deleted.

Let’s try another Docker image, the official nginx Docker image. It launches the nginx image which serves an empty nginx Web server. We need to run incus list and search for the IP address that was given to the container. Then, we can view the default page of the Web server in our Web browser.

$ incus launch docker:nginx --console --ephemeral
Launching the instance
Instance name is: best-feature                               
To detach from the console, press: <ctrl>+a q
2024/07/12 16:31:59 [notice] 21#21: using the "epoll" event method
2024/07/12 16:31:59 [notice] 21#21: nginx/1.27.0
2024/07/12 16:31:59 [notice] 21#21: built by gcc 12.2.0 (Debian 12.2.0-14) 
2024/07/12 16:31:59 [notice] 21#21: OS: Linux 6.5.0-41-generic
2024/07/12 16:31:59 [notice] 21#21: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/07/12 16:31:59 [notice] 21#21: start worker processes
2024/07/12 16:31:59 [notice] 21#21: start worker process 45
2024/07/12 16:31:59 [notice] 21#21: start worker process 46
2024/07/12 16:31:59 [notice] 21#21: start worker process 47
2024/07/12 16:31:59 [notice] 21#21: start worker process 48
2024/07/12 16:31:59 [notice] 21#21: start worker process 49
2024/07/12 16:31:59 [notice] 21#21: start worker process 50
2024/07/12 16:31:59 [notice] 21#21: start worker process 51
2024/07/12 16:31:59 [notice] 21#21: start worker process 52
2024/07/12 16:31:59 [notice] 21#21: start worker process 53
2024/07/12 16:31:59 [notice] 21#21: start worker process 54
2024/07/12 16:31:59 [notice] 21#21: start worker process 55
2024/07/12 16:31:59 [notice] 21#21: start worker process 56
10.10.10.1 - - [12/Jul/2024:16:33:29 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36" "-"
...
^C
...
2024/07/12 16:39:10 [notice] 21#21: signal 29 (SIGIO) received
2024/07/12 16:39:10 [notice] 21#21: signal 17 (SIGCHLD) received from 55
2024/07/12 16:39:10 [notice] 21#21: worker process 55 exited with code 0
2024/07/12 16:39:10 [notice] 21#21: exit
Error: stat /proc/-1: no such file or directory
$ 
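
If you want only the name, state and IPv4 address, incus list accepts a column selection flag. Here is a minimal sketch; the container name and address shown below are illustrative only, not taken from the run above.

$ incus list -c ns4
+--------------+---------+--------------------+
|     NAME     |  STATE  |        IPV4        |
+--------------+---------+--------------------+
| best-feature | RUNNING | 10.10.10.57 (eth0) |
+--------------+---------+--------------------+
$ 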

Discussion

How long is the lifecycle time of a minimal Docker container?

How long does it take to launch the docker:hello-world container? I prepend time to the command, and check the stats at the end of the execution. On my system and with my Internet connection (the image is already cached), it takes about four seconds to launch and run a simple Docker container.

$ time incus launch docker:hello-world --console --ephemeral
...
real	0m3,956s
user	0m0,016s
sys	0m0,016s
$ 

How long does it take to repeatedly run a minimal Docker container?

This time we remove the --ephemeral option. We launch the container and give it a relevant name, mydocker. This container remains after execution; we can see that after launching it, the container stays in a STOPPED state. We then start the container again with the command time incus start mydocker --console, and we can see that it takes a bit more than half a second to complete the execution.

$ incus launch docker:hello-world mydocker --console
Launching mydocker

Hello from Docker!
...
$ incus list mydocker
+----------+---------+------+------+-----------------+-----------+
|   NAME   |  STATE  | IPV4 | IPV6 |      TYPE       | SNAPSHOTS |
+----------+---------+------+------+-----------------+-----------+
| mydocker | STOPPED |      |      | CONTAINER (APP) | 0         |
+----------+---------+------+------+-----------------+-----------+
$ time incus start mydocker --console

Hello from Docker!
...

Hello from Docker!
...
real	0m0,638s
user	0m0,003s
sys	0m0,013s
$ 

Are the container images cached?

Yes, they are cached. When you launch a container for the first time, you can visibly see the downloading of the image components. Also, if you run incus image list, you can see the cached Docker.io images in the output.
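
To verify this yourself, you can list the cached images and, if needed, remove one by its fingerprint. This is just a sketch; the fingerprint below is a placeholder, not a real value.

$ incus image list
$ incus image delete abcdef123456
$ 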

Troubleshooting

Error: Can’t list images from OCI registry

You tried to list the images of the Docker Hub repository. Currently this is not supported. Technically it could be supported, though the repository has more than ten thousand images. Listing the registry without any filter may not make much sense, as the output would require downloading the full list of images from the repository. Searching for images does make sense, but if you can search, you should arguably be able to list as well. I am not sure what the maintainers’ view on this is.

$ incus image list docker:
Error: Can't list images from OCI registry
$ 

As a workaround, you can simply locate the image name from the Website at https://hub.docker.com/search?q=&image_filter=official

Error: stat /proc/-1: no such file or directory

I ran many, many instances of Docker containers and in a few cases I got the above error. I do not know what causes it, and I am adding it here in case someone manages to reproduce it. It feels like some kind of race condition.

Failed getting remote image info: Image not found

You have configured Incus properly and you definitely have Incus version 6.3 (both the client and the server). But still, Incus cannot find any Docker image, not even hello-world.

This can happen if you are using a packaging of Incus that does not include the skopeo and umoci packages. If you are using the Zabbly distribution of Incus, these programs are included in the Incus packaging. If you are using alternative packaging for Incus, you can manually install the versions of those packages provided by your Linux distribution.
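
On Debian and Ubuntu, for example, that would typically look like the following (a sketch; the package names may differ on other distributions).

$ sudo apt install skopeo umoci
$ 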

12 July, 2024 04:50PM

Ubuntu Blog: Managing OTA and telemetry in always-connected fleets

If you’ve been reading my blogs for the past two years, you know that the automotive industry is probably the most innovative one today. As a matter of fact, some of the biggest company valuations revolve around electric vehicles (EVs), autonomous driving (AD) and artificial intelligence (AI). As with any revolution, this one comes with its set of challenges. 

I’ve noticed that the most difficult technologies to grasp and master aren’t always the ones that seem the most complex at first sight. Over-the-air (OTA) updates and managing the telemetry of a fleet are typically some of the most promising, but also deceptively complicated, technologies in automotive today.

The unspoken power of OTA

Over-the-air updates have the potential to completely reshuffle the cards when it comes to the value of a vehicle. By enabling remote software and firmware updates, OTA eliminates the need for physical intervention while delivering security fixes and new features, making your car feel brand new. When applied throughout an entire fleet, you can imagine the savings, as well as the value added to vehicles already on the road today.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXeGHajtAJaCLVXjaedrILZ3xOKbIaPwRce-l-heooQYsvuhQuxgXP72roJ5tPBuGBlDQWX43Bp6NSfC9bRaBcpOkjjzsfKGGuZ8zWUWpzHAD-7oP9m-aGfyA2SIQOvUV9IQgi0sZWHqZcu-pXnlo83eYG_e?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

By minimizing the need for vehicles to be taken to dealerships or garages for updates and maintenance, OTA updates ensure higher availability of vehicles. Similarly, by adding new features and improvements to your vehicles, you enhance the overall user experience.

You’ve probably heard of new cybersecurity automotive regulatory requirements that OEMs need to comply with, like ISO 21434. OTA updates can help OEMs meet these regulatory requirements, by remotely ensuring that vehicles have the latest and greatest security patches against vulnerabilities and cybersecurity threats.

At Canonical, we believe that one of the most effective ways to manage OTA updates is through the use of a Dedicated Snap Store. It provides a centralised platform for distributing and managing your software packages, while keeping the whole process secure. As a single point of control for your software updates, it makes it easier to manage and deploy these packages across your fleet.

For example, an OEM could use a Snap Store to manage updates for various car models and versions, allowing for precise control over which updates are deployed to which variant, so that each vehicle receives the necessary updates for its specific configuration.

All of these updates follow very strict security guidelines, ensuring that only authorised packages are delivered to your vehicles. By using delta mechanisms, you can optimise your download sizes and update times, which is especially useful for large fleets.
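
As a rough sketch of what this looks like on a device, a snap from a Dedicated Snap Store can be installed from, and kept on, a specific channel, and pending updates can be inspected before they land. The snap name and track below are hypothetical, for illustration only, and a real device would also need to be registered with the brand store.

$ snap install fleet-telematics --channel=model-x/stable
$ snap refresh --list
$ snap refresh fleet-telematics --channel=model-x/candidate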

Telemetry for enhanced efficiency

Telemetry involves the collection of data coming from vehicles. This data can be used to improve the vehicles themselves, fleet operations, efficiency, and more. With fine-tuned telemetry parameters, you can obtain relevant data at the right frequency to optimise analytics.

Whether you want to track vehicle location, speed, fuel consumption, or ECU diagnostics, you want to make sure that you are monitoring your fleet with best-in-class performance. One of the most common use cases that justifies the investment in fleet telemetry is predictive maintenance. By analysing data from your fleet, you can predict the need for repairs and schedule maintenance proactively. This helps you reduce the downtime of your fleet and extend the lifetime of your vehicles.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXddYbK-DNnU6KdrVDrAbT9kGzI3ZY8aQfwD1TfJ1lRKF2Jc4SmctrUdLu6itdKGuhlrDC9A5M3TnP17909RLX2HYOnkz-7pnSdBVgXgmA8JyXKAv-N0CBnOaTK8TU-faVtQ1mR1266HGFJ_Y58OGj90y0eZ?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

A different use case relies on driver behaviour monitoring. Although this use case is often frowned upon, from an insurance company’s perspective it can provide a lot of value: analysing this data enables more accurate premiums based on actual vehicle usage and driving behaviour, and can lead to potential cost savings.

Yet another use case that is frequently mentioned when it comes to fleet management is route optimisation. By combining traffic information and geolocation data, it becomes possible to find highly optimised routes, reducing travel time and fuel consumption.

From interoperability to security, managing a large fleet of vehicles comes with its challenges. In fact, integrating data generated by very diverse systems and components from different OEMs can be extremely challenging. It’s important that your solutions abstract that complexity and ensure seamless communication and data exchange.

On the cybersecurity side, having vehicles constantly connected means that threats can appear at any time. Security needs to be applied from the ground up, from the vehicles to the cloud. Moreover, your backend will sometimes handle confidential information. The way you decide to collect, store and analyse these large amounts of data requires advanced frameworks and tools.

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXdc1HXpcOz0YYNeDExtRRG1yerWG0zuSEiaFkBDAktd_u6m9lKDyFrxCMxp6jkuHN_a7RHAGw2Sct6f46ZZcrCSDNMmxJoPAsfq-PcMtae_QkXr0Nxo3Cogh6vaNvxwOG4loyczYf541vJECBZ58lhXXbk?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

Open source solutions offer several advantages for fleet management, including scalability, security, and interoperability. Community-driven development offers continuous improvements and optimisations, while transparent development processes can further improve security and flexibility.

Driving towards excellence by future-proofing fleet management

The automotive industry’s shift to software-driven operations necessitates a deep understanding of interconnected systems. OTA updates and fleet telemetry are at the front lines of this transformation, offering substantial benefits in terms of efficiency, security, and operational excellence. 

By taking advantage of Dedicated Snap Stores, edge computing, and open source solutions, automotive companies can embrace these challenges with confidence, benefiting from the full potential of open source software.

As the industry opens up to this software revolution, stakeholders must understand the intricacies of these complex systems. Failure to do so could lead to compromised security. Our latest white paper aims to address this knowledge gap and empower you to understand the full scope of the possible existing solutions.

If you want to learn more about achieving effective V2X communication, understanding OTA updates, and overcoming fleet management challenges, I recommend you download our white paper. This guide will help you understand the intricacies of these challenging technologies that are pushing the automotive software landscape forward.

To learn more about Canonical and our engagement in automotive: 

Contact Us

Check out our webpage

Watch our webinar with Elektrobit about SDV

Download our whitepaper on V2X (Vehicle-to-Everything)

<noscript> <img alt="" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720/https://lh7-us.googleusercontent.com/docsz/AD_4nXc5xmqNvmPTy5UjNJ8_wGwK2avZdWz6qTYZpcxVOauHCUQgfgCA9LOLcifzqy4AZAjOTx2KzuiRo-7NUAh0vdBqVzmqnDjg8zMihmgQXHfkgluSDpH4ZzoNBnYEU6jphc_rGFDspz881P4BqekvMc1O9GKS?key=BJGWMwy860VQYscskFm5Lg" width="720" /> </noscript>

12 July, 2024 08:00AM

July 11, 2024

hackergotchi for SparkyLinux

SparkyLinux

Zed

There is a new application available for Sparkers: Zed. What is Zed? Installation (Sparky 7 & 8 amd64). License: GNU AGPL/GPL, Apache. Web: github.com/zed-industries/zed …

Source

11 July, 2024 07:13PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Studio: Ubuntu Studio 23.10 Has Reached End-Of-Life (EOL)

As of July 11, 2024, all flavors of Ubuntu 23.10, including Ubuntu Studio 23.10, codenamed “Mantic Minotaur”, have reached end-of-life (EOL). There will be no more updates of any kind, including security updates, for this release of Ubuntu.

If you have not already done so, please upgrade to Ubuntu Studio 24.04 LTS via the instructions provided here. If you do not do so as soon as possible, you will lose the ability to upgrade without additional advanced configuration.
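
As a rough sketch, on a machine that is still running Ubuntu Studio 23.10 the upgrade is normally driven by the standard release upgrader (follow the official instructions linked above for the full procedure):

$ sudo apt update && sudo apt full-upgrade
$ sudo do-release-upgrade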

No single release of any operating system can be supported indefinitely, and Ubuntu Studio is no exception to this rule.

Regular Ubuntu releases, meaning those that are between the Long-Term Support releases, are supported for 9 months and users are expected to upgrade after every release with a 3-month buffer following each release.

Long-Term Support releases are identified by an even-numbered year-of-release and a month-of-release of April (04). Hence, the most recent Long-Term Support release is 24.04 (YY.MM = 2024.April), and the next Long-Term Support release will be 26.04 (2026.April). LTS releases for official Ubuntu flavors are supported for three years (unlike Ubuntu Desktop and Server, which are supported for five years), meaning LTS users are expected to upgrade after every LTS release, with a one-year buffer.

11 July, 2024 03:27PM

Ubuntu Blog: Charmed Kubeflow 1.9 Beta is here: try it out

After releasing a new version of Ubuntu every six months for 20 years, it’s safe to say that we like keeping our traditions. Another of those traditions is our commitment to giving our Kubeflow users early access to the latest version – and that promise still stands. Kubeflow 1.9 is about to go out in a couple of weeks and that only means one thing: Canonical has just released its Charmed Kubeflow beta. Are you ready to try it out? 

If you can put some time aside, we’re looking for data scientists, ML engineers, MLOps experts, creators and AI enthusiasts to take Charmed Kubeflow 1.9 for a ride and share their feedback with us. You can really help us by:

  • Trying it out and letting us know how the experience goes
  • Asking us any questions about the product and how it works
  • Reporting bugs or any issues you have with Charmed Kubeflow 1.9 Beta (and beyond)
  • Giving us improvement suggestions for the product and portfolio  

What’s new in Kubeflow 1.9?

Kubeflow is now going through the CNCF process to graduate from the incubation program. This challenges the community to evolve quickly and work on different aspects of the project:

  • Improving the MLOps platform’s security features
  • Adding new capabilities to the project
  • Centralising communication channels and growing the community

Security as a priority

This release includes the first updates from the security working group. One of the key features that the group announced concerns Network Policies, which control traffic flow at the IP address or port level. They will be enabled as a second security layer for core services, giving users better network visibility and segmentation in line with common enterprise security guidelines.

ML integrations as part of a growing ecosystem

Kubeflow is designed to work in partnership with other ML and data tools. The latest release brings new integrations with leading ML tools and libraries such as BentoML, used for inference, and Ray, used for training LLMs. One long-standing bug that Charmed Kubeflow users reported was related to access to MLflow when it is deployed alongside the MLOps platform. Charmed Kubeflow 1.9 solves this issue and gives users clear guidance on how to use it.

Community growth

As part of CNCF, the community aims to integrate better into the ecosystem and enable new contributors to the project. One of the changes that the upstream community just made was to move to the CNCF Slack channel. Join us there to get in touch with a vibrant community and learn more from some of the industry experts.

We’re going live; join our MLOps tech talk.

Speaking of traditions, you might already know that every beta brings the product engineering team live for a tech talk. This time is no exception, and I’ll be joined by two new faces. Michal Hucko and Orfeas Kourkakis, Software Engineers at Canonical, are ready to talk to Kubeflow users tomorrow, 11 July 2024, at 5 PM CET, about the platform, the latest news, and how the industry is being shaped. Join us live, and you will:

  • Learn about the latest release and how our distribution handles it
  • Discover the key features covered in Charmed Kubeflow 1.9, in upstream and beyond.
  • Understand the differences between the upstream release and Canonical’s Charmed Kubeflow.
  • Get answers to any other question, technical or not, you have about MLOps, open source or Canonical’s portfolio.

 Don’t wait any longer, and add the event to your calendar!

Charmed Kubeflow 1.9 Beta is out. Try it now!

Are you already a Charmed Kubeflow user?

Your job is even easier since you will only have to upgrade to the latest version to try the 1.9 beta. We’ve already prepared a guide with all the steps you need to take. 

Please be mindful that this is not a stable version, so there is always a risk that something might go wrong. Save your work and proceed with caution. If you encounter any difficulties, Canonical’s MLOps team is here to hear your feedback and help you out. Since this is a Beta version, Canonical does not recommend running or upgrading it on any production environment.

Are you new to Charmed Kubeflow?

Now, I can tell you are a real adventurer. Welcome to the MLOps world! Starting with a beta release might result in a few more challenges for you, but it’ll give you the chance to share in the product development and really contribute to the open source world. For all the prerequisites, check out the getting started tutorial.

Once you have installed MicroK8s and Juju, you will need to add a Kubeflow model and then deploy the latest version. Follow the instructions below to get this up and running:

juju deploy kubeflow --channel 1.9/beta --trust
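
For context, here is a rough sketch of the surrounding Juju steps, assuming a local MicroK8s cloud as in the getting started tutorial; the deploy command above goes between the add-model and status steps, and the exact sequence is in the tutorial itself.

juju bootstrap microk8s
juju add-model kubeflow
juju status --watch 5s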

Now, you can go back to the tutorial to finish the configuration of Charmed Kubeflow or read the documentation to learn more.

You tried it out – what do you think?

You are part of something really important for us. As with any other open source project, joining a beta gives you a glimpse of the latest innovations, and it also gives you the chance to shape the product. Let’s make Charmed Kubeflow 1.9 better together.

11 July, 2024 11:52AM

hackergotchi for GreenboneOS

GreenboneOS

Vulnerability scanner Notus supports Amazon Linux

Most virtual servers in the Amazon Elastic Compute Cloud (EC2) run a version of Linux that has been specially customised for the needs of the cloud. For a few weeks now, the latest generation of scanners from Greenbone has also been available for the Amazon Web Services operating system. Over 1,900 additional, customised tests for the latest versions of Amazon Linux (Amazon Linux 2 and Amazon Linux 2023) have been integrated in recent months, explains Julio Saldana, Product Owner at Greenbone.

Significantly better performance thanks to Notus

Greenbone has been supplementing its vulnerability management with the Notus scan engine since 2022. The innovations in the architecture are primarily aimed at significantly increasing the performance of the security checks. Described as a “milestone” by Greenbone CIO Elmar Geese, the new scanner generation works in two parts: a generator queries the extensive software version data from the company’s servers and saves it in a handy JSON format. Because this no longer happens at runtime, but in the background, the actual scanner (the second part of Notus) can simply read and synchronise the data from the JSON files in parallel. Waiting times are eliminated. “This is much more efficient, requires fewer processes, less overhead and less memory,” explain the Greenbone developers.

Amazon Linux

Amazon Linux is a fork of Red Hat Linux sources that Amazon has been using and customising since 2011 to meet the needs of its cloud customers. It is largely binary-compatible with Red Hat, initially based on Fedora and later on CentOS. Amazon Linux was followed by Amazon Linux 2, and the latest version is now available as Amazon Linux 2023. The manufacturer plans to release a new version every two years. The version history of the official documentation also includes a feature comparison, as the differences are significant: Amazon Linux 2023 is the first version to also use Systemd, for example. Greenbone’s vulnerability scan was also available on Amazon Linux from the very beginning.

11 July, 2024 10:38AM by Markus Feilner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Bringing Real-time Ubuntu to Amazon EKS Anywhere customers with Ubuntu Pro

Earlier this year at Mobile World Congress (MWC) 2024 in Barcelona, Canonical announced the availability of Real-time Ubuntu on Amazon Elastic Kubernetes Services Anywhere (EKS Anywhere). With this technology enablement, a telecom operator can confidently run its Open Radio Access Network (RAN) software workloads on Amazon EKS Anywhere, thanks to the necessary support for ultra low-latency data processing with high reliability in Real-time Ubuntu.

The enablement work was part of a collaboration between partner companies to make Amazon EKS Anywhere an ideal platform for Open RAN workloads. At the MWC event, leaders from these companies discussed what has been achieved and what the future holds. These discussions focused on the roadmap to achieving successful Open RAN deployments on Amazon EKS Anywhere. The panel included representatives from NTT DOCOMO, Qualcomm, NEC and Canonical, and was moderated by AWS. In this blog, we’ll run through how Canonical engineers helped bring this work to fruition, and the advantages of having Real-time Ubuntu on Amazon EKS Anywhere.

Why Open RAN matters in telecom

Telecom operators constantly seek innovative ways to launch new services that generate revenue while simultaneously reducing operational costs. With the right infrastructure solutions, Open RAN offers operators an ecosystem where technology providers can deliver cost-effective solutions that are interoperable through open and standardised APIs. This technology enables telecom operators to run their RAN systems as disaggregated and distributed components across their edge infrastructure. Implementing virtual RAN functions as software enhances flexibility and operational efficiency in network operations.

What is a real-time kernel?

A real-time OS kernel makes it possible to deploy software applications that require bounded, low-latency behaviour from the operating system kernel while the application is executing.

For telecom applications with strict latency requirements, a real-time OS kernel is now essential. It significantly enhances performance by efficiently processing and delivering information between applications, external systems, and devices.

How Amazon EKS Anywhere benefits from Real-time Ubuntu

RAN software workloads are highly sensitive to delays in information processing and delivery. This makes it essential to integrate real-time kernel capabilities into Amazon EKS Anywhere for telecom operators like NTT DOCOMO. At their edge locations, these capabilities are crucial for their Open RAN deployments, enabling virtual RAN functions to operate effectively. In fact, beyond virtual RAN workloads, a real-time OS kernel is crucial for enabling any time-sensitive business application on cloud-native infrastructure like Amazon EKS Anywhere, a need that is addressed by Canonical’s Real-time Ubuntu.

The enablement journey

During the panel at Mobile World Congress, Arno Van Huysteen (Chief Technology Officer for Telco at Canonical) highlighted the collaborative work of the partner companies and their ability to create innovative solutions that deliver what customers truly need. For example, Canonical engineers enabled out-of-tree kernel drivers in an integrated system, and our approach ensured the process was as smooth as possible.

Canonical engineers provided precise guidelines to AWS engineers on how to fine-tune the Ubuntu real-time kernel for the set of necessary boot parameters and the bindings between processes and specific CPUs, to achieve superior performance in Amazon EKS Anywhere operational environments. This process is publicly documented to assist customers and partners in executing similar workloads that necessitate a real-time kernel. 
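
As an illustration only (not the exact values used in this work), this kind of tuning usually means isolating a set of CPUs for the latency-sensitive workload through kernel boot parameters, for example in /etc/default/grub, and then regenerating the bootloader configuration:

# /etc/default/grub -- illustrative values: CPUs 2-5 reserved for the real-time workload
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5 irqaffinity=0-1"

$ sudo update-grub && sudo reboot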

In addition to optimising the operating system, Canonical engineers also worked with Amazon EKS Anywhere, a Kubernetes distribution, to port any necessary changes and integrations into Ubuntu Pro, Canonical’s subscription service for open source software security. With Ubuntu Pro, Ubuntu machines receive Expanded Security Maintenance (ESM) for over 25,000 software packages. 

Extensive testing was conducted across multiple environments where Amazon EKS Anywhere can operate to ensure smooth operations in various deployment scenarios. With Amazon EKS Anywhere seamlessly integrated with Ubuntu Pro images, operators can now take advantage of real-time processing capabilities, combined with the most comprehensive security and compliance support for open source software in Linux. This makes Ubuntu Pro on Amazon EKS Anywhere an excellent platform for telecom operators seeking secure systems with high performance. It will promptly facilitate Open RAN deployment for telecom operators, including NTT Docomo, who have chosen EKS Anywhere with Real-time Ubuntu.

<noscript> <img alt="" height="167" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_248,h_167/https://ubuntu.com/wp-content/uploads/06f4/contact-us.png" width="248" /> </noscript>

Learn more about Canonical’s solutions for telco

To discover more about Real-time Ubuntu and its advantages for telecommunications networks and applications, visit our blog. For additional details about our telecommunications services, please visit https://ubuntu.com/telco.

11 July, 2024 09:00AM

hackergotchi for Deepin

Deepin

deepin V23 Successfully Adapted to EIC7700X !

Recently, the deepin community announced the successful adaptation of the Eswin Computing EIC7700X, achieving stable operation of the RISC-V version of deepin V23. This move once again confirms deepin's commitment to the RISC-V ecosystem and its strength within it, and also opens the door to a brand new desktop experience for developers and users.   EIC7700X: The EIC7700X is a powerful RISC-V intelligent computing System on Chip (SoC), equipped with a 64-bit out-of-order execution RISC-V processor and a proprietary high-efficiency Neural Processing Unit (NPU), offering a peak computing power of 19.95 TOPS, with the dual-die version reaching up to 39.9 TOPS. It ...Read more

11 July, 2024 02:38AM by aida

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How often do you apply security patches on Linux?

<noscript> <img alt="" height="286" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_720,h_286/https://ubuntu.com/wp-content/uploads/e15a/blog.jpg" width="720" /> </noscript>

Regular patching is essential for maintaining a secure environment, but there is no one-size-fits-all approach to keeping your Linux estate safe. So how do you balance the frequency of updates with operational stability? There are strategies for enabling security patching automations in a compliant and safe way, even for the most restrictive and regulated environments. Understanding Canonical’s release schedules for software updates and knowing the security patching coverage windows are essential pieces of information when defining a security patching strategy. I recently hosted a live webinar and security Q&A where I explained how to minimise your patching cadence, or to minimise the time during which unpatched vulnerabilities can be exploited. In this article, I’ll summarise my main points from that webinar and explain the most vital considerations for scheduling updates. 

Security patching for the Linux kernel

There are two types of kernels available in Ubuntu, and there are two ways these kernels can be packaged. The two types are the general availability (GA) kernel and the variant kernel. The two package types are Debian packages and snap packages. The GA kernel is the kernel version included on day 1 of an Ubuntu LTS release. Every Ubuntu LTS release receives a point release update every February and August, and typically there are 5 point releases. Ubuntu Server defaults to staying on the GA kernel for the lifetime of the Ubuntu Pro coverage for that release. Ubuntu Desktop defaults to upgrading the kernel from the second point release onwards to a later version of the upstream kernel, referred to as the hardware enablement (HWE) kernel.

Security coverage for the GA kernel extends for the lifetime of the Ubuntu Pro coverage for that release. Security coverage for the HWE kernel extends for the lifetime of the HWE kernel, which is 6 months, plus another 3 months. This additional 3 months of security coverage beyond the end of life of the HWE kernel gives users a window to perform an upgrade to the next HWE kernel.
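
A quick way to check which kernel flavour a machine is actually tracking is to compare the running kernel with the installed metapackages. This is only a sketch for Ubuntu 24.04; the HWE metapackage name follows the usual linux-generic-hwe-<release> pattern, so adjust it for your release.

$ uname -r
$ apt-cache policy linux-generic linux-generic-hwe-24.04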

Is a reboot required to apply a security patch?

When the kernel package is updated, the Ubuntu instance must be rebooted to load the patched kernel into memory. When the general availability kernel is installed as a snap, updates to this snap package will reboot the device. When the general availability kernel is installed as a deb package, a reboot is not automatically performed, but it must be performed to apply the security patch.

There are some other packages in Ubuntu that also require a reboot when they are updated. Any security updates to glibc, libc, CPU microcode, and the GRUB bootloader will require a reboot to take effect. Software that runs as a service will require those services to be restarted when security patches are applied; examples of such software include ssh and web servers. Other software that doesn’t run as a service, and instead runs on demand, doesn’t require a system reboot or service restart for the patch to take effect.
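
On Ubuntu, an update that needs a reboot leaves a marker file behind, so a minimal check looks like the following; the package name in the second file is illustrative.

$ cat /var/run/reboot-required
*** System restart required ***
$ cat /var/run/reboot-required.pkgs
linux-image-6.8.0-39-generic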

The Livepatch service will apply high and critical security patches to the running kernel, in memory. It will not upgrade the installed kernel package, so rebooting the machine and clearing its memory would remove the Livepatch-applied security patches. Livepatch provides 13 months of security patches for a GA kernel, and 9 months of security patches for an HWE kernel. After these 13- or 9-month windows, the kernel package must be upgraded and the Ubuntu instance must be rebooted for continued security coverage through Livepatch.
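
If you want to see what Livepatch has applied on a given machine, the client reports its state from the command line; a minimal sketch, assuming the machine is attached to Ubuntu Pro:

$ sudo pro enable livepatch
$ canonical-livepatch status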

3 approaches for security patching

Canonical provides an array of tools and services, ranging from Livepatch and Landscape to snaps and command-line utilities such as unattended-upgrade. These tools and services can be used together, or selectively, and they provide security patching automation capabilities in Ubuntu. You have the flexibility to use these tools to achieve a variety of different security patching goals on desktop, server, and IoT devices. Given the assumption that nobody wants to run software that isn’t being secured or supported by its maintainers, you may prefer one of the following security patching approaches:

  1. Delay security patching for as long as possible, to achieve maximum procrastination.
  2. Perform security patching as infrequently as possible, but on a predefined, regularly recurring schedule.
  3. Minimise the window of time during which systems can be exploited by a vulnerability, by reducing the time between the release of a security patch and its installation.

Regardless of which approach is used, it’s worth noting that unscheduled security maintenance windows may be required, to patch security vulnerabilities in glibc, libc, or CPU microcode.
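
Whichever approach you pick, the unattended-upgrade utility mentioned above is the usual building block for automating the package side of it. A minimal sketch that applies security updates daily on Ubuntu (the file shown below ships with the stock configuration):

$ sudo apt install unattended-upgrades
$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";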

Security Patching Automations on Linux

The best practices for scheduling security patching automations webinar dives deeper into the rationale behind the implementation details of these 3 security patching approaches.

Access the Webinar

Security patching for procrastinators

With Livepatch enabled, Ubuntu LTS instances that stay on the GA kernel need to be upgraded and rebooted every 13 months. When running on an HWE kernel, the first upgrade and reboot must happen after 13 months, in May. Another upgrade and reboot must happen after 6 months, in November. After another upgrade and reboot activity in the subsequent May and November, the HWE kernel will be promoted to the next GA kernel. The GA kernel will need an upgrade and reboot every 13 months.

Many months may pass between upgrades and reboots when procrastinating in this way, which maximises the amount of time that medium and lower severity kernel vulnerabilities remain unpatched.

Security patching on a recurring schedule

If a GA kernel is being used, an annual patching cadence can work. Taking the GA kernel’s 13-month Livepatch security coverage window into account, installing security patches and rebooting every May should keep the kernel and other packages on the machine in secure shape.

Assuming the HWE kernel is being used, an annual patching cadence cannot be used to apply security patches in the same month year after year. This approach would result in a lapse in security patching coverage for the kernel, for a period of time. Excluding the third year of a Ubuntu LTS release, when the fourth point release is published, it is possible to apply security patches once a year when using the HWE kernel.

A bi-annual security patching cadence every May and September would provide peace of mind, regardless of your choice of kernel. This security patching cadence takes Canonical’s release schedule and kernel security coverage windows into account. There is no lapse in security coverage when scheduling security maintenance windows bi-annually, every May and September.

Security patching for the smallest exploit footprint

More frequent security maintenance windows are obviously better. A very common schedule is monthly: there is no chance of running a stale kernel when upgrading and rebooting monthly. Weekly security patching and reboot cadences are recommended, and a daily security patching regimen is applauded. Canonical’s security patches are published as soon as they are available, and it is recommended to take advantage of them as they are released. Running your workloads in high availability, enabling security patching automations, and rolling out phased upgrades and reboots for groups of machines on a daily basis provides the strongest security posture.

Best practices for enabling security patching automations

The best practices for scheduling security patching automations webinar answers all the big questions:

  • What patching automation options are available?
  • Where do security patches come from, and where can they be distributed from?
  • How are security patches distributed, and how should they be applied?
  • How does Canonical’s rolling kernel strategy extend the security coverage window for certain kernels?
  • When should security maintenance events be scheduled?

11 July, 2024 12:01AM

Podcast Ubuntu Portugal: E307 Dégradés De Ultravioletas

A Raspberry Pi is good for many things: testing Plasma and XFCE, working all day while listening to podcasts, knowing when to step outside to catch a melanoma - and much more! Miguel keeps savouring desktop environments on a Pi 5 and playing with automations and colour gradients in Home Assistant; but is it worth spending money on more gadgets? Meh. Diogo had a relapse in his number of open tabs; updated his Ubuntu right at the deadline; destroyed his Thunderbird; shamelessly promoted the Other Podcast… and is setting himself up for trouble with the Роскомнадзор of the Russian Federation and with the People's Republic of China. Will we see him again after he goes on holiday?…

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like. If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and the open source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing to allow other types of use; contact us for validation and authorisation.

11 July, 2024 12:00AM

July 10, 2024

hackergotchi for Tails

Tails

Tails report for June 2024

Highlights

  • The European summer is here, and with it are summer holidays! We took some time off for some quality rest and recreation. How we vacationed: music festivals in Milan, hiking in the Alps, the Sierra Nevada, and the High Sierra, and biking in the Pyrenees. We do love the mountains! ⛰️

  • But before we went away for some quality R&R, we continued making it easier for Tails users to recover from the most common failure modes without requiring technical expertise:

    • We finalized a design to detect corruption of the Persistent Storage on a Tails USB stick, report it to users, and repair it.

    • We made incremental progress towards warning Tails users when they have low available memory. We don't detect all the problematic cases yet but, when we do, GNOME gently notifies the user.

  • We have been working on a new user journey for backups. In June, we finished designing all interfaces and solicited feedback. The proposal was well received by 2 volunteers who have contributed code related to backups.

Releases

📢 We released Tails 6.4!

In Tails 6.4, we brought:

  • even stronger cryptographic protections, as Tails now stores a random seed on the Tails USB stick
  • fixes to make unlocking the Persistent Storage smoother
  • more reliable installation of Additional Software, due to a switch to using HTTPS addresses instead of onion addresses for the Debian and Tails APT repositories

To know more, check out the Tails 6.4 release notes and the changelog.

Metrics

Tails was started more than 775,377 times this month. That's a daily average of over 25,946 boots.

10 July, 2024 04:43PM

July 09, 2024

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.1.3-11 Released

This release of Clonezilla live (3.1.3-11) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.1.2-22

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2024/Jun/28).
  • Linux kernel was updated to 6.9.7-1.
  • Partclone was updated to 0.3.31.
  • Removed package cpufrequtils from the package list of the live system; it's no longer in the Debian repository.
  • Removed thin-provisioning-tools from the package list of Clonezilla live because it breaks dependencies.
  • Added package yq, and removed package deborphan, in the live system.
  • Merged pull request #31 from iamzhaohongxin/patch-1. Update zh_CN.UTF-8.
  • Language file ca_ES was updated. Thanks to René Mérou.
  • Language file de_DE was updated. Thanks to Savi G and Michael Vinzenz.
  • Package live-boot was updated to 1:20240525.drbl1.
  • Package live-config was updated to 11.0.5+drbl3.
  • Set a bigger scrollback for screen in the live system, to make debugging easier.

BUG FIXES

09 July, 2024 02:46AM by Steven Shiau

July 08, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 847

Welcome to the Ubuntu Weekly Newsletter, Issue 847 for the week of June 30 – July 6, 2024. The full version of this issue is available here.

In this issue we cover:

  • Developer Membership Board Election Results
  • Ubuntu Stats
  • Hot in Support
  • ¡Teaser Trailer de UbuCon Latinoamérica 2024!
  • LoCo Events
  • Checkbox 4.0.0 stable release
  • Thunderbird beta/core24 snap available for testing
  • Upstream release of cloud-init 24.2
  • Ubuntu Cloud News
  • Canonical News
  • In the Press
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


08 July, 2024 10:29PM