May 21, 2024

hackergotchi for Elive

Elive

Elive 3.8.42 released

The Elive Team is pleased to announce the release of 3.8.42. This new version includes:

  • SecureBoot: Enhanced support and compatibility.
  • Installer: Many bug fixes and enhancements for the installation.
  • Debian: Enhanced compatibility, e.g., when installing external software like Tailscale.
  • Nvidia Installer: Bug fixes and support for the latest video cards.
  • Persistence: Multiple fixes and improvements have been made.
  • Emoji: Fully supported across the entire OS.
  • Nerdfonts: Included by default.
  • WiFi: Significant improvements in driver compatibility for Broadcom cards, along with fixes for WPA3 connections.
  • Browser: Default homepage

Read more on the Elive Linux website.

21 May, 2024 01:10AM by Thanatermesis

May 20, 2024

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 840

Welcome to the Ubuntu Weekly Newsletter, Issue 840 for the week of May 12 – 18, 2024. The full version of this issue is available here.

In this issue we cover:

  • Philipp Kewisch: Time to set the sails for a new journey
  • Welcome New Members and Developers
  • Ubuntu Stats
  • Hot in Support
  • UbuCon Korea 2024 – CFP deadline extended by May 26
  • Mini UbuCon Malaysia 2024
  • LoCo Events
  • Introducing the Enhanced KubuQA: Revolutionizing ISO Testing Across Ubuntu Flavors
  • Social Gatherings
  • Mir release 2.17.0
  • Anbox Cloud 1.22.0 has been released
  • Ubuntu Desktop’s 24.10 Dev Cycle – The Roadmap
  • Other Community News
  • Ubuntu Cloud News
  • Canonical News
  • In the Blogosphere
  • Other Articles of Interest
  • Featured Audio and Video
  • Meeting Reports
  • Upcoming Meetings and Events
  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
  • And much more!

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

20 May, 2024 10:43PM

Faizul "Piju" 9M2PJU: Gaming on Ubuntu: Current Landscape and Future Prospects

Introduction

Ubuntu, a popular Linux distribution, has seen significant growth in the gaming community over the past few years. Traditionally, Windows has dominated the gaming market due to its extensive library of games and superior hardware support. However, Ubuntu, along with other Linux distributions, is gradually becoming a viable alternative for gamers. This article explores the current gaming culture on Ubuntu, compares it to Windows, evaluates open-source software alternatives, discusses hardware support, and projects the future of gaming on Linux.

Current Gaming Culture on Ubuntu

Gaming on Ubuntu has evolved considerably, thanks to various developments in software, hardware support, and community engagement. Several factors contribute to the growing acceptance of Ubuntu as a gaming platform:

  1. Steam on Linux: The introduction of Steam for Linux in 2013 was a significant milestone. Valve, the company behind Steam, has actively supported Linux, making a substantial number of games available on the platform.
  2. Proton and Wine: Proton, a compatibility layer developed by Valve, allows Windows games to run on Linux. Built on Wine, Proton has improved compatibility and performance for many popular titles, making Linux gaming more accessible.
  3. Native Game Development: Some developers are creating native Linux versions of their games, recognizing the growing demand. Titles like “0 A.D.” and “SuperTuxKart” showcase the potential for high-quality gaming experiences on Linux.
  4. Community and Support: The Linux gaming community is active and supportive, with forums, subreddits, and websites dedicated to helping users optimize their gaming setups on Ubuntu.

Can Ubuntu Compete with Windows?

While Ubuntu has made significant strides, Windows still holds several advantages in the gaming world. However, Ubuntu can compete in specific areas:

  1. Game Library: Windows boasts a more extensive game library, including many AAA titles. However, Ubuntu’s library is growing, especially with the help of Proton and native game development.
  2. Performance: While Windows typically offers better performance due to optimized drivers and broader developer support, Ubuntu’s performance has improved. With advancements in Proton and better hardware drivers, many games run smoothly on Ubuntu.
  3. Cost and Security: Ubuntu, being free and open-source, offers cost savings and enhanced security. Gamers who prioritize these aspects may prefer Ubuntu over Windows.
  4. Customization and Control: Ubuntu offers greater customization and control over the gaming environment, appealing to advanced users who enjoy tweaking their systems for optimal performance.

Open Source Software Alternatives for Gaming

The open-source community has developed various software alternatives to enhance the gaming experience on Ubuntu:

  1. Lutris: A gaming platform that manages, installs, and optimizes games on Linux. Lutris supports games from various sources, including Steam, GOG, and Humble Bundle.
  2. PlayOnLinux: A graphical frontend for Wine, PlayOnLinux simplifies the installation and management of Windows games and software on Linux.
  3. RetroArch: An open-source frontend for emulators, game engines, and media players, allowing users to play retro games from various consoles on Ubuntu.
  4. Open Source Games: Numerous high-quality open-source games are available, such as “Battle for Wesnoth,” “0 A.D.,” and “Xonotic,” showcasing the potential for native Linux gaming.

Hardware Support

Hardware support has historically been a challenge for Linux gaming, but significant improvements have been made:

  1. Graphics Drivers: Both AMD and NVIDIA have improved their Linux driver support. AMD’s open-source drivers are highly regarded, while NVIDIA’s proprietary drivers offer solid performance.
  2. Peripheral Compatibility: Many gaming peripherals, such as controllers, keyboards, and mice, are now compatible with Ubuntu, either natively or with community-developed drivers.
  3. Performance Tools: Tools like MangoHud (for monitoring) and GameMode (for performance optimization) enhance the gaming experience on Ubuntu by providing real-time performance data and optimizing system resources for gaming.
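
As a concrete example, both tools can be installed from the Ubuntu repositories and enabled per game through Steam’s launch options (package names and launch-option syntax may vary slightly between releases):

$ sudo apt install gamemode mangohud
# then, in Steam, set the game's Launch Options to:
#   gamemoderun mangohud %command%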

Future of Gaming on Linux

The future of gaming on Ubuntu and Linux looks promising, with several trends indicating continued growth:

  1. Increased Developer Support: As the Linux gaming community grows, more developers are likely to support the platform natively, reducing reliance on compatibility layers like Proton.
  2. Advancements in Proton and Wine: Continued development of Proton and Wine will improve compatibility and performance for Windows games on Linux, narrowing the gap with Windows.
  3. Cloud Gaming: Services like NVIDIA GeForce Now (and, until its shutdown in early 2023, Google Stadia) are platform-agnostic, allowing Ubuntu users to play high-quality games via the cloud, bypassing hardware and compatibility issues.
  4. Valve’s Steam Deck: Valve’s Steam Deck, a handheld gaming device running SteamOS (a Linux-based OS), has already given Linux gaming a substantial boost by encouraging developers to ensure their games run well on Linux.
  5. Community and Open Source Projects: The dedicated Linux gaming community will continue to drive innovation and support, creating and maintaining tools and resources that enhance the gaming experience on Ubuntu.

Conclusion

While Ubuntu still faces challenges in competing directly with Windows for gaming, it has made significant progress. With a growing library of compatible games, improved hardware support, and active community engagement, Ubuntu is becoming a viable platform for gamers. The future of gaming on Ubuntu looks bright, with continued advancements in software, hardware, and cloud gaming technologies poised to further enhance the gaming experience. As more developers recognize the potential of Linux, Ubuntu could become an increasingly attractive option for gamers worldwide.

The post Gaming on Ubuntu: Current Landscape and Future Prospects appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

20 May, 2024 04:34PM

Faizul "Piju" 9M2PJU: Should the Malaysian Government Adopt Ubuntu as Its Operating System?

Adopting Ubuntu, a popular Linux-based operating system, across Malaysian government computers could bring numerous benefits such as cost savings, enhanced security, and greater control over IT infrastructure. However, this transition would also pose several challenges, including compatibility issues, training needs, and resistance to change. This article explores the potential benefits and obstacles, evaluates current software usage, and assesses the feasibility of this transition.

Benefits of Adopting Ubuntu

  1. Cost Savings:
    • Licensing Fees: Ubuntu is free to use, which can save substantial costs on operating system licenses and associated proprietary software.
    • Hardware Longevity: Ubuntu can run efficiently on older hardware, potentially delaying the need for expensive hardware upgrades.
  2. Enhanced Security:
    • Reduced Malware and Viruses: Linux systems, including Ubuntu, are less susceptible to malware and viruses compared to Windows. This could lead to fewer security breaches and lower maintenance costs.
    • Regular Updates: The open-source community regularly updates Ubuntu, ensuring that it remains secure and up-to-date.
  3. Control and Customization:
    • Open-Source Flexibility: Ubuntu’s open-source nature allows for extensive customization to meet specific governmental needs. This flexibility can lead to better integration with existing systems and processes.
  4. Support for Open Standards:
    • Interoperability: Ubuntu supports open standards, which can improve interoperability between different government systems and ensure long-term accessibility of data.

Obstacles to Adopting Ubuntu

  1. Compatibility Issues:
    • Proprietary Software: Many government agencies rely on proprietary software that may not have direct equivalents in the open-source world or may not run on Linux without significant modifications.
    • Specialized Applications: Certain specialized applications used in various departments might not be available for Linux or may require extensive reconfiguration.
  2. Training and Adaptation:
    • Learning Curve: Government servants accustomed to Windows may find it challenging to transition to Ubuntu. This necessitates comprehensive training programs.
    • User Resistance: There might be resistance to change among employees who are comfortable with the current systems.
  3. Technical Support and Maintenance:
    • Availability of Expertise: Ensuring there is sufficient technical expertise to support and maintain Ubuntu systems can be a challenge. This might require hiring or training additional IT staff.
    • Vendor Support: Unlike commercial software, open-source solutions may not offer dedicated support. The government would need to rely on community support or third-party providers.

Current Software Usage and Open-Source Alternatives

Office Suites
  • Current Use: Microsoft Office (Word, Excel, PowerPoint)
  • Open-Source Alternatives: LibreOffice, OpenOffice
Email and Calendaring
  • Current Use: Microsoft Outlook
  • Open-Source Alternatives: Thunderbird with Lightning, Evolution
Web Browsing
  • Current Use: Google Chrome, Microsoft Edge
  • Open-Source Alternatives: Mozilla Firefox, Chromium
Database Management
  • Current Use: Microsoft SQL Server, Oracle Database
  • Open-Source Alternatives: PostgreSQL, MySQL, MariaDB
Graphic Design and Multimedia
  • Current Use: Adobe Photoshop, Adobe Illustrator
  • Open-Source Alternatives: GIMP, Inkscape
Specialized Government Software

Many government agencies use proprietary and often custom-built software for various functions, from finance and human resources to public service applications. Replacing or adapting these for Ubuntu could be a significant challenge.

  • Tax Systems: Proprietary tax management software might need to be replaced or adapted with open-source solutions like OpenTaxSolver or integrated web-based solutions.
  • Document Management: Systems like SharePoint would need alternatives like Alfresco or Nextcloud.
  • ERP Systems: Proprietary ERP systems could be replaced with open-source ERP solutions like Odoo or ERPNext, though these transitions could be complex and require customization.

Cost Implications

Initial Investment
  • Training: Significant investment in training government staff to use Ubuntu and its associated software.
  • Migration: Costs related to migrating data and ensuring compatibility of existing systems and documents.
Long-Term Savings
  • Licensing: Elimination of licensing fees for the operating system and associated proprietary software.
  • Maintenance: Potential reduction in maintenance costs due to lower susceptibility to viruses and malware.

Training and Adaptation for Government Servants

  1. Comprehensive Training Programs:
    • Basic Training: Courses to cover the basics of using Ubuntu, including navigation, file management, and using common applications.
    • Advanced Training: Specialized training for IT staff and power users on system administration, troubleshooting, and customization.
  2. Gradual Transition:
    • Pilot Programs: Implementing Ubuntu in a few departments initially to gather feedback and refine the transition process.
    • Phased Rollout: Gradually expanding the use of Ubuntu across departments to ensure a smooth transition and allow time for adaptation.
  3. Support Systems:
    • Help Desks: Establishing dedicated help desks to assist with the transition and ongoing use of Ubuntu.
    • Online Resources: Providing access to online tutorials, forums, and documentation to support self-directed learning.

Feasibility and Conclusion

The Malaysian government could feasibly adopt Ubuntu, but the transition would require careful planning and execution. The benefits of cost savings, enhanced security, and greater control are compelling, but the obstacles of compatibility, training, and resistance to change must be addressed.

By leveraging pilot programs, comprehensive training, and gradual implementation, the government can mitigate these challenges. The shift to Ubuntu represents not just a technological change but also a cultural one, necessitating strong leadership and clear communication of the benefits to all stakeholders.

Ultimately, while the transition to Ubuntu could lead to significant long-term benefits, it must be managed strategically to ensure success and minimize disruption to government operations. With the right approach, Malaysia could set a precedent for other countries considering similar transitions to open-source solutions.

The post Should the Malaysian Government Adopt Ubuntu as Its Operating System? appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

20 May, 2024 04:29PM

Faizul "Piju" 9M2PJU: Should Malaysian Preschools, Schools, Colleges, and Universities Adopt Edubuntu?

Adopting Edubuntu in the Malaysian education system could have significant implications for the quality of education, operational costs, and overall accessibility of educational resources. This detailed exploration examines the potential impacts, migration processes, cost implications, and the feasibility of adopting Edubuntu across educational institutions in Malaysia.

What is Edubuntu?

Edubuntu is a Linux-based operating system tailored for educational environments. It is built on Ubuntu and includes a variety of pre-installed educational software, making it a comprehensive platform for teaching and learning. The system is designed to be user-friendly and accessible, even for users with minimal technical knowledge.

Impact of Adopting Edubuntu

Enhanced Educational Experience
  1. Broad Range of Educational Tools: Edubuntu comes with numerous pre-installed applications suitable for different educational levels. These include GCompris for preschoolers, Tux Paint for creative development, and advanced tools like Stellarium (astronomy software) and Geogebra (mathematics).
  2. Customization and Flexibility: Teachers can customize Edubuntu to suit their curriculum needs, adding or removing software as necessary. This flexibility ensures that the learning environment is always aligned with educational goals.
  3. Open Source Advantages: Being open-source, Edubuntu allows students to explore and understand the underlying technology, fostering a deeper understanding of computing principles. This is particularly beneficial for tertiary education students studying computer science or IT.
Cost Reduction
  1. License Fees: Unlike proprietary software, Edubuntu is free. This can significantly reduce the costs associated with software licensing, especially for large institutions like universities and colleges.
  2. Hardware Requirements: Edubuntu can run on older hardware, which means schools and colleges can extend the life of their existing computers, delaying the need for expensive upgrades.
Accessibility and Inclusivity
  1. Low-Cost Access: By using free and open-source software, educational institutions can make technology more accessible to students from all economic backgrounds.
  2. Language Support: Edubuntu supports multiple languages, which can be advantageous in a multilingual country like Malaysia. This support includes Bahasa Malaysia, Mandarin, Tamil, and other regional languages.
Security and Stability
  1. Enhanced Security: Linux-based systems are generally more secure against malware and viruses compared to their proprietary counterparts. This can reduce downtime and maintenance costs associated with security issues.
  2. Regular Updates: The open-source community actively maintains and updates Edubuntu, ensuring that the system remains secure and up-to-date with the latest educational software.

Migration from Existing Operating Systems

Transitioning from current systems to Edubuntu requires careful planning and execution. The process can be broadly divided into the following steps:

A. Assessment and Planning:

    • Evaluate the current IT infrastructure and determine the compatibility of existing hardware with Edubuntu.
    • Identify key software requirements and find suitable replacements within the Edubuntu ecosystem.

B. Pilot Testing:

    • Implement Edubuntu in a small number of classrooms or computer labs to test compatibility and performance.
    • Gather feedback from teachers and students to address any initial challenges.

C. Training and Support:

    • Conduct training sessions for teachers and IT staff to familiarize them with Edubuntu.
    • Provide ongoing technical support to ensure a smooth transition.

D. Full Deployment:

    • Gradually roll out Edubuntu across the institution, starting with lower-risk environments and progressively moving to critical systems.
    • Monitor the implementation and make adjustments as necessary based on feedback and performance metrics.

Cost Implications

Initial Investment

While the software itself is free, there are costs associated with the transition:

  1. Training: Investment in training teachers and IT staff to use and support Edubuntu.
  2. Pilot Testing: Costs related to setting up and running pilot tests.
  3. Technical Support: Initial increased demand for technical support as the transition takes place.

Long-term Savings

  1. Software Licenses: Significant savings on licensing fees for operating systems and educational software.
  2. Hardware: Potential savings from extending the life of existing hardware.
  3. Maintenance: Reduced costs associated with malware, viruses, and system failures.

Software Availability and Replacement

Edubuntu offers a wide array of educational and productivity software that can replace commonly used proprietary applications:

A. Office Suites:

    • Microsoft Office can be replaced with LibreOffice, which includes a word processor, spreadsheet, and presentation software.

B. Educational Software:

    • GCompris for early education activities.
    • Tux Paint and Tux Math for creative and mathematical development.
    • Stellarium and Celestia for astronomy.
    • Geogebra for mathematics.

C. Development Tools:

    • Scratch for programming.
    • Eclipse and NetBeans for advanced coding projects.

D. Multimedia:

    • Audacity for audio editing.
    • Kdenlive for video editing.
    • GIMP as an alternative to Adobe Photoshop.
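
On a standard Ubuntu installation, most of these applications can be pulled in from the official repositories; Edubuntu itself preinstalls many of them. A minimal sketch (package names reflect recent Ubuntu releases and may differ between versions):

$ sudo apt update
$ sudo apt install gcompris-qt tuxpaint tuxmath stellarium libreoffice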

Feasibility for Malaysia

Malaysia has a robust and evolving education system, making it feasible to adopt Edubuntu. However, there are several considerations:

  1. Government and Policy Support: Strong backing from the government and educational authorities can facilitate the transition. Policies supporting the use of open-source software in education can provide the necessary impetus.
  2. Infrastructure: Ensuring that schools and colleges have the necessary IT infrastructure to support Edubuntu is crucial. This includes reliable internet access, adequate hardware, and technical support.
  3. Training Programs: Developing comprehensive training programs for educators and IT staff to ensure they are comfortable using and supporting Edubuntu.
  4. Community and Industry Involvement: Collaborating with the open-source community and industry experts can provide additional resources and support for the transition.

Conclusion

Adopting Edubuntu in Malaysian preschools, schools, colleges, and universities could significantly enhance the educational experience, reduce costs, and increase accessibility to quality education. While the transition requires careful planning and investment in training and support, the long-term benefits could outweigh the initial challenges. With strong government support, adequate infrastructure, and community involvement, Malaysia is well-positioned to embrace this transformative change in its education system.

The post Should Malaysian Preschools, Schools, Colleges, and Universities Adopt Edubuntu? appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

20 May, 2024 04:23PM

                  Aaron Rainbolt: OR...

                  Contrary to what you may be thinking, this is not a tale of an inexperienced coder pretending to know what they’re doing. I have something even better for you.

It all begins in the dead of night, at my workplace. In front of me is a typical programmer’s desk - two computers, three monitors (one of which isn’t even plugged in), a mess of storage drives, SD cards, 2FA keys, and an arbitrary RPi 4, along with a host of items that most certainly don’t belong on my desk, and a tangle of cables that would give even a rat a migraine. My dev laptop is sitting idle on the desk, while I stare intently at the screen of a system running a battery of software tests. On that screen are the logs of a failed script run.

                  Generally when this particular script fails, it gives me some indication as to what went wrong. There are thorough error catching measures (or so I thought) throughout the code, so that if anything goes wrong, I know what went wrong and where. This time though, I’m greeted by something like this:

                  $ systemctl status test-sh.service
                  test-sh.service - does testing things
                  ...
                  May 20 23:00:00 desktop-pc systemd[1]: Starting test-sh.service - does testing things
                  May 20 23:00:00 desktop-pc systemd[1]: test-sh.service: Failed with result ‘exit-code’.
                  May 20 23:00:00 desktop-pc systemd[1]: Failed to start test-sh.service.

                  I stare at the screen in bewilderment for a few seconds. No debugging info, no backtraces, no logs, not even an error message. It’s as if the script simply decided it needed some coffee before it would be willing to keep working this late at night. Having heard the tales of what happens when you give a computer coffee, I elected to try a different approach.

                  $ vim /usr/bin/test-sh
                  1 #!/bin/bash
                  2 #
                  3 # Copyright 2024 ...
                  4 set -u;
                  5 set -e;

                  Before I go into what exactly is wrong with this picture, I need to explain a bit about how Bash handles the ultimate question of life, “what is truth?”

                  (RED ALERT: I do not know if I’m correct about the reasoning behind the design decisions I talk about in the rest of this article. Don’t use me as a reference for why things work like this, and please correct me if I’ve botched something. Also, a lot of what I describe here is simplified, so don’t be surprised if you notice or discover that things are a bit more complex in reality than I make them sound like here.)

                  Bash, as many of you probably know, is primarily a “glue” language - it glues applications to each other, it glues the user to the applications, and it glues one’s sanity to the ceiling, far out of the user’s reach. As such, it features a bewildering combination of some of the most intuitive and some of the least intuitive behaviors one can dream up, and the handling of truth and falsehood is one of these bewildering things.

                  Every command you run in Bash reports back whether or not what it did “worked”. (“Worked” is subjective and depends on the command, but for the most part if a command says “It worked”, you can trust that it did what you told it to, at least mostly.) This is done by means of an “exit code”, which is nothing more than a number between 0 and 255. If a program exits and hands the shell an exit code of 0, it usually means “it worked”, whereas a non-zero exit code usually means “something went wrong”. (This makes sense if you know a bit about how programs written in C work - if your program is written to just “do things” and then exit, it will default to exiting with code zero.)

                  Because zero = good and non-zero = not good, it makes sense to treat zero as meaning “true” and non-zero as meaning “false”. That’s exactly what Bash does - if you do something like “if command; then commandIfTrue; else commandIfFalse; fi”, Bash will run “commandIfTrue” if “command” exits with 0, and will run “commandIfFalse” if “command” exits with 1 or higher.

                  Now since Bash is a glue language, it has to be able to handle it if a command runs and fails. This can be done with some amount of difficulty by testing (almost) every command the script runs, but that can be quite tedious. There’s a (generally) easier way however, which is to tell the script to immediately exit if any command exits with a non-zero exit code. This is done by using the command “set -e” at or near the top of the script. Once “set -e” is active, any command that fails will cause the whole script to stop.
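
For instance (nothing exotic here, just stock Bash behavior):

$ if grep -q root /etc/passwd; then echo "worked (exit 0)"; else echo "failed (non-zero)"; fi
worked (exit 0)
$ bash -c 'set -e; false; echo "this line is never reached"'; echo "exit code: $?"
exit code: 1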

                  So back to my script. I’m using “set -e” so that if anything goes wrong, the script stops. What could go wrong other than a failed command? To answer that question, we have to take a look at how some things work in C.

                  C is a very different language than Bash. Whereas Bash is designed to take a bunch of pieces and glue them together, C is designed to make the pieces themselves. You can think of Bash as being a glue gun and C as being a 3d printer. As such, C does not concern itself nearly as much with things like return codes and exiting when a command fails. It focuses on taking data and doing stuff with it.

Since C is more data- and algorithm-oriented, true and false work significantly differently here. C sees 0 as meaning “none, empty, all bits set to 0, etc.” and thus treats it as meaning “false”. Any non-zero number has a value, and can be treated as “on” or “true”. An astute reader will notice this is exactly the opposite of how Bash works, where 0 is true and non-zero is false. (In my opinion this is a rather lamentable design decision, but sadly these behaviors have been standardized for longer than I’ve been alive, so there’s not much point in trying to change them. But I digress.)

                  C also of course has features for doing math, called “operators”. One of the most common operators is the assignment operator, “=”. The assignment operator’s job is to take whatever you put on the right side of it, and store it in whatever you put on the left side. If you say “a = 0”, the value “0” will be stored in the variable “a” (assuming things work right). But the assignment operator has a trick up its sleeve - not only does it assign the value to the variable, it also returns the value. Basically what that means is that the statement “a = 0” spits out an extra value that you can do things with. This allows you to do things like “a = b = 0”, which will assign 0 to “b”, return zero, and then assign that returned zero to "a”. (The assignment of the second zero to “a” also returns a zero, but that simply gets ignored by the program since there’s nothing to do with it.)

                  You may be able to see where I’m going with this. Assigning a value to a variable also returns that value… and 0 means “false”… so “a = 0” succeeds, but also returns what is effectively “false”. That means if you do something like “if (a = 0) { ... } else { explodeComputer(); }”, the computer will explode. “a = 0” returns “false”, thus the “if” condition does not run and the “else” condition does. (Coincidentally, this is also a good example of the “world’s last programming bug” - the comparison operation in C is “==”, which is awfully easy to mistype as the assignment operator, “=”. Using an assignment operator in an “if” statement like this will almost always result in the code within the “if” being executed, as the value being stored in the variable will usually be non-zero and thus will be seen as “true” by the “if” statement. This also corrupts the variable you thought you were comparing something to. Some fear that a programmer with access to nuclear weapons will one day write something like “if (startWar = 1) { destroyWorld(); }” and thus the world will be destroyed by a missing equals sign.)

                  “So what,” you say. “Bash and C are different languages.” That’s true, and in theory this would mean that everything here is fine. Unfortunately theory and practice are the same in theory but much different in practice, and this is one of those instances where things go haywire because of weird differences like this. There’s one final piece of the puzzle to look at first though - how to do math in Bash.

                  Despite being a glue language, Bash has some simple math capabilities, most of which are borrowed from C. Yes, including the behavior of the assignment operator and the values for true and false. When you want to do math in Bash, you write “(( do math here... ))”, and everything inside the double parentheses is evaluated. Any assignment done within this mode is executed as expected. If I want to assign the number 5 to a variable, I can do “(( var = 5 ))” and it shall be so.

                  But wait, what happens with the return value of the assignment operator?

                  Well, take a guess. What do you think Bash is going to do with it?

Let’s look at it logically. In C (and in Bash’s math mode), 0 is false and non-zero is true. In Bash, 0 is true and non-zero is false. Clearly if whatever happens within math mode fails and returns false (0), Bash should not misinterpret this as true! Things like “(( 5 == 6 ))” shouldn’t be treated as being true, right? So what do we do with this conundrum? Easy solution - convert the return value to an exit code so that its semantics are retained across the C/Bash barrier. If the return value of the math mode statement is false (0), it should be converted to Bash’s concept of false (non-zero), therefore the return value of 0 is converted to an exit code of 1. On the other hand, if the return value of the math mode statement is true (non-zero), it should be converted to Bash’s concept of true (0), therefore the return value of anything other than 0 is converted to an exit code of 0. (You probably see the writing on the wall at this point. Spoiler, my code was weighed in the balances and found wanting.)
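
You can watch this conversion happen in a terminal (no “set -e” involved yet):

$ (( 5 == 6 )); echo "exit code: $?"
exit code: 1
$ (( var = 0 )); echo "exit code: $?"
exit code: 1
$ (( var = 5 )); echo "exit code: $?"
exit code: 0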

                  So now we can put all this nice, logical, sensible behavior together and make a glorious mess with it. Guess what happens if you run “(( var = 0 ))” in a script where “set -e” is enabled.

                  • “0” is assigned to “var”.

                  • The statement returns 0.

                  • Bash dutifully converts that to a 1 (false/failure).

                  • Bash now sees the command as having failed.

• “set -e” says the script should immediately stop if anything fails.

                  • The script crashes.

                  You can try this for yourself - pop open a terminal and run “set -e; (( var = 0 ));” and watch in awe as your terminal instantly closes (or otherwise shows an indication that Bash has exited).

                  So back to the code. In my script, I have a function that helps with generating random numbers within any specified bounds. Basically it just grabs the value of “$RANDOM” (which is a special variable in Bash that always returns an integer between 0 and 32767) and does some manipulations on it so that it becomes a random number between a “lower bound” and an “upper bound” parameter. In the guts of that function’s code I have many “math mode” statements for getting those numbers into shape. Those statements include variable assignments, and those variable assignments were throwing exit codes into the script. I had written this before enabling “set -e”, so everything was fine before, but now “set -e” was enabled and Bash was going to enforce it as ruthlessly as possible.
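
A stripped-down sketch of that kind of helper (simplified for this post - this is not the exact code from the script, and it leaves out the modulo-bias gymnastics I mention below) looks something like this:

random_between() {
    # Return a random integer between $1 (lower bound) and $2 (upper bound), inclusive.
    local _lower_bound=$1 _upper_bound=$2 _adj_upper_bound _val;
    (( _adj_upper_bound = _upper_bound - _lower_bound ));
    (( _val = RANDOM ));
    # Any of these assignments that evaluates to 0 exits non-zero, which "set -e" treats as a failure.
    (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));
    (( _val = _val + _lower_bound ));
    echo "$_val";
}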

                  While I will never know what line of code triggered the failure, it’s a fairly safe bet that the culprit was:

                  88 (( _val = ( _val % ( _adj_upper_bound + 1 ) ) ));

This basically takes whatever is in “_val”, divides it by “_adj_upper_bound + 1”, and then assigns the remainder of that operation to “_val”. This makes sure that “_val” is lower than “_adj_upper_bound + 1”. (This is typically known as “taking the modulus”, and the “%” operator here is the “modulo operator”. For the math people reading this, don’t worry, I did the requisite gymnastics to ensure this code didn’t have modulo bias.) If “_val” happens to be an exact multiple of “_adj_upper_bound + 1”, the code on the right side of the assignment operator will evaluate to 0, which will become an exit code of 1, thus exploding my script because of what appeared to be a failed command.

                  Sigh.

                  So there’s the problem. What’s the solution? Turns out it’s pretty simple. Among Bash’s feature set, there is the profoundly handy “logical or operator”, “||”. This operator lets us say “if this OR that is true, return true.” In other words, “Run whatever’s on the left hand of the ||. If it exits 0, move on. If it exits non-zero, run whatever’s on the right hand of the ||. If it exits 0, move on and ignore the earlier failure. Only return non-zero if both commands fail.” There’s also a handy command in Bash called “true” that does nothing except for give an exit code of 0. That means that if you ever have a line of code in Bash that is liable to exit non-zero but it’s no big deal if it does, you can just slap an “|| true” on the end and it will magically make everything work by pretending that nothing went wrong. (If only this worked in real life!) I proceeded to go through and apply this bandaid to every standalone math mode call in my script, and it now seems to be behaving itself correctly again. For now anyway.
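
The bandaid in action, plus what it looks like when applied to a line like the one above:

$ bash -c 'set -e; (( var = 0 )) || true; echo "still alive"'
still alive

(( _val = ( _val % ( _adj_upper_bound + 1 ) ) )) || true;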

                  tl;dr: Faking success is sometimes a perfectly valid way to solve a computing problem. Just don’t live the way you code and you’ll be alright.

                  20 May, 2024 08:06AM

                  Faizul "Piju" 9M2PJU: Introducing RF Swift: A Comprehensive Toolbox for HAM Radio and RF Professionals

                  RF-Swift is an advanced, cross-platform RF toolbox designed to meet the needs of HAM radio enthusiasts and RF professionals. Developed using Go and shell scripts, this toolbox simplifies the deployment of Docker containers for various RF tools, allowing users to maintain their preferred operating systems.

                  Key Features:

                  • Multi-Platform Support: Compatible with both Linux and Windows.
                  • Ease of Use: Provides scripts for easy installation and deployment.
                  • Customizable Docker Files: Offers specialized Docker file recipes to conserve space and suit specific needs.
                  • Open Source: Inspired by the Exegol project, RF-Swift aims to integrate essential RF analysis tools without affecting the host system.

                  Installation Requirements:

                  • Linux: Only requires Docker engine installation.
                  • Windows: Needs Docker Desktop, GoLang, and usbipd for USB device management.

                  Usage:

                  • Building: Simple build scripts for both Linux (build.sh) and Windows (build-windows.bat).
                  • Running Containers: Use ./rfswift run with various flags for customization.
                  • Executing Commands: Easily execute commands inside existing containers.
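
Based on the points above, a first session on Linux looks roughly like this (the exact flags are not reproduced here; the project’s README documents the available options):

$ ./build.sh                # build the toolbox image (build-windows.bat on Windows)
$ ./rfswift run ...         # start a container, with flags selecting the image, devices, and so on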

                  Contributions and Future Plans:

                  RF-Swift encourages community contributions to expand its toolset and aims to develop a dedicated page for developers. Future updates will also address issues like sound management in certain tools.

                  For detailed instructions and to contribute, visit the RF-Swift GitHub repository.

                  The post Introducing RF Swift: A Comprehensive Toolbox for HAM Radio and RF Professionals appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  20 May, 2024 08:03AM

                  Ubuntu Blog: Cloudify your data centre – A guide to VMware infrastructure transformation

                  You know what’s going on. You’ve been monitoring the situation around VMware for at least a year now. There is no need to convince you that whatever comes next, you have to prepare for a big change. You and your team are already well prepared. You have a budget, timeline and necessary resources. However, one thing that you’re missing is the answer to the “HOW?”.

                  If this is the case, you are in the right place.

                  Join our webinar on 11 June at 4:30 PM CET and learn how to move from VMware to the future.

If you don’t have time for this 60-minute webinar, here’s a condensed summary for you to read instead. In this short blog we present Canonical’s proven path to VMware infrastructure transformation, which comes through a process of full data centre cloudification. We also discuss why it’s better to move to the future rather than stay in the past.

                  Past vs future

                  The last two decades have brought a significant change in how enterprises run their IT estates. Many organisations that used to operate their data centres in a traditional way decided to fully virtualise their workloads for better resource consumption and improved agility. This is where VMware came in with its comprehensive vSphere suite. By providing an enterprise-grade platform and answering its customers’ needs, VMware quickly established itself as a dominant player in the virtualisation market.

                  Unstoppable technological continuum

                  However, 20 years after this spectacular early success, VMware customers face a dilemma in what to do next. In search of a reasonable alternative they often choose other proprietary virtualisation solutions – but even though solutions like that often look like low-hanging fruit, they might lead to exactly the same challenges in the future as those organisations are facing now. Those include vendor lock-in and a total cost of ownership (TCO) increase. This is why staying in the past – on VMware or not – is sub-optimal, in general.

                  There is no doubt that the cloud computing paradigm is the next big thing that comes after this initial virtualisation wave. According to Gartner, overall spending on cloud infrastructure will approach 50% of organisations’ IT budgets by 2025. However, using public clouds only might lead to the exact same challenges you’re facing right now. Fortunately, there are ways to build a fully functional cloud infrastructure on your premises. 

                  Cloudify your data centre

                  You’ve likely heard about OpenStack, the world’s leading open source cloud platform. In fact, you should also consider some other alternatives. Take a look at Canonical MicroCloud, for example. Whichever alternative they’re considering when migrating out of VMware, organisations usually expect the cloud to behave exactly as the vSphere suite.

                  This is a trap!

                  The cloud is a cloud with all its pros and cons. No matter which cloud environment you’re in, its underlying architecture and operational principles are slightly different from VMware’s technology stack. This stems from the fact that the cloud computing paradigm was invented to solve slightly different challenges than pure virtualisation.

                  So what? Does it mean the cloud cannot supersede VMware?

                  Not at all! It only means that the migration is going to be an exciting journey. While the vast majority of VMware’s features have an equivalent in the open source space, in some cases changes to workload architecture might be required as well. How significant are those changes and is it really worth the overall investment? Continue reading to learn how to make your workload cloud-ready.

                  Cloudify your workload

We all know what cloud-native is, but how about cloud-ready? Let’s take a step back. Cloud-native is definitely where you should aim to be in the long term; however, for the initial wave of migration, being cloud-ready is sufficient.

The challenge is that some VMware workloads are so-called legacy “pets”. Those are workloads which were virtualised in the past and have never been re-designed since then. They usually rely on VMware-native features, such as vSphere HA or Fault Tolerance. Expecting such workloads to behave in exactly the same way when running on the cloud proves to be a little bit challenging. It is not impossible, though.

                  Canonical infrastructure stack

                  Ideally, the workload should be cloud-ready before you attempt to migrate it to the cloud. This means it should preferably meet the following criteria:

                  • Provisioned and terminated on-demand
                  • Launched from an image and customised during provisioning
                  • Designed to store its state on a nonvolatile storage
                  • Designed to scale out rather than scale up
                  • HA mechanisms implemented in software or based on native cloud features
                  • DR mechanisms based on native cloud features or third-party tools
                  • Designed to use other native cloud features, such as LBaaS, etc.
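
As a small illustration of the first two criteria, on an OpenStack cloud an instance is typically provisioned on demand from an image and customised at boot time via user data (the image, flavor and file names below are placeholders):

$ openstack server create --image ubuntu-24.04 --flavor m1.small \
    --user-data cloud-init.yaml --key-name mykey my-workload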

                  The most effective and proven way is a gradual, iterative migration. Build the cloud first and move your workloads there starting with quick wins. Many customers actually run both environments in parallel during this initial period. For example Sicredi:

                  Learn how Sicredi embraces the cloud with Canonical OpenStack >

                  To sum up, the migration to the cloud is a non-trivial task and it doesn’t happen overnight. However, if properly planned and executed, it brings tangible benefits, uplifting organisations far into the future.

                  Learn more about cloudification

                  If you found this topic interesting, we’d like to encourage you to explore it in more detail.

                  Join our webinar on 11 June at 4:30 PM CET and learn how to move from VMware to the future.

In this webinar we will discuss the future of on-prem infrastructure and show what the cloudification process looks like under the hood. We will also demonstrate how to effectively migrate from VMware and present some success stories from our reference customers.

                  20 May, 2024 07:00AM

                  May 19, 2024

                  Faizul "Piju" 9M2PJU: Unraveling the History and Significance of Amateur Radio

                  Introduction to Amateur Radio

                  Amateur radio, often referred to as “ham radio,” is a hobby and service that allows individuals to communicate with others around the world using designated radio frequencies. It’s a diverse community of enthusiasts who share a passion for radio technology, experimentation, and communication. While often overshadowed by modern communication technologies, amateur radio has a rich history and continues to play a vital role in global communication networks.

                  The Birth of Amateur Radio: Pioneering the Wireless Frontier

                  Early Experiments

                  The roots of amateur radio trace back to the late 19th and early 20th centuries when pioneers like Guglielmo Marconi and Nikola Tesla made groundbreaking discoveries in wireless communication. Amateur radio operators were among the first to explore these new technologies, experimenting with antennas, transmitters, and receivers to communicate over increasingly longer distances.

                  World War I and Beyond

                  During World War I, amateur radio operators played a crucial role in maintaining communication when traditional channels were disrupted. Governments recognized the value of amateur radio and began issuing licenses to regulate the growing number of enthusiasts. After the war, amateur radio flourished as surplus military equipment became available, and radio clubs formed worldwide.

                  Amateur Radio: The First Hackers

                  Innovation and Experimentation

                  Amateur radio operators were the original hackers, pushing the boundaries of technology through experimentation and innovation. They built their own equipment, modified existing devices, and developed new techniques to improve communication. This culture of tinkering and exploration laid the groundwork for the hacker ethos of today, emphasizing creativity, collaboration, and curiosity.

                  Community and Collaboration

                  Amateur radio has always been an open-source movement, with operators freely sharing knowledge, designs, and ideas. Online forums, mailing lists, and amateur radio clubs serve as hubs for collaboration and knowledge exchange. This spirit of openness and inclusivity has fostered a vibrant community that welcomes newcomers and encourages learning and growth.

                  Contributions to the World

                  Emergency Communication

                  One of the most significant contributions of amateur radio is its role in emergency communication. During natural disasters, conflicts, and other crises, amateur radio operators provide vital communication links when conventional systems fail. Their ability to establish networks quickly and operate under adverse conditions has saved lives and facilitated disaster relief efforts around the world.

                  Advancements in Technology

                  Amateur radio has been a catalyst for technological innovation, driving advancements in radio frequency engineering, signal processing, and digital communication. Many modern technologies, such as GPS, satellite communication, and software-defined radio, have roots in amateur radio experimentation. The amateur radio community continues to push the boundaries of technology, exploring new modes of communication and pushing for spectrum access for experimentation.

                  Education and Outreach

                  Amateur radio serves as a platform for education and outreach, inspiring future generations of scientists, engineers, and innovators. Amateur radio clubs, school programs, and youth initiatives introduce young people to the wonders of radio technology and provide hands-on learning experiences. Through licensing exams, mentoring programs, and educational resources, amateur radio fosters a culture of lifelong learning and skill development.

                  Current Status of Amateur Radio

                  Global Community

                  Despite the proliferation of modern communication technologies, the amateur radio community remains strong and vibrant. With millions of licensed operators worldwide, amateur radio continues to attract enthusiasts of all ages and backgrounds. Advances in digital modes, satellite technology, and portable equipment have expanded the hobby’s reach and appeal.

                  Challenges and Opportunities

                  While amateur radio faces challenges such as spectrum congestion and regulatory constraints, it also presents opportunities for growth and innovation. The rise of digital communication modes, the development of low-cost equipment, and the integration of amateur radio into STEM education programs are driving factors for the hobby’s continued relevance and evolution.

                  Future Prospects

                  As we look to the future, amateur radio is poised to play a vital role in shaping the next generation of communication technologies. With its rich history, culture of innovation, and global community, amateur radio remains a beacon of exploration and discovery in the ever-evolving world of wireless communication.

                  Conclusion

                  Amateur radio stands as a testament to the power of human ingenuity and curiosity. From its humble beginnings as an experimental hobby to its present-day role as a global community of communicators, innovators, and educators, amateur radio has left an indelible mark on the world. As we celebrate its past achievements and look towards the future, one thing remains clear: amateur radio will continue to inspire, connect, and empower individuals for generations to come.

                  The post Unraveling the History and Significance of Amateur Radio appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  19 May, 2024 04:38PM

                  hackergotchi for BunsenLabs Linux

                  BunsenLabs Linux

                  Obsolete packages will be removed from BunsenLabs repositories

                  The oldest currently supported Debian distribution is the LTS Buster, whose support will end on 30th June 2024. The BunsenLabs distribution based on Debian Buster is Lithium, but packages for the older distributions Hydrogen and Helium are still on our server. There is probably no longer any realistic use case for Hydrogen or Helium packages, and they will soon be removed from the server. Anyone still using BunsenLabs Hydrogen or Helium is strongly urged to upgrade their system. Lithium packages will be removed later, when Debian Buster support ends.

                  Discussion here: https://forums.bunsenlabs.org/viewtopic.php?id=8980

                  19 May, 2024 12:00AM

                  May 18, 2024

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Faizul "Piju" 9M2PJU: Unleash Your Musical Creativity: Creating Songs with Ubuntu Studio

                  Are you an aspiring musician or songwriter looking to bring your musical visions to life? Whether you’re a seasoned professional or a complete novice, Ubuntu Studio offers a powerful platform for unleashing your creativity and producing high-quality music right from your Linux system. In this comprehensive guide, we’ll walk you through the process of creating a song using Ubuntu Studio, covering everything from the basics of Ubuntu Studio to the essential software tools for music production.

                  What is Ubuntu Studio?

                  Ubuntu Studio is an officially recognized Ubuntu flavor tailored for creative individuals involved in audio, graphics, video, and photography production. It provides a comprehensive suite of open-source software tools designed to meet the needs of artists, musicians, filmmakers, and multimedia enthusiasts. Ubuntu Studio comes pre-installed with a vast array of audio production software, making it an ideal choice for musicians and composers seeking a professional-grade platform for their creative endeavors.

                  Getting Started with Ubuntu Studio

                  If you’re new to Ubuntu Studio, getting started is easy. Simply download the Ubuntu Studio ISO from the official website and create a bootable USB drive or DVD. You can then boot your computer from the installation media and follow the on-screen instructions to install Ubuntu Studio on your system.

Once Ubuntu Studio is installed, familiarize yourself with the desktop environment and the various software applications included in the distribution. Current releases of Ubuntu Studio feature the KDE Plasma desktop environment (earlier releases shipped Xfce), which is highly customizable and stays out of your way, providing a smooth and efficient user experience for music production tasks.

                  Essential Software for Music Production

                  Now that you’re acquainted with Ubuntu Studio, let’s explore the essential software tools for creating music:

                  1. Ardour: Ardour is a versatile digital audio workstation (DAW) that allows you to record, edit, and mix multitrack audio projects with ease. It features advanced audio editing capabilities, support for VST plugins, and comprehensive MIDI functionality, making it a powerful tool for music production.
                  2. Audacity: Audacity is a popular open-source audio editor that provides a simple and intuitive interface for recording, editing, and manipulating audio files. It offers a wide range of effects and plugins, making it ideal for basic audio editing tasks and podcast production.
                  3. LMMS (Linux Multimedia Studio): LMMS is a feature-rich music production software that enables you to create melodies, beats, and arrangements using virtual instruments, synthesizers, and samplers. It includes a variety of built-in instruments and presets, allowing you to experiment with different sounds and styles.
                  4. Hydrogen: Hydrogen is a powerful drum machine software that lets you create realistic drum patterns and sequences. It features a user-friendly interface, support for multiple drum kits, and a flexible pattern editor, making it a valuable tool for rhythm composition and beatmaking.
5. JACK Audio Connection Kit: JACK is an advanced audio routing system that allows you to connect and route audio between different applications in real time. It provides low-latency audio processing and flexible routing options, essential for integrating multiple software tools in your music production workflow.
6. Plugins and Virtual Instruments: Ubuntu Studio includes a variety of plugins and virtual instruments for adding effects, synthesizers, and sampled instruments to your music projects. Explore the vast collection of plugins available in the repositories, including popular options like Carla, Calf Studio Gear, and Guitarix (an installation sketch follows this list).
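If some of these plugins are not already present on your system, they can usually be pulled straight from the Ubuntu repositories. A hedged example (package names may differ slightly between releases):

sudo apt update
sudo apt install carla calf-plugins guitarix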

                  Creating Your First Song

                  Now that you have the necessary software tools installed, it’s time to unleash your creativity and start creating music. Here’s a basic outline to guide you through the process of creating your first song:

                  1. Set Up Your Workspace: Launch Ardour or LMMS and create a new project. Configure your audio settings, including sample rate, buffer size, and input/output devices, to ensure optimal performance and audio quality.
                  2. Compose Your Music: Start by laying down the foundation of your song, whether it’s a catchy melody, a rhythmic groove, or a chord progression. Experiment with different instruments, sounds, and textures to develop your musical ideas.
                  3. Record and Arrange: Use Ardour to record audio tracks, MIDI sequences, and instrument performances. Arrange your recorded clips and MIDI patterns on the timeline, organizing them into cohesive sections such as verses, choruses, and bridges.
                  4. Edit and Mix: Fine-tune your recordings and MIDI sequences using Ardour’s editing tools, including cut, copy, paste, and quantize. Adjust the volume, panning, and effects settings for each track to achieve a balanced mix and bring your music to life.
                  5. Add Effects and Processing: Enhance your sounds with audio effects and processing plugins, such as reverb, delay, compression, and equalization. Experiment with different effect settings to create depth, space, and texture in your mix.
                  6. Finalize and Export: Once you’re satisfied with your song, listen to it in its entirety and make any final adjustments as needed. When you’re ready, export your project to a high-quality audio file format, such as WAV or FLAC, and share your music with the world.

                  Conclusion

                  With Ubuntu Studio and the right software tools at your disposal, creating music has never been more accessible and rewarding. Whether you’re a hobbyist musician, a professional composer, or anything in between, Ubuntu Studio provides a versatile and powerful platform for realizing your musical ambitions. So, fire up your creative imagination, dive into the world of music production, and let Ubuntu Studio be your gateway to musical excellence.

                  The post Unleash Your Musical Creativity: Creating Songs with Ubuntu Studio appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  18 May, 2024 08:31AM

                  Faizul "Piju" 9M2PJU: Unleashing the Power of Ubuntu: A Beginner’s Guide to Reverse Engineering

                  Greetings, fellow tech enthusiasts! Today, we embark on an exciting journey into the world of reverse engineering using the powerful Ubuntu operating system. Reverse engineering allows us to understand and modify software and hardware systems, making it an indispensable skill for those who wish to delve into the depths of technology. In this blog post, we will explore the essential packages related to reverse engineering, discuss learning strategies, and provide some valuable tips for beginners.

                  1. Essential Packages for Reverse Engineering:

To begin our reverse engineering adventure on Ubuntu, we need to equip ourselves with the right tools. Here are some essential packages that you should consider installing; a short quick-start sketch follows the list:

                  a) Radare2: This versatile framework offers a wide array of tools for reverse engineering, such as disassembling, debugging, analyzing binaries, and much more. Install it on Ubuntu using the package manager with the command: sudo apt-get install radare2.

                  b) GDB (GNU Debugger): GDB is a powerful tool for debugging programs. It allows you to step through code, examine memory, and analyze runtime behavior. Install it by running: sudo apt-get install gdb.

                  c) IDA Pro: Although IDA Pro is a commercial tool, a free version called IDA Free is available for Linux. It provides a comprehensive disassembly and debugging environment, making it an excellent choice for advanced reverse engineering tasks.

                  d) Wireshark: This network protocol analyzer is a valuable asset when reverse engineering network communications. Use the command: sudo apt-get install wireshark to install it.
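As a quick, hedged illustration of the first two tools above, the following sketch compiles a trivial test program and inspects it non-interactively with Radare2 and GDB; the file names are examples only.

# Compile a tiny program with debug symbols to practice on
echo 'int main(void){ return 42; }' > hello.c
gcc -g -o hello hello.c

# Radare2: auto-analyse (-A), list functions and disassemble main, then exit (-q)
r2 -A -q -c 'afl; pdf @ main' ./hello

# GDB: break at main, run, and dump the CPU registers in batch mode
gdb -batch -ex 'break main' -ex 'run' -ex 'info registers' ./hello

From there, the same commands can be pointed at a crackme or any other binary you want to study.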

2. Learning Reverse Engineering:

                  Now that we have the necessary tools, let’s dive into the process of learning reverse engineering on Ubuntu. Here are some effective strategies:

                  a) Online Resources: The internet is a treasure trove of knowledge. Websites like Reverse Engineering for Beginners (https://beginners.re/) and the Reverse Engineering subreddit (https://www.reddit.com/r/ReverseEngineering/) provide valuable tutorials, articles, and forums to help you get started.

                  b) Books and Documentation: Books like “Practical Reverse Engineering” by Bruce Dang, Alexandre Gazet, and Elias Bachaalany offer comprehensive insights into the theory and practice of reverse engineering. Additionally, official documentation for tools like Radare2 and GDB can be a valuable resource.

                  c) Practice, Practice, Practice: Reverse engineering is a hands-on skill. Engage in practical exercises by attempting crackmes (small programs designed for reverse engineering practice) or analyzing open-source projects. Joining Capture The Flag (CTF) competitions can also provide valuable experience.

3. Tips for Beginners:

                  As a beginner, it’s essential to approach reverse engineering with patience and a methodical mindset. Here are some tips to help you navigate this fascinating domain:

                  a) Start Small: Begin with simple programs and gradually work your way up to more complex ones. This will allow you to build a solid foundation of knowledge and skills.

                  b) Analyze Open-Source Projects: Studying open-source software provides an opportunity to explore how programs are built, understand their inner workings, and practice reverse engineering techniques.

                  c) Join the Community: Engage with the vibrant reverse engineering community. Participate in forums, attend conferences, and connect with like-minded individuals. Sharing knowledge and experiences can accelerate your learning journey.

                  d) Embrace the Documentation: Documentation for tools like Radare2 and GDB may seem overwhelming at first, but they hold the key to unlocking advanced reverse engineering techniques. Take the time to understand them and experiment with their features.

                  Congratulations on taking your first steps into the captivating world of reverse engineering on Ubuntu! By familiarizing yourself with essential packages, adopting effective learning strategies, and following our beginner’s tips, you are well on your way to mastering this exciting skill. Remember, patience and persistence are key, so keep exploring, keep learning, and unlock the extraordinary potential that reverse engineering offers. Happy hacking!

                  The post Unleashing the Power of Ubuntu: A Beginner’s Guide to Reverse Engineering appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  18 May, 2024 08:17AM

                  Faizul "Piju" 9M2PJU: Understanding SOAR: Security Orchestration, Automation, and Response for Ubuntu Servers

                  In today’s rapidly evolving cybersecurity landscape, organizations face a growing number of sophisticated threats that require more efficient and effective response strategies. Security Orchestration, Automation, and Response (SOAR) has emerged as a crucial technology to enhance security operations by automating routine tasks, orchestrating complex workflows, and improving incident response. For organizations using Ubuntu servers, several SOAR solutions are available that can seamlessly integrate into their existing infrastructure.

                  What is SOAR?

                  SOAR stands for Security Orchestration, Automation, and Response. It is a category of security solutions that help organizations manage and respond to security incidents more efficiently by automating routine tasks, orchestrating workflows, and providing tools for effective incident management. SOAR platforms combine multiple technologies to streamline security operations, improve threat detection, and enhance incident response.

                  The main components of SOAR are:

                  1. Security Orchestration: This involves integrating and coordinating disparate security tools and processes. Orchestration ensures that these tools work together cohesively, allowing security teams to manage incidents more effectively.
                  2. Automation: Automation uses scripts and software to perform repetitive and routine tasks without human intervention. This reduces the manual workload on security analysts and minimizes the risk of human error, enabling faster and more reliable threat responses.
                  3. Response: SOAR platforms provide detailed workflows and playbooks that guide security teams through the steps needed to contain and remediate threats. These tools facilitate a coordinated and efficient response to security incidents.

                  Key Features of SOAR Platforms

                  1. Incident Management: SOAR platforms offer comprehensive incident management capabilities, enabling organizations to track, manage, and resolve security incidents effectively. Features include ticketing systems, incident timelines, and detailed reporting.
                  2. Playbooks and Automation: Playbooks are predefined sets of actions that automate responses to common security incidents. SOAR platforms provide extensive libraries of playbooks that can be customized to meet an organization’s specific needs.
                  3. Threat Intelligence Integration: Effective threat intelligence is crucial for identifying and understanding threats. SOAR platforms integrate with various threat intelligence sources to provide real-time data on emerging threats and vulnerabilities.
                  4. Case Management: Detailed case management features allow security teams to document and track the progress of investigations, ensuring that all relevant information is captured and accessible.
                  5. Collaboration and Communication: SOAR platforms facilitate better communication and collaboration within security teams and across different departments. They often include chat and collaboration tools that enable team members to share information and coordinate responses.

                  Top SOAR Solutions for Ubuntu Servers

                  Here are some leading SOAR solutions that can be installed on Ubuntu servers:

                  1. Splunk Phantom

                  Overview: Splunk Phantom is a robust SOAR platform that provides extensive automation and orchestration capabilities. It enables security teams to automate routine tasks and orchestrate complex workflows across a wide range of security tools.

                  Key Features:

                  • Extensive library of pre-built playbooks.
                  • Integration with over 200 security tools.
                  • Visual playbook editor for easy customization.
                  • Real-time collaboration features.

                  Installation on Ubuntu:

                  • Splunk Phantom can be installed on an Ubuntu server by downloading the appropriate package from the Splunk website. Installation involves using the dpkg tool to install the package and then configuring the system according to the provided documentation.

                  2. Palo Alto Networks Cortex XSOAR

                  Overview: Formerly known as Demisto, Cortex XSOAR by Palo Alto Networks is a comprehensive SOAR platform designed to enhance the efficiency and effectiveness of security operations centers (SOCs).

                  Key Features:

                  • Automated playbooks with machine learning capabilities.
                  • Robust case management and incident tracking.
                  • Integration with a wide array of security and IT tools.
                  • Interactive investigation and collaboration tools.

                  Installation on Ubuntu:

                  • Cortex XSOAR can be installed on Ubuntu by following the installation guides provided by Palo Alto Networks. The process typically involves setting up the necessary dependencies, downloading the installation package, and configuring the platform for use.

                  3. IBM Resilient

                  Overview: IBM Resilient is a highly regarded SOAR platform that focuses on helping organizations respond to incidents quickly and effectively. It offers powerful automation and orchestration features designed to streamline incident response.

                  Key Features:

                  • Dynamic playbooks that adapt to changing incident conditions.
                  • Integration with IBM’s Watson for advanced threat intelligence.
                  • Detailed incident reporting and analytics.
                  • Collaboration and communication tools for incident response teams.

                  Installation on Ubuntu:

                  • IBM Resilient can be deployed on Ubuntu servers using the installation packages provided by IBM. The setup involves preparing the server environment, installing the necessary software, and configuring the platform to integrate with existing security tools.

                  4. ServiceNow Security Operations

                  Overview: ServiceNow Security Operations is a SOAR solution that leverages the capabilities of the broader ServiceNow platform to provide integrated security incident response and automation.

                  Key Features:

                  • Unified platform for IT and security operations.
                  • Automated workflows for incident response.
                  • Integration with threat intelligence sources.
                  • Comprehensive reporting and analytics.

                  Installation on Ubuntu:

• ServiceNow Security Operations is delivered through ServiceNow's cloud-based platform rather than installed directly on an Ubuntu server. The configuration involves integrating the platform with the organization's existing security infrastructure, including Ubuntu-based systems, and setting up automated workflows.

                  Benefits of Implementing SOAR on Ubuntu Servers

                  1. Improved Efficiency: By automating repetitive tasks, SOAR platforms free up valuable time for security analysts, allowing them to focus on more complex and strategic activities.
                  2. Faster Response Times: Automation and orchestration enable faster detection and response to security incidents, reducing the potential impact of threats.
                  3. Enhanced Accuracy: Automation reduces the risk of human error, leading to more accurate and reliable incident responses.
                  4. Better Use of Resources: SOAR platforms help organizations make better use of their existing security tools and personnel, improving overall security posture.
                  5. Scalability: As organizations grow, SOAR platforms can scale to handle increasing volumes of security data and incidents, ensuring continued effective security operations.

                  Conclusion

                  In an era where cyber threats are constantly evolving, SOAR platforms provide a vital solution for organizations looking to enhance their security operations. By integrating, automating, and orchestrating security processes, SOAR helps improve efficiency, accuracy, and response times. Leading solutions like Splunk Phantom, Cortex XSOAR, IBM Resilient, and ServiceNow Security Operations offer robust features that can transform the way security teams operate, making them better equipped to handle today’s complex threat landscape.

                  Deploying these SOAR solutions on Ubuntu servers ensures a stable, secure, and scalable environment for managing security operations. By leveraging the power of SOAR, organizations can significantly enhance their ability to detect, respond to, and mitigate cyber threats, ultimately strengthening their overall cybersecurity posture.

                  The post Understanding SOAR: Security Orchestration, Automation, and Response for Ubuntu Servers appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  18 May, 2024 05:51AM

                  Faizul "Piju" 9M2PJU: Understanding SIEM: Security Information and Event Management

                  Security Information and Event Management (SIEM) is a comprehensive approach to cybersecurity that involves the collection, analysis, and management of security-related data from various sources within an IT infrastructure. SIEM solutions enable organizations to detect, monitor, and respond to security incidents in real time by aggregating and analyzing log data from applications, network devices, servers, and other endpoints.

                  The core functions of SIEM include:

                  1. Log Collection and Management: Aggregating logs from diverse sources to provide a centralized repository for security data.
                  2. Correlation and Analysis: Applying advanced analytics to correlate and interpret log data to identify potential security threats.
                  3. Incident Detection and Response: Providing alerts and automated responses to detected security incidents, enabling swift action to mitigate risks.
                  4. Compliance and Reporting: Assisting organizations in meeting regulatory requirements by offering detailed reports and audit trails.

                  Top Commercial SIEM Solutions for Ubuntu Server

                  Ubuntu, a popular Linux distribution known for its robustness and security features, is an excellent platform for deploying SIEM solutions. Here are some of the top commercial SIEM solutions that can be installed on an Ubuntu server:

                  1. Splunk Enterprise Security

                  Overview: Splunk Enterprise Security is a leading SIEM solution known for its powerful analytics and scalability. It provides comprehensive security monitoring, incident detection, and response capabilities.

                  Features:

                  • Advanced analytics and machine learning for threat detection.
                  • Real-time monitoring and alerting.
                  • Extensive visualization options and customizable dashboards.
                  • Support for a wide range of data sources.
                  • Integration with various third-party security tools.

                  Installation on Ubuntu:

• Splunk offers detailed installation guides for deploying their software on Ubuntu servers. The process involves downloading the .deb package, installing it using dpkg, and configuring the necessary settings, as sketched below.
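A rough sketch of that process; the package file name below is hypothetical, so use the current .deb downloaded from splunk.com:

sudo dpkg -i splunk-9.x.x-linux-2.6-amd64.deb
sudo /opt/splunk/bin/splunk start --accept-license    # first start: accept the license and set admin credentials
sudo /opt/splunk/bin/splunk enable boot-start         # optional: start Splunk automatically at boot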

                  2. IBM QRadar

                  Overview: IBM QRadar is a robust SIEM solution that provides deep visibility into network activities and advanced threat detection capabilities. It’s designed to help organizations quickly identify and respond to security threats.

                  Features:

                  • Powerful correlation engine to detect sophisticated threats.
                  • Scalable architecture suitable for large enterprises.
                  • Integrated threat intelligence to enhance detection.
                  • Extensive reporting and compliance management features.

                  Installation on Ubuntu:

                  • IBM QRadar provides a virtual appliance that can be deployed on an Ubuntu server. The setup involves importing the virtual appliance into a virtualization platform like VMware or KVM, followed by configuration steps to tailor the system to specific needs.

                  3. LogRhythm NextGen SIEM

                  Overview: LogRhythm offers a next-generation SIEM platform that combines security analytics, threat detection, and response orchestration. It aims to provide holistic visibility and rapid incident response.

                  Features:

                  • User and entity behavior analytics (UEBA) for detecting insider threats.
                  • Automated playbooks and response workflows.
                  • AI-driven threat detection.
                  • Extensive integration capabilities with other security tools.

                  Installation on Ubuntu:

                  • LogRhythm provides installation packages for Linux, including Ubuntu. The installation involves using their provided scripts and packages to set up the SIEM environment, followed by detailed configuration for data collection and analysis.

                  4. ArcSight Enterprise Security Manager (ESM)

                  Overview: ArcSight ESM by Micro Focus is a comprehensive SIEM solution that offers powerful real-time correlation, threat detection, and compliance management.

                  Features:

                  • Real-time threat detection with advanced correlation rules.
                  • Scalable architecture suitable for large-scale deployments.
                  • Integration with diverse data sources and security tools.
                  • Detailed compliance reporting and auditing capabilities.

                  Installation on Ubuntu:

                  • ArcSight ESM can be installed on an Ubuntu server using their Linux installation packages. The process involves preparing the server environment, installing the necessary dependencies, and configuring the SIEM components according to best practices.

                  Conclusion

                  Deploying a commercial SIEM solution on an Ubuntu server provides a robust platform for managing and enhancing an organization’s security posture. Solutions like Splunk Enterprise Security, IBM QRadar, LogRhythm NextGen SIEM, and ArcSight ESM offer powerful tools for real-time threat detection, incident response, and compliance management. By leveraging these advanced SIEM solutions, organizations can significantly improve their ability to protect against and respond to evolving cyber threats.

                  The post Understanding SIEM: Security Information and Event Management appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  18 May, 2024 05:47AM

                  May 17, 2024

                  Ubuntu Blog: Migrating from CentOS to Ubuntu: a guide for system administrators and DevOps

                  Photo by Sonja Langford, Unsplash

CentOS 7 is on track to reach its end-of-life (EoL) on June 30, 2024. After this date, the CentOS Project will cease to provide updates or support, including vital security patches. Moving away from the RHEL-based ecosystem might appear daunting, but if you're considering Ubuntu, the switch can be both straightforward and economically viable.

Pentera, a frontrunner in automated security validation, provides a compelling case study demonstrating the ease of this transition. They detail how their container-based setup was migrated to Ubuntu with minimal adjustments, leading to enhanced security measures. The move was also positively received by their clients, who appreciated Ubuntu's reliable history of issuing Long Term Support releases every two years for the past two decades, complemented by extensive community support.

                  Nitzan Dana, the DevOps Lead at Pentera, noted: “In spite of Ubuntu and CentOS being based on different distribution families, vast sections of our deployment scripts ran smoothly on Ubuntu without requiring any modifications”.

                  For a deeper dive into Pentera’s migration journey and insights on planning your own switch to Ubuntu, you can explore the full case study or read on for further migration considerations.

                  Delivering certainty in a shifting ecosystem

                  The shift of CentOS from a free rebuild closely aligned with Red Hat Enterprise Linux (RHEL) to an upstream project previewing future RHEL updates led to the emergence of competitors such as Rocky Linux and AlmaLinux. These distributions aimed to fill the gap left by CentOS 7, positioning themselves as its natural successors.

However, Red Hat's decision to make CentOS Stream the exclusive public repository for RHEL-related source code, and to limit direct source code access to its customers, has complicated the ability of these new distributions to maintain exact compatibility with RHEL.

                  Canonical, as the publisher of Ubuntu, offers the same version of the operating system to both home and commercial users without distinction between paid or free versions. As an open source project, Ubuntu’s source code is readily accessible for anyone to view at any time for any purpose.

                  Ubuntu has adhered to its stable release cycle model for two decades, introducing interim updates every six months as a lead-up to the next Long Term Support (LTS) version, which is released every two years in April. Ubuntu 24.04 LTS marks the tenth in this series.

                  Each LTS release of Ubuntu is supported with five years of maintenance and security updates at no cost to all users. For those seeking expanded coverage, Ubuntu Pro offers a subscription service that includes additional security updates for the broader Ubuntu Universe repository, encompassing various tools, applications, and libraries. This coverage can be extended to 12 years.

Of course, every environment is different, so careful consideration is key when deciding to migrate.

                  Engineering considerations

                  When moving between a RHEL-based distribution and Ubuntu, it’s essential to consider the release cadence and licensing model, package management, service configuration, security postures and other system-level differences. Public clouds add another layer of complexity, with integrations, tooling, and cloud-specific features playing a critical role in the migration process.

                  A full exploration of the technical nuances between the two distributions can be found in our recent strategy guide for administrators.

                  For most organisations, the migration follows a structured approach encompassing several key stages:

1. Inventory and assessment: Documenting existing services, applications, and packages, along with their dependencies and configurations, to understand the scope of the migration (a minimal inventory sketch follows this list).
                  2. Selection of Ubuntu version: Deciding on the appropriate Ubuntu version, often balancing the need for up-to-date packages with the stability and extended support offered by LTS releases.
                  3. Backup and data migration: Ensuring all critical data and configurations are backed up and ready for migration, minimising the risk of data loss.
                  4. Software availability: Conducting an audit of software availability to find Ubuntu equivalents for existing RHEL packages, and identifying alternatives or solutions for software not directly available.
                  5. Configuration file transition: Adapting system and service configuration files to Ubuntu’s structure and conventions, which may involve changes to file paths and syntax.
                  6. Integration with cloud services: Confirming that all cloud-related integrations, agents, and SDKs are compatible with Ubuntu, leveraging optimised cloud images where available.
                  7. Testing: Rigorously testing the new environment to ensure all applications and services function correctly, ideally in a staging environment that mirrors production.
                  8. Documentation and training: Updating internal documentation and providing training to ensure that operational teams are prepared for the new environment.
9. Monitoring and optimisation: Continuously monitoring the new setup to identify and implement further optimisations and adjustments as needed.
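As a minimal sketch of the inventory step above, assuming shell access to both machines:

# On the CentOS 7 host: snapshot installed packages and running services
rpm -qa | sort > centos-packages.txt
systemctl list-units --type=service --state=running > centos-services.txt

# On the Ubuntu target, the equivalent package listing for comparison
dpkg -l | sort > ubuntu-packages.txt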

                  Each of these steps is designed to mitigate risks and ensure a smooth transition to Ubuntu, leveraging its robust community support and comprehensive documentation to address challenges as they arise.

                  Support considerations

                  In addition to the wealth of community resources available when it comes to administering an Ubuntu estate, you can also rely on Canonical’s support teams as part of an Ubuntu Pro subscription. 

                  Ubuntu Pro is designed to extend the standard security and maintenance updates provided with Ubuntu’s Long Term Support (LTS) releases, covering not just the main repository but also the universe repository, which includes thousands of additional open-source tools and applications. This extended coverage is crucial for enterprises that rely on a wide range of open source software for their operations.

                  Ubuntu Pro also includes features designed to ensure security and compliance throughout your estate with minimal downtime. This includes live kernel patching, which allows for critical kernel updates to be applied without rebooting the system, as well as fleet-wide administration with Landscape. CIS and FIPS 140-2 certified components are also available for organisations that need to meet strict regulatory requirements and security standards.

                  Ubuntu Pro uses a simple, per-node pricing model with the option for additional weekday or 24/7 phone and ticket support. This transparency translates to cost savings for organisations – one life insurance company was able to realise a cost benefit of over 60% as a result of their migration.

                  All the resources you need to make the switch

If you're planning on making the move to Ubuntu, whether migrating existing workloads on a public or private cloud, or if you plan to leverage it as the foundational OS for your next company initiative, get in touch.

                  Don’t forget to check out our full migration guide for a deeper dive into the technical differences between Ubuntu and RHEL-based distributions.

                  See what our users and customers are saying in the latest case studies from Pentera, Tech Mahindra and New Mexico State University.

                  17 May, 2024 10:52AM

                  May 16, 2024

                  Faizul "Piju" 9M2PJU: Exploring the Adoption of Ubuntu in the Industrial Sector: A Comprehensive Analysis

                  In the rapidly evolving landscape of industrial operations, the adoption of technology plays a pivotal role in driving efficiency, productivity, and innovation. As industrial organizations seek to modernize their infrastructure, Ubuntu Linux emerges as a promising contender for powering the next generation of industrial systems and processes. In this comprehensive analysis, we delve deep into the potential of adopting Ubuntu in the industrial sector, exploring its promises, reliability, challenges, and the path forward for integration.

                  Ubuntu’s Promise in the Industrial Sector

                  Ubuntu, a leading Linux distribution renowned for its stability, security, and versatility, holds several promises for the industrial sector:

                  1. Cost-Efficiency: Ubuntu’s open-source nature offers industrial organizations a cost-effective alternative to proprietary operating systems, enabling them to allocate resources more efficiently and invest in other strategic initiatives.
                  2. Flexibility and Customization: With its modular architecture and extensive ecosystem of software and tools, Ubuntu provides industrial organizations with the flexibility to tailor solutions to their specific requirements, integrate seamlessly with existing infrastructure, and adapt to evolving needs.
                  3. Security and Reliability: Ubuntu’s robust security features, regular updates, and long-term support (LTS) releases instill confidence in industrial environments where data integrity, uptime, and system reliability are paramount, helping organizations mitigate security risks and ensure uninterrupted operations.
                  4. Compatibility with Industry Standards: Ubuntu’s support for industry protocols, standards, and interfaces ensures interoperability and compliance with regulatory requirements, facilitating smooth integration with industrial systems and processes.

                  Adopting Ubuntu in the Industrial Sector: Strategies and Considerations

                  As industrial organizations consider adopting Ubuntu, several strategies and considerations can guide their journey:

                  1. Pilot Projects and Proof of Concepts: Initiating pilot projects and proof of concepts allows industrial organizations to evaluate Ubuntu’s suitability for specific use cases, validate its performance, and demonstrate tangible benefits to stakeholders before full-scale deployment.
                  2. Collaboration with Technology Partners: Collaborating with technology providers, integrators, and open-source communities with expertise in Ubuntu can streamline the adoption process, mitigate implementation challenges, and leverage best practices and resources for successful integration.
                  3. Training and Upskilling: Investing in training and upskilling programs for personnel enables industrial organizations to build internal expertise, familiarize themselves with Ubuntu’s ecosystem and tools, and empower employees to effectively deploy, manage, and optimize Ubuntu-based solutions.
                  4. Customization and Integration: Tailoring Ubuntu-based solutions to meet the unique requirements and workflows of industrial operations enhances usability, efficiency, and user acceptance, driving adoption and maximizing the value derived from Ubuntu’s capabilities.

                  Challenges and Considerations

                  While Ubuntu offers numerous promises and benefits for the industrial sector, several challenges and considerations must be addressed:

                  1. Legacy Systems and Vendor Lock-In: Industrial environments often have legacy systems and dependencies on proprietary software, posing challenges for migration, integration, and compatibility with Ubuntu.
                  2. Hardware Compatibility: Ensuring compatibility with industrial hardware, devices, and peripherals, such as Programmable Logic Controllers (PLCs), sensors, and actuators, is essential for seamless operation and interoperability with Ubuntu-based systems.
                  3. Regulatory Compliance: Adhering to industry regulations, standards, and certifications, such as ISO 9001, ISA-95, and IEC 62443, requires careful validation, documentation, and compliance when adopting Ubuntu, ensuring the integrity, reliability, and safety of industrial systems and processes.
                  4. Support and Maintenance: Industrial organizations must evaluate the availability of support and maintenance services for Ubuntu-based systems, including access to security updates, patches, and technical assistance, to mitigate risks, address issues promptly, and maintain optimal performance and uptime.

                  Ubuntu’s Broken Promises?

                  While Ubuntu holds great promise for the industrial sector, there are potential pitfalls and challenges that industrial organizations must be aware of:

                  1. Complexity of Integration: Integrating Ubuntu into existing industrial environments may be complex and time-consuming, particularly in environments with legacy systems, proprietary protocols, and heterogeneous infrastructure, requiring thorough planning, testing, and coordination among stakeholders.
                  2. Lack of Industry-Specific Solutions: The availability of industry-specific solutions, applications, and drivers tailored for Ubuntu in the industrial sector may be limited compared to proprietary alternatives, necessitating custom development, adaptation, or interoperability solutions to address specific requirements and use cases effectively.
                  3. Risk of Disruption: Any disruption or downtime resulting from the adoption of Ubuntu in industrial operations can have significant consequences, including production delays, financial losses, and reputational damage, underscoring the importance of risk management, contingency planning, and phased deployment strategies to minimize impact and ensure business continuity.

                  The Industrial Outlook: Embracing Innovation with Ubuntu

As the industrial sector embraces digitalization, automation, and connectivity, Ubuntu emerges as a promising catalyst for driving innovation, efficiency, and competitiveness. By leveraging Ubuntu's promises, addressing challenges, and adopting best practices, industrial organizations can unlock new opportunities, optimize operations, and create value across the entire industrial value chain.

                  While Ubuntu may not be a panacea for all industrial challenges, its versatility, reliability, and open-source ethos position it as a strategic enabler for industrial transformation and growth. With careful planning, collaboration, and investment, industrial organizations can harness the full potential of Ubuntu to navigate the complexities of the digital age and thrive in an increasingly interconnected and dynamic industrial landscape.

                  The post Exploring the Adoption of Ubuntu in the Industrial Sector: A Comprehensive Analysis appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  16 May, 2024 06:23PM

                  Faizul "Piju" 9M2PJU: A Comprehensive Guide to Setting Up an Email Server with Custom Domain on Ubuntu

                  In today’s interconnected world, having control over your email communications is essential for privacy, security, and branding purposes. Setting up your own email server with a custom domain on Ubuntu provides you with full control over your email infrastructure and enhances your professional image. In this comprehensive guide, we’ll walk you through the process of setting up an email server on Ubuntu, complete with a custom domain name.

                  Prerequisites

                  Before we dive into the setup process, make sure you have the following prerequisites:

                  1. Ubuntu Server: Install Ubuntu Server on a dedicated machine or a virtual private server (VPS).
                  2. Static IP Address: Obtain a static IP address from your Internet Service Provider (ISP) to ensure reliable email delivery.
                  3. Domain Name: Register a domain name for your email server, such as yourdomain.com, through a domain registrar like Namecheap or GoDaddy.

                  Step 1: Install Required Software

                  Begin by installing the necessary software packages on your Ubuntu server:

                  sudo apt update
sudo apt install postfix dovecot-core dovecot-imapd dovecot-pop3d postfixadmin roundcube
                  • Postfix: A popular Mail Transfer Agent (MTA) used for sending and receiving emails.
                  • Dovecot: An IMAP and POP3 server for handling incoming email retrieval.
                  • PostfixAdmin: A web-based interface for managing mailboxes and domains.
                  • Roundcube: A webmail client for accessing email via a web browser.
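Note that PostfixAdmin and Roundcube are web applications backed by a database, so the later steps assume a web server and a MySQL/MariaDB server are also present. If they are not already installed, a minimal sketch:

sudo apt install apache2 mysql-server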

                  Step 2: Configure Postfix

                  During the installation process, you’ll be prompted to configure Postfix. Select “Internet Site” and enter your domain name when prompted.

                  Next, edit the main Postfix configuration file:

                  sudo nano /etc/postfix/main.cf

                  Update the following parameters:

                  myhostname = mail.yourdomain.com
                  mydomain = yourdomain.com
                  myorigin = $mydomain
                  inet_interfaces = all
                  inet_protocols = all

                  Restart Postfix to apply the changes:

                  sudo systemctl restart postfix
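As a quick sanity check after the restart, you can verify the configuration syntax and review the non-default settings:

sudo postfix check    # syntax-check the configuration
postconf -n           # list settings that differ from the built-in defaults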

                  Step 3: Configure Dovecot

                  Edit the Dovecot configuration file:

                  sudo nano /etc/dovecot/dovecot.conf

                  Make sure the following lines are uncommented or added:

                  protocols = imap pop3
                  listen = *

                  Restart Dovecot:

                  sudo systemctl restart dovecot
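To confirm Dovecot picked up the changes, a short check (the exact port list depends on your configuration):

sudo doveconf -n                 # dump the active, non-default configuration
sudo ss -tlnp | grep dovecot     # confirm IMAP (143) and POP3 (110) are listening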

                  Step 4: Configure MySQL Database for PostfixAdmin

                  Create a MySQL database and user for PostfixAdmin:

                  sudo mysql
                  CREATE DATABASE postfixadmin;
CREATE USER 'postfixadmin'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON postfixadmin.* TO 'postfixadmin'@'localhost';
                  FLUSH PRIVILEGES;
                  EXIT;

                  Import the initial database schema:

                  sudo mysql -u postfixadmin -p postfixadmin < /usr/share/postfixadmin/create_tables.mysql
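You can confirm the schema was imported correctly by listing the tables with the new credentials:

mysql -u postfixadmin -p -e 'SHOW TABLES;' postfixadmin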

                  Step 5: Configure PostfixAdmin

                  Edit the PostfixAdmin configuration file:

                  sudo nano /etc/postfixadmin/config.local.php

                  Update the database settings:

                  $CONF['configured'] = true;
                  $CONF['database_type'] = 'mysqli';
                  $CONF['database_host'] = 'localhost';
                  $CONF['database_user'] = 'postfixadmin';
                  $CONF['database_password'] = 'your_password';
                  $CONF['database_name'] = 'postfixadmin';

                  Restart Apache to apply the changes:

                  sudo systemctl restart apache2

                  Step 6: Set Up Virtual Domains and Mailboxes

Access PostfixAdmin in your web browser by navigating to http://your_server_ip/postfixadmin. On a fresh installation, you will typically need to run the bundled setup page (setup.php) first to create an administrator account, then log in with those credentials.

                  • Create virtual domains and mailboxes using the PostfixAdmin web interface.

                  Step 7: Configure Roundcube Webmail

                  Edit the Roundcube configuration file:

                  sudo nano /etc/roundcube/config.inc.php

                  Update the following parameters:

                  $config['default_host'] = 'ssl://mail.yourdomain.com';
                  $config['default_port'] = 993;
                  $config['smtp_server'] = 'tls://mail.yourdomain.com';
                  $config['smtp_port'] = 587;
                  $config['smtp_user'] = '%u';
                  $config['smtp_pass'] = '%p';

                  Restart Apache to apply the changes:

                  sudo systemctl restart apache2

                  Step 8: Configure DNS Records

                  Create the following DNS records for your domain:

                  • MX Record: Point to your server’s static IP address.
                  • A Record: Point mail.yourdomain.com to your server’s static IP address.
• TXT Record: Add SPF and DKIM records for email authentication (an illustrative sketch follows this list).
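For illustration only, the records above might look roughly like the zone entries below; 203.0.113.10 is a documentation address, so substitute your own static IP, and generate the DKIM record with your signing tool (for example OpenDKIM). You can check propagation afterwards with dig.

yourdomain.com.        IN  MX  10  mail.yourdomain.com.
mail.yourdomain.com.   IN  A       203.0.113.10
yourdomain.com.        IN  TXT     "v=spf1 mx -all"

dig +short MX yourdomain.com     # verify the records once DNS has propagated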

                  Step 9: Test Email Delivery

                  Send a test email to verify that your email server is set up correctly.
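A simple way to do this from the server itself, assuming the mailutils package is installed, is shown below; watch the mail log (or journalctl -u postfix) for the delivery result.

sudo apt install mailutils
echo "Test message from the new server" | mail -s "Mail server test" someone@example.org
sudo tail -f /var/log/mail.log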

                  Congratulations! You’ve successfully set up your own email server with a custom domain on Ubuntu. Now you can enjoy full control over your email communications and ensure privacy and security for yourself and your users. Happy emailing!

                  The post A Comprehensive Guide to Setting Up an Email Server with Custom Domain on Ubuntu appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  16 May, 2024 06:11PM

                  Faizul "Piju" 9M2PJU: Exploring Malware Analysis Using Open-Source Software: A Comprehensive Guide

                  Malware analysis is a critical aspect of cybersecurity, allowing security professionals to understand the behavior, functionality, and impact of malicious software. While commercial malware analysis tools are available, open-source software provides cost-effective alternatives for conducting malware analysis. In this comprehensive guide, we will explore malware analysis using open-source software on Ubuntu, covering tools, methodologies, and practical techniques.

                  Understanding Malware Analysis

                  Malware analysis involves dissecting and examining malicious software to gain insights into its functionality, purpose, and potential impact. There are several types of malware analysis, including:

                  1. Static Analysis: Examining the characteristics of malware without executing it, such as analyzing file headers, strings, and metadata.
                  2. Dynamic Analysis: Executing malware in a controlled environment (sandbox) to observe its behavior, interactions, and network activities.
                  3. Behavioral Analysis: Observing the actions and behaviors of malware during execution, such as file system modifications, registry changes, and network communications.
                  4. Code Analysis: Analyzing the underlying code and logic of malware to understand its functionality, algorithms, and potential vulnerabilities.

                  Open-Source Tools for Malware Analysis on Ubuntu

                  Ubuntu, a popular Linux distribution, provides a robust platform for malware analysis due to its stability, security features, and extensive package repositories. Here are some open-source tools commonly used for malware analysis on Ubuntu:

                  1. Static Analysis Tools:
                  • PEiD: Detects common packers and compilers used in Windows executable files.
                  • Radare2: A powerful reverse engineering framework for analyzing binary files, including executables, libraries, and firmware.
2. Dynamic Analysis Tools:
                  • Cuckoo Sandbox: An automated malware analysis platform that executes malware in a controlled environment and monitors its behavior.
                  • Docker: Containerization technology that provides lightweight, isolated environments for running malware samples without affecting the host system.
3. Network Analysis Tools:
                  • Wireshark: A popular network protocol analyzer for capturing and analyzing network traffic generated by malware.
                  • Bro (Zeek): A powerful network security monitoring tool that provides real-time analysis of network traffic and protocol detection.
4. Behavioral Analysis Tools:
                  • Volatility: A memory forensics framework for analyzing volatile memory (RAM) to extract information about running processes, network connections, and loaded modules.
                  • Sysinternals Suite (Wine): A collection of Windows utilities that can be run on Ubuntu using Wine, including Process Monitor, Autoruns, and Tcpview.

                  Malware Analysis Methodology on Ubuntu

                  Step 1: Obtain Malware Samples

                  Obtain malware samples from trusted sources or repositories for analysis. Exercise caution and ensure proper handling and containment to prevent accidental infections.

                  Step 2: Static Analysis

1. Use static analysis tools like PEiD and Radare2 to examine the structure, attributes, and metadata of malware files (a command-line triage sketch follows this list).
                  2. Identify characteristics such as file headers, import/export functions, embedded resources, and obfuscation techniques.
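Before loading a sample into a disassembler, a quick command-line triage with standard utilities is often useful; sample.bin is a placeholder name, and all of this should be done inside an isolated analysis VM.

sha256sum sample.bin              # fingerprint the sample for your report
file sample.bin                   # identify the file type and target platform
strings -n 8 sample.bin | less    # look for embedded URLs, paths, and commands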

                  Step 3: Dynamic Analysis

                  1. Set up a Cuckoo Sandbox environment on Ubuntu to automate malware analysis tasks.
                  2. Configure Cuckoo Sandbox to execute malware samples in isolated environments and monitor their behavior.
                  3. Analyze generated reports, including network traffic, file system modifications, process activity, and registry changes.

                  Step 4: Network Analysis

                  1. Capture and analyze network traffic using tools like Wireshark and Bro to observe malware communications and command-and-control (C2) activities.
                  2. Identify indicators of compromise (IOCs), such as IP addresses, domains, and network protocols associated with malware.

                  Step 5: Behavioral Analysis

1. Use memory forensics tools like Volatility to analyze volatile memory (RAM) for artifacts and evidence of malware activity (a minimal usage sketch follows this list).
                  2. Extract information about running processes, open network connections, loaded modules, and system state from memory dumps.
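A minimal sketch of this step using Volatility 2 syntax; the dump file name and the Win7SP1x64 profile are assumptions chosen for illustration.

volatility -f memdump.raw --profile=Win7SP1x64 pslist    # list processes found in memory
volatility -f memdump.raw --profile=Win7SP1x64 netscan   # recover network connections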

                  Step 6: Code Analysis

                  1. Reverse engineer malware using tools like Radare2 to analyze its underlying code, algorithms, and functionality.
                  2. Disassemble and decompile malware binaries to understand their logic, control flow, and anti-analysis techniques.

                  Practical Techniques for Malware Analysis on Ubuntu

                  1. Isolation: Conduct malware analysis in isolated environments, such as virtual machines or containers, to prevent unintended consequences and contamination.
                  2. Monitoring: Monitor system resources, network traffic, and process activity during malware analysis to detect anomalies and suspicious behavior.
                  3. Documentation: Document analysis findings, observations, and IOCs in detailed reports for future reference and knowledge sharing.
                  4. Collaboration: Engage with the cybersecurity community, participate in forums, and share insights and findings to enhance collective knowledge and expertise.

                  Conclusion

                  Malware analysis using open-source software on Ubuntu provides a cost-effective and powerful approach to understanding and combating malicious software. By leveraging a combination of static analysis, dynamic analysis, network analysis, behavioral analysis, and code analysis techniques, security professionals can gain valuable insights into malware behavior, tactics, and techniques. By following best practices and utilizing open-source tools, organizations can enhance their cybersecurity defenses and protect against evolving threats in today’s digital landscape.

                  The post Exploring Malware Analysis Using Open-Source Software: A Comprehensive Guide appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  16 May, 2024 05:47PM

                  Faizul "Piju" 9M2PJU: Building Your Own Security Operations Center (SOC) Using Open-Source Software: A Comprehensive Guide

                  In today’s digital landscape, organizations face an ever-increasing number of cyber threats. Establishing a Security Operations Center (SOC) is crucial for proactively monitoring, detecting, and responding to these threats. While building a SOC may seem daunting, leveraging open-source software can provide cost-effective solutions without compromising on security. This comprehensive guide will walk you through the process of setting up your own SOC using open-source software, covering methodology, software selection, installation, and configuration.

                  Understanding Security Operations Center (SOC) Methodology

                  A SOC is a centralized unit responsible for continuously monitoring and analyzing an organization’s security posture. It employs people, processes, and technology to identify and respond to security incidents effectively. The key components of SOC methodology include:

                  1. Threat Detection: Proactively monitor networks, systems, and applications to detect security events and anomalies.
                  2. Incident Response: Develop and implement incident response procedures to investigate and mitigate security incidents promptly.
                  3. Log Management: Collect, aggregate, and analyze log data from various sources to identify security-related events and patterns.
                  4. Vulnerability Management: Identify, prioritize, and remediate vulnerabilities in the organization’s infrastructure and applications.
                  5. Threat Intelligence: Utilize threat intelligence feeds and sources to enhance threat detection and response capabilities.
                  6. Compliance Management: Ensure compliance with regulatory requirements and industry standards through continuous monitoring and reporting.

                  Selecting Open-Source Software for Your SOC

                  When building a SOC using open-source software, it’s essential to select tools that meet your organization’s specific requirements and objectives. Here are some key categories of open-source software to consider:

                  1. SIEM (Security Information and Event Management): SIEM platforms aggregate and correlate security events from various sources to provide centralized monitoring and analysis. Examples include:
                  • Elastic Stack (Elasticsearch, Logstash, Kibana): Elastic Stack offers a versatile platform for log management, search, and visualization.
                  • Graylog: Graylog is a scalable and user-friendly SIEM solution with powerful log management and analysis capabilities.
2. IDS/IPS (Intrusion Detection/Prevention System): IDS/IPS systems monitor network traffic for signs of malicious activity and can block or alert on suspicious behavior. Examples include:
                  • Suricata: Suricata is a high-performance IDS/IPS capable of real-time traffic inspection and signature-based detection.
                  • Snort: Snort is an open-source network intrusion detection system known for its flexibility and extensive rule sets.
3. Vulnerability Management: Vulnerability management tools assess and prioritize security vulnerabilities in the organization’s infrastructure and applications. Examples include:
                  • OpenVAS (Open Vulnerability Assessment System): OpenVAS is a comprehensive vulnerability scanner with a large database of known vulnerabilities.
• Nessus Essentials: Nessus Essentials is a widely used vulnerability scanner; note that it is free to use but proprietary rather than open source, with paid tiers for larger environments.
4. Endpoint Detection and Response (EDR): EDR solutions monitor endpoint devices for signs of compromise and provide real-time threat detection and response capabilities. Examples include:
                  • Osquery: Osquery allows for real-time monitoring and querying of endpoint data for security purposes.
                  • Wazuh: Wazuh is an open-source EDR platform that integrates host-based intrusion detection, log analysis, and file integrity monitoring.

                  Installation and Configuration of Open-Source SOC Software

                  Step 1: Setting Up the Infrastructure

                  1. Server Setup: Deploy Ubuntu Server or another preferred Linux distribution on dedicated hardware or virtual machines to host SOC software components.
                  2. Network Configuration: Ensure proper network connectivity and firewall rules to allow communication between SOC components and monitored assets.

                  Step 2: Installing SIEM (Elastic Stack Example)

1. Install Elasticsearch: Follow the official documentation to install and configure Elasticsearch for centralized log storage (a repository-setup sketch follows this list).
                  2. Install Logstash: Install and configure Logstash to ingest, parse, and enrich log data from various sources.
                  3. Install Kibana: Install Kibana for data visualization, dashboard creation, and ad-hoc querying of log data.
                  4. Configure Beats: Deploy Beats (e.g., Filebeat, Metricbeat) on endpoints and servers to collect and ship log data to Elasticsearch.
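
A minimal sketch of these four steps on Ubuntu Server, assuming the Elastic 8.x APT repository (verify the current signing key and repository paths in the official Elastic documentation before copying):

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update && sudo apt install elasticsearch kibana logstash filebeat
sudo systemctl enable --now elasticsearch kibana filebeat   # enable Logstash once its pipelines are defined

Filebeat is then pointed at Elasticsearch (or Logstash) in /etc/filebeat/filebeat.yml and rolled out to the systems whose logs you want to collect.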

                  Step 3: Deploying IDS/IPS (Suricata Example)

1. Install Suricata: Install Suricata using the package manager or from source, and configure it to monitor network traffic.
                  2. Create Rule Sets: Configure Suricata with custom or community-provided rule sets to detect known threats and suspicious behavior.
                  3. Set Up Logging: Integrate Suricata with Elasticsearch or another logging solution to centralize and analyze IDS alerts.
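
A hedged example of these steps on Ubuntu: install the packaged Suricata, pull the free Emerging Threats Open rules with suricata-update, and ship its EVE JSON output to the SIEM with Filebeat’s Suricata module. Interface names and file paths below are typical defaults rather than guarantees.

sudo apt install suricata   # suricata-update ships with recent packages; install it separately if missing
sudo suricata-update   # fetch and enable the ET Open rule set
sudo suricata -T -c /etc/suricata/suricata.yaml -v   # test the configuration before going live
sudo systemctl enable --now suricata   # monitors the interface set in the af-packet section of suricata.yaml
sudo filebeat modules enable suricata   # forwards /var/log/suricata/eve.json alerts into Elasticsearch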

                  Step 4: Implementing Vulnerability Management (OpenVAS Example)

                  1. Install OpenVAS: Install and configure OpenVAS according to the official documentation to perform vulnerability scans.
                  2. Schedule Scans: Set up recurring vulnerability scans for critical assets and networks to identify and prioritize security vulnerabilities.
                  3. Interpret Results: Review scan results, prioritize vulnerabilities based on severity and impact, and develop remediation plans.
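
Packaging of the Greenbone/OpenVAS components varies considerably between distributions, so treat the sketch below as an outline rather than exact commands: it assumes the gvm metapackage and setup scripts used by Debian-based security distributions, and Greenbone’s Community Edition containers are a common alternative route.

sudo apt install gvm   # Greenbone Vulnerability Management metapackage (scanner, manager, web UI)
sudo gvm-setup   # synchronises the vulnerability feeds and creates the admin user; the first sync is slow
sudo gvm-check-setup   # verifies that the scanner, manager and web interface are all running

Recurring scans are then scheduled as tasks against target groups from the Greenbone Security Assistant web interface.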

                  Step 5: Deploying Endpoint Detection and Response (Wazuh Example)

                  1. Install Wazuh Manager: Install and configure the Wazuh manager to centralize log data, manage agents, and perform real-time analysis.
                  2. Deploy Wazuh Agents: Deploy Wazuh agents on endpoints and servers to collect security-relevant data and report back to the Wazuh manager.
                  3. Configure Rules: Customize Wazuh rules and policies to detect and respond to specific security events and threats.
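
A rough sketch of a single-node deployment using Wazuh’s assisted-installation script; the version directory 4.7 and the manager address 203.0.113.10 are placeholders, and the agent step assumes the Wazuh APT repository has already been added on the endpoint.

curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
sudo bash wazuh-install.sh -a   # all-in-one: indexer, manager and dashboard on a single host
sudo WAZUH_MANAGER="203.0.113.10" apt install wazuh-agent   # on each endpoint, enrol the agent against the manager
sudo systemctl enable --now wazuh-agent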

                  Conclusion

                  Establishing a Security Operations Center (SOC) using open-source software provides organizations with cost-effective and flexible solutions for monitoring, detecting, and responding to security threats. By following the methodology outlined in this guide and selecting appropriate open-source tools for each SOC component, organizations can build robust security operations capabilities tailored to their specific needs and objectives. Continuous monitoring, analysis, and improvement are essential to maintaining an effective SOC and staying ahead of evolving cyber threats in today’s dynamic threat landscape.

                  The post Building Your Own Security Operations Center (SOC) Using Open-Source Software: A Comprehensive Guide appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  16 May, 2024 05:42PM

                  hackergotchi for GreenboneOS

                  GreenboneOS

                  April 2024 Threat Tracking: Record High For Security Vulnerabilities

April 2024 has delivered another record-breaking month for CVE disclosures, on top of the last. In this month’s threat tracking report we investigate several newly and actively exploited vulnerabilities and briefly review the cyber breach of US R&D giant MITRE. The report also examines how end-of-life (EOL) products can have a detrimental impact on an organization’s cybersecurity posture and how to manage the associated risks.

                  MITRE Exploited Via Ivanti Secure Connect Vulnerabilities

The MITRE Corporation is a not-for-profit organization established in 1958 that operates multiple federally funded research and development centers (FFRDCs) supporting US national defense, cybersecurity, healthcare, aviation, and more. MITRE also maintains several core cybersecurity frameworks such as MITRE ATT&CK and D3FEND, and vulnerability resources including the Common Vulnerabilities and Exposures (CVE) database, the Common Weakness Enumeration (CWE), and the Common Attack Pattern Enumeration and Classification (CAPEC).

A recent cyber breach of MITRE shows that even the most cyber-savvy organizations are not immune to targeted attacks from Advanced Persistent Threats (APTs). Initial access to one of MITRE’s research networks was gained via two Ivanti Connect Secure VPN service vulnerabilities: CVE-2023-46805 (CVSS 8.2) and CVE-2024-21887 (CVSS 9.1). We previously published a full description of these vulnerabilities, both of which can be detected by Greenbone’s vulnerability tests. After initial access, attackers were able to pivot to adjacent VMware infrastructure [TA0109] using stolen session tokens [T1563] to bypass multi-factor authentication and access admin accounts.

If it can happen to MITRE, it can happen to any organization. Patching known actively exploited vulnerabilities is a critical cybersecurity activity that every organization needs to place strong emphasis on.

Operation MidnightEclipse: Exploited Palo Alto Zero-Day

On April 10, 2024, exploitation of a previously unknown zero-day vulnerability in the GlobalProtect feature of Palo Alto PAN-OS was detected and reported by researchers at the cybersecurity firm Volexity. The vulnerability, now tracked as CVE-2024-3400 (CVSS 10), allows unauthenticated remote code execution (RCE) with root privileges and has been added to the CISA KEV (Known Exploited Vulnerabilities) catalog. The Greenbone enterprise vulnerability feed includes tests to detect CVE-2024-3400, allowing organizations to identify affected assets and plan remediation.

Palo Alto’s Unit 42 is tracking subsequent attacks under the name Operation MidnightEclipse and, along with the Shadowserver Foundation and GreyNoise, has observed everything from simple probes to full exploitation followed by data exfiltration and the installation of remote command and control (C2) tools. Several proof-of-concept (PoC) exploits have also been publicly disclosed [1][2] by third parties, extending the threat by enabling attacks from low-skilled cyber criminals.

CVE-2024-3400 affects PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 firewalls configured with a GlobalProtect gateway or GlobalProtect portal. Hotfix patches PAN-OS 10.2.9-h1, PAN-OS 11.0.4-h1, and PAN-OS 11.1.2-h3 are currently available to remediate affected devices without requiring a restart. A comprehensive guide to remediation is available in the Palo Alto Knowledge Base.

                  D-Link End-Of-Life Products Exploited Via Hardcoded Credentials

Two critical vulnerabilities have been discovered in NAS devices manufactured by D-Link, tracked as CVE-2024-3272 (CVSS 9.8) and CVE-2024-3273 (CVSS 9.8). The impacted devices include the DNS-320L, DNS-325, DNS-327L, and DNS-340L, all of which have reached the end of their support lifecycle. According to D-Link, patches will not be provided. Both CVEs are being actively exploited, and a proof-of-concept (PoC) exploit for CVE-2024-3273 is available online. Globally, an estimated 92,000 devices are affected.

Vulnerable devices all contain a default administration account that does not require a password. Attackers can execute commands remotely by sending a specially crafted HTTP GET request to the /cgi-bin/nas_sharing.cgi URI on the NAS web interface. Combined, the two vulnerabilities pose a severe risk, as they allow unauthenticated root remote code execution (RCE) on the target device [T1584]. This gives attackers not only access to potentially sensitive data [TA0010] stored on the compromised NAS device itself, but also a foothold on the victim’s network from which to attempt lateral movement [TA0008] to other systems, or to launch attacks globally as part of a botnet [T1584.005].

                  Securing End-Of-Life (EOL) Digital Products

                  End-of-life (EOL) digital products demand special security considerations due to discontinued vendor support. Here are some defensive tactics for protecting EOL digital products:

1. Risk Assessment: Conduct regular risk assessments to identify the potential impact of legacy devices on your organization, especially considering that newly disclosed vulnerabilities may never receive vendor-provided remediation.
                  2. Vulnerability and Patch Management: Although EOL products may be officially unsupported by their vendors, in some emergency cases, patches are still issued. Vulnerability scanning and patch management help identify new vulnerabilities and allow defenders to seek guidance from the vendor on remediation options.
                  3. Isolation and Segmentation: If possible, isolate EOL products from the rest of the network to limit their exposure to potential threats. Segmenting these devices can help contain security breaches and prevent them from affecting other systems.
                  4. Harden Configuration and Policies: In some cases, additional policies or security measures such as removing Internet access altogether are appropriate to further mitigate risk.
                  5. Update to Supported Products: Update IT infrastructure to replace EOL products with supported alternatives. Transitioning to newer technologies can enhance security posture and reduce the reliance on outdated systems.
6. Monitoring and Detection: Implement additional monitoring and detection mechanisms to detect suspicious activity, exploitation attempts, or unauthorized access to EOL products. Continuous monitoring can help identify malicious activity promptly and allow appropriate responses.

                  CVE-2024-4040 CrushFTP VFS Sandbox Escape Vulnerability

CISA has issued an order for all US federal government agencies to patch systems running the CrushFTP service due to active exploitation by politically motivated hackers. Tracked as CVE-2024-4040 (CVSS 9.8), the vulnerability allows an unauthenticated attacker to access sensitive data outside of CrushFTP’s Virtual File System (VFS) and achieve full system compromise. The vulnerability stems from a failure to correctly authorize commands issued via the CrushFTP API [CWE-1336].

                  CrushFTP is a proprietary file transfer software designed for secure file transfer and file sharing. It supports a wide range of protocols, including FTP, SFTP, FTPS, HTTP, HTTPS, WebDAV, and more. The vulnerability lies in CrushFTP’s Java web-interface API for administering and monitoring the CrushFTP server.

CrushFTP has said there is no way to identify a compromised instance by inspecting the application logs. CVE-2024-4040 has turned out to be trivial to exploit, and publicly available exploits exist, greatly increasing the risk. Greenbone’s Enterprise feed includes a vulnerability test that identifies the HTTP header sent by vulnerable versions of CrushFTP.

                  There are an estimated 6,000 publicly exposed instances of CrushFTP in the US alone and over 7,000 public instances globally. CVE-2024-4040 impacts all versions of the application before 10.7.1 and 11.1.0 on all platforms, and customers should upgrade to a patched version with urgency.

                  Summary

April 2024 was a record-breaking month for CVE disclosures and new cybersecurity challenges, including several high-profile incidents. Ivanti Connect Secure VPN vulnerabilities were used to gain unauthorized access to MITRE’s development infrastructure, leading to internal network attacks.

Various politically motivated threat actors were observed exploiting a zero-day vulnerability in Palo Alto’s PAN-OS, now tracked as CVE-2024-3400, and two new critical vulnerabilities in EOL D-Link NAS devices highlight the need for extra security when legacy products must remain in active service. Also, a critical vulnerability in the CrushFTP server was found and quickly added to the CISA KEV catalog, forcing US government agencies to patch with urgency.

                  16 May, 2024 02:09PM by Joseph Lee

                  hackergotchi for Tails

                  Tails

                  Tails 6.3

                  Changes and updates

                  • Add back translations in Romanian and Malayalam.

                  • Update Tor Browser to 13.0.15.

                  • Disable temporarily the PDF reader of Thunderbird to protect from a security vulnerability related to the handling of fonts in PDF files. (CVE-2024-4367)

                    You can still open PDF files in the Document Viewer from Thunderbird.

                  • Make Restart later the default button, instead of Restart now, at the end of an automatic upgrade.

                  Fixed problems

                  • Fix the configuration of new printers when some printers were already configured in the Persistent Storage in Tails 5.23 or earlier. (#20271)

                  • Remove the long delay between the Welcome Screen and the GNOME desktop when MAC address anonymization fails. (#17813)

                  For more details, read our changelog.

                  Get Tails 6.3

                  To upgrade your Tails USB stick and keep your Persistent Storage

                  • Automatic upgrades are available from Tails 6.0 or later to 6.3.

                    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

                  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

                  To install Tails 6.3 on a new USB stick

                  Follow our installation instructions:

                  The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

                  To download only

                  If you don't need installation or upgrade instructions, you can download Tails 6.3 directly:

                  16 May, 2024 11:29AM

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Faizul "Piju" 9M2PJU: Discover the Future of Technology at Mini UbuCon Malaysia 2024

                  Linux stands as one of the most reliable and versatile general-purpose operating systems available today. Among its various distributions, Ubuntu shines brightly, gaining widespread adoption and recognition across the globe. For those venturing into the world of cybersecurity, Ubuntu is particularly suitable, offering an ideal environment for learning and mastering Linux-based systems.

                  This year, in a commendable initiative to support the local Ubuntu Malaysia Community, Siber Siaga and the CyberDSA team have partnered with the Ubuntu Malaysia LoCo Team to bring you the much-anticipated Mini UbuCon Malaysia 2024.

                  Why Attend Mini UbuCon Malaysia 2024?

• Stay Ahead with Cutting-Edge Technology: Dive deep into the latest advancements Ubuntu has to offer. Whether you are a seasoned professional or a beginner, this conference provides invaluable insights into the evolving landscape of Ubuntu and its applications in cybersecurity.
• Network with Experts and Enthusiasts: Meet and connect with like-minded individuals, industry experts, and key contributors to the Ubuntu community. This is a unique opportunity to expand your network and collaborate with others who share your passion for open-source technology.
• Support the Ubuntu Malaysia Community: The local Ubuntu Malaysia Community plays a crucial role in the ongoing development and support of this versatile operating system. Your participation helps bolster this community and ensures the continuous growth and innovation of Ubuntu.

                  Event Details

                  • Event: Mini UbuCon Malaysia 2024
                  • Dates: 6 to 8 August 2024
                  • Location: Kuala Lumpur Convention Centre
                  • Organizers: Siber Siaga, CyberDSA, and the Local Community of Ubuntu Malaysia (Ubuntu Malaysia LoCo Team)
                  • Highlights: Latest technology updates, expert sessions, networking opportunities

                  Registration Now Open!

                  Don’t miss your chance to be a part of this exciting event. Register now to secure your spot and stay updated with the latest information.

                  👉 Register Here

                  Join us at Mini UbuCon Malaysia 2024 from 6 to 8 August at the Kuala Lumpur Convention Centre and be a part of the future of technology. Whether you are looking to enhance your skills, learn about the latest innovations, or simply support the Ubuntu community, this conference is the place to be.

                  The post Discover the Future of Technology at Mini UbuCon Malaysia 2024 appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  16 May, 2024 07:26AM

                  hackergotchi for VyOS

                  VyOS

                  VyOS is featured in GigaOm Radar reports for network operating systems

                  Hello, Community!
                  This year, VyOS is featured in GigaOm Radar reports on disaggregated network operating systems again, this time as a challenger and outperformer. Let us discuss that in more detail, and remember that we are happy to share the reports with anyone interested — let us know and we will send them to you!

                  16 May, 2024 04:59AM by Yuriy Andamasov (yuriy@sentrium.io)

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Podcast Ubuntu Portugal: E299 Budo Dos Dados Abertos, Com Frederico Muñoz

This week we got to know a GNU/Linux user of elder-guru level, who built his career thanks to Free Software. Besides being a CNCF Ambassador, he creates interesting Free Software in his spare time, the best-known example being a system for analysing the relative positioning of political parties in the votes of the Assembleia da República.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal. You can get all of this for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing to allow other types of use; contact us for validation and authorisation.

                  16 May, 2024 12:00AM

                  May 15, 2024

                  Ubuntu Blog: Canonical at Dell Technologies World 2024

                  Canonical, the company behind Ubuntu and the trusted source for open source software, is thrilled to announce its sponsorship of Dell Technologies World again this year. Join us in Las Vegas from May 20th to the 23rd to explore how Canonical and Dell can elevate your business with state-of-the-art technologies for Cloud, AI and the Edge, while ensuring security every step of the way.

                  Register to Dell Technologies World 2024

                  What to anticipate from Canonical at DTW

                  Celebrating 20 Years and Introducing the Latest Ubuntu LTS

                  In celebration of our 20th anniversary, we’re excited to showcase several of our cutting-edge solutions at DTW. Alongside the launch of the newest Long-Term Support (LTS) version of Ubuntu, Noble Numbat, dive into other solutions we’ll be highlighting in collaboration with Dell and one of our newest strategic partners, Morpheus:

                  PowerFlex + MicroCloud: Discover how the tight integration between MicroCloud and Dell PowerFlex can streamline and provide a solid foundation for your cloud journey.

                  Ubuntu-based NativeEdge: Uncover why Dell selected Ubuntu as the foundational OS for the NativeEdge platform and learn how together Canonical and Dell simplify edge computing.

                  Enterprise Open Source AI: Unlock the full potential of your Dell hardware and build enterprise-grade AI projects with our comprehensive, flexible and composable open source platform.

                  VMware Alternatives – available today: Reduce TCO with compelling new solutions from Dell and Canonical that provide seamless migration from any size VMware-based cloud to open source clouds with Morpheus integrated for multicloud management.

                  Speaking Sessions at the Canonical Booth

                  Visit our booth and attend sessions led by industry experts covering a range of Open Source solutions. Plus, all attendees will receive a complimentary Ubuntu backpack!

                  From Infrastructure to Apps – Securing Your Open Source Environment

                  Come hear Canonical’s approach to securing and supporting open source, from infrastructure to modern applications, and discover why we are the trusted source for open source.

                  PowerFlex and MicroCloud: Cloud infrastructure at its best

                  MicroCloud is a lightweight open source cloud aimed at simplifying your cloud deployment and operations. Combined with Dell PowerFlex efficiency and performance, you have a winning combination for your journey to cloudification. Join this session to learn more.

                  AI GPU infrastructure optimisation

Gain insights from real-world projects and learn how to optimize the usage of 10k A100 GPUs across 20 on-prem Kubernetes clusters. Explore hardware-level strategies such as NVIDIA MIG, swapping the default Kubernetes scheduler for Volcano, and using PaddlePaddle for smarter job distribution.

                  Open source data and AI

                  Explore how open-source tools streamline the data and ML lifecycle, enabling end-to-end solutions for DataOps and MLOps. Learn about seamless integration with platforms like OpenSearch, Kubeflow, and MLFlow, empowering organizations to tackle complex challenges.

                  Enable Hybrid Cloud Platform Operations with Morpheus and Canonical

                  With the current virtualization challenges, enterprises are looking for alternatives. Discover how to swiftly enable 100% agnostic Hybrid Cloud PlatformOps with Morpheus, migrating to a cost-effective, fully featured Open Source private cloud with Canonical OpenStack.

                  Canonical | Ubuntu and Dell

                  Canonical and Dell Technologies have collaborated to create a series of feature-rich reference architectures applicable across industries, delivering superior customer experiences and value. Additionally, we offer professional services including consulting, deployment, and managed services to ensure a seamless transition to production.

                  Customers can access Canonical support subscriptions, services, and solutions for cloud, Kubernetes, AI/ML, storage, and edge directly from Dell, providing a single source for all Dell customers.

                  Learn more about our offerings and how Canonical and Dell can propel your business forward.

                  Getting to the Event

                  Join the Canonical team at DTW to discuss how to take advantage of solutions that capitalize on the benefits of Open Source software with security and compliance.

                  Location
                  The Venetian Convention Center
                  201 Sands Ave, Las Vegas, NV 89169, United States

                  Dates
                  May 20 – 23

                  Hours
                  Monday, 6 PM – 9 PM PT
                  Tuesday-Wednesday, 10 AM – 5 PM PT

                  You can meet with the Canonical team on-site in Las Vegas and pick our technical experts’ brains about your particular open source scenario.

                  Register to Dell Technologies World 2024

                  Want to learn more?
Please stop by booth 208 to speak to our experts and check out our demos.

                  Are you interested in setting up a meeting with our team?
                  Reach out to our Alliances team at partners@canonical.com

                  15 May, 2024 04:32PM

                  Faizul "Piju" 9M2PJU: Exploring Ubuntu Touch: A Comprehensive Guide

                  Introduction:
                  Ubuntu Touch, a mobile operating system developed by Canonical, offers a unique experience in the world of smartphones and tablets. From its intriguing history to its vibrant community and growing list of supported devices, Ubuntu Touch continues to evolve, offering users a secure, open-source alternative to mainstream mobile platforms. Let’s delve into the depths of Ubuntu Touch to uncover its journey, development, community, supported devices, and its current version.

                  History:
                  The story of Ubuntu Touch dates back to 2011 when Canonical, the company behind the Ubuntu operating system, announced its ambitious goal to create a unified platform for desktops, tablets, and smartphones. Development began in earnest, leveraging Ubuntu’s robust Linux-based architecture and the Unity interface. Initial versions focused on creating a responsive and intuitive user experience, with a strong emphasis on convergence – the ability to seamlessly transition between different form factors.

                  Development:
                  Ubuntu Touch underwent several iterations, with Canonical releasing developer previews and betas to gather feedback and refine the platform. Despite facing challenges and setbacks, the development community remained dedicated, contributing code, testing builds, and providing support. Canonical’s decision to shift focus away from smartphone and tablet development in 2017 led to the community-driven UBports project taking over maintenance and further development of Ubuntu Touch. This transition breathed new life into the platform, fostering innovation and community collaboration.

                  Community:
                  The Ubuntu Touch community is a vibrant ecosystem of developers, enthusiasts, and users passionate about open-source software and mobile technology. Through forums, IRC channels, and social media groups, community members exchange ideas, troubleshoot issues, and collaborate on improving the platform. UBports organizes regular events, hackathons, and development sprints to foster community engagement and drive the evolution of Ubuntu Touch. This grassroots movement ensures that Ubuntu Touch remains true to its open-source roots while embracing the diverse needs of its users.

                  Supported Devices:
                  One of the strengths of Ubuntu Touch lies in its versatility and compatibility with a wide range of devices. Thanks to the efforts of the UBports community, Ubuntu Touch is available on various smartphones and tablets from manufacturers such as Fairphone, OnePlus, and Sony. The UBports Installer simplifies the process of installing Ubuntu Touch on supported devices, allowing users to experience the platform firsthand without extensive technical knowledge. With each new release, the list of supported devices continues to grow, showcasing the adaptability and resilience of the Ubuntu Touch ecosystem.

                  Conclusion:
                  Ubuntu Touch represents a compelling alternative in the competitive landscape of mobile operating systems. From its humble beginnings to its flourishing community and expanding device support, Ubuntu Touch continues to defy expectations and carve out its niche in the mobile world. Whether you’re a developer looking to contribute to an open-source project or a user seeking a secure and customizable mobile experience, Ubuntu Touch offers a platform ripe for exploration and innovation. Join the Ubuntu Touch community today and embark on a journey towards a mobile future built on freedom, transparency, and collaboration.

                  The post Exploring Ubuntu Touch: A Comprehensive Guide appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:44PM

                  Faizul "Piju" 9M2PJU: Unveiling Xiaomi HyperOS: A New Era of Innovation in Android Firmware


                  In the realm of Android firmware, Xiaomi has long been a trailblazer, with its MIUI setting a benchmark for innovation and customization. Now, Xiaomi enthusiasts and users find themselves at the dawn of a new era with the introduction of Xiaomi HyperOS. A successor to MIUI, HyperOS promises to redefine the Android experience with enhanced performance, streamlined design, and a host of exciting features. Let’s embark on a journey to explore the history, evolution, and potential future of Xiaomi HyperOS.

                  History and Evolution:
                  Xiaomi’s journey into the realm of custom Android firmware began with MIUI, which debuted in 2010 alongside the release of the Xiaomi Mi 1 smartphone. MIUI quickly garnered a dedicated following for its sleek design, feature-rich interface, and regular updates. Over the years, MIUI underwent numerous iterations, incorporating user feedback and technological advancements to refine the user experience.

As the Android ecosystem evolved, so did the demands of users. Enter Xiaomi HyperOS, born out of a desire to push the boundaries of what’s possible with Android firmware. Developed by Xiaomi as the successor to MIUI, drawing on years of MIUI development experience, HyperOS represents a fresh approach to the Android experience, combining performance optimization with user-centric design.

                  Performance and User Experience:
                  One of the hallmarks of Xiaomi HyperOS is its focus on performance optimization. Built upon a foundation of efficient code and system-level tweaks, HyperOS delivers snappy responsiveness and smooth multitasking capabilities. Whether navigating through apps, gaming, or multitasking, users can expect a fluid and seamless experience that maximizes the capabilities of their Xiaomi devices.

                  In terms of user experience, HyperOS maintains a balance between simplicity and customization. The interface is clean and modern, with intuitive navigation and a visually appealing design language. At the same time, HyperOS offers a wide range of customization options, allowing users to personalize their devices to suit their preferences.

                  Pros and Cons:
                  Like any firmware, Xiaomi HyperOS comes with its own set of strengths and weaknesses.

                  Pros:

                  1. Enhanced Performance: HyperOS prioritizes performance optimization, ensuring a smooth and responsive user experience.
                  2. Streamlined Design: The interface is clean, modern, and intuitive, enhancing usability.
                  3. Customization Options: HyperOS offers a wide range of customization options, allowing users to tailor their devices to their preferences.
                  4. Community-Driven Development: HyperOS benefits from the input and contributions of a dedicated community of enthusiasts and developers.

                  Cons:

                  1. Limited Device Support: As a new firmware, HyperOS may initially have limited device support compared to established alternatives.
                  2. Learning Curve: While HyperOS offers extensive customization options, navigating and configuring these features may require some learning for new users.
                  3. Potential Stability Issues: As with any new firmware, early releases of HyperOS may encounter stability issues or bugs that need to be addressed through updates.

                  Devices Supported:
                  Xiaomi HyperOS initially targets a select range of Xiaomi devices, with plans to expand support to additional models in the future. Supported devices may include flagship offerings such as the Xiaomi Mi series, as well as popular mid-range and budget devices from Xiaomi’s lineup.

                  Future Features:
                  Looking ahead, Xiaomi HyperOS holds promise for further innovation and feature enhancements. Future updates may introduce improvements in areas such as artificial intelligence integration, privacy features, and compatibility with emerging technologies. Additionally, expanded device support and partnerships with hardware manufacturers could broaden HyperOS’s reach and appeal to a wider audience.

                  Conclusion:
                  Xiaomi HyperOS represents a new chapter in the evolution of Android firmware, building upon the foundation laid by MIUI while charting its own path forward. With a focus on performance, user experience, and community-driven development, HyperOS aims to redefine the Android experience for Xiaomi users. As it continues to evolve and mature, HyperOS holds the potential to become a compelling choice for users seeking a balance between performance, customization, and innovation in their Android firmware.

                  The post Unveiling Xiaomi HyperOS: A New Era of Innovation in Android Firmware appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:36PM

                  Faizul "Piju" 9M2PJU: Exploring the Vibrant Culture of Android Custom ROMs: A Journey through History, Community, and Innovation

                  In the ever-evolving world of Android, custom ROMs stand out as a testament to the community’s ingenuity and passion for personalization. These alternative operating system distributions have a rich history, evolving from humble beginnings to becoming a cornerstone of Android enthusiast culture. Let’s embark on a journey through the past, present, and thriving community surrounding Android custom ROMs.

                  Custom ROMs emerged in the early days of Android as a response to the limitations imposed by manufacturers and carriers. Android’s open-source nature allowed developers to tinker with the code, leading to the creation of custom firmware tailored to specific devices. In the early 2010s, ROMs like CyanogenMod gained popularity for offering enhanced performance, additional features, and the latest Android updates to devices that were often left behind by manufacturers.

                  As the community grew, so did the diversity of ROMs. Projects like Paranoid Android, MIUI, and LineageOS (the successor to CyanogenMod) emerged, each with its unique features and design philosophies. Custom ROM development became a playground for innovation, with developers experimenting with everything from performance tweaks to entirely new user interfaces.

                  Today, the custom ROM scene remains vibrant, catering to a diverse range of users seeking to push the boundaries of their Android devices. While mainstream manufacturers continue to dominate the market, custom ROMs offer an alternative for users who crave more control over their user experience.

                  One of the driving forces behind the popularity of custom ROMs is their ability to breathe new life into older devices. Devices that have reached the end of their official support lifecycle can often continue to receive updates and feature enhancements through custom ROMs, extending their usability for years beyond what manufacturers intended.

                  At the heart of the custom ROM culture lies a passionate and tightly-knit community of developers, testers, and enthusiasts. Online forums, such as XDA Developers, serve as hubs for collaboration and knowledge sharing, where developers exchange ideas, troubleshoot issues, and distribute their creations to eager users.

                  The collaborative nature of the community fosters innovation and ensures that even niche devices receive attention from developers. It’s not uncommon to find custom ROMs available for obscure devices that mainstream manufacturers have long forgotten.

                  While the custom ROM scene encompasses a vast array of devices and projects, certain devices and ROMs have garnered particular attention and acclaim from enthusiasts.

                  Some of the top devices for custom ROM enthusiasts include Google’s Pixel lineup, OnePlus devices, and various offerings from Xiaomi, Samsung, and Asus. These devices often boast strong developer support, making them ideal candidates for those looking to dive into the world of custom ROMs.

                  As for custom ROMs themselves, LineageOS remains one of the most popular and widely supported projects, offering a clean and near-stock Android experience across a broad range of devices. Other notable ROMs include Paranoid Android for its innovative features, Resurrection Remix for its customization options, and Pixel Experience for its focus on delivering the Pixel’s software experience to non-Google devices.

                  The culture of Android custom ROMs is a testament to the power of community-driven innovation. What began as a grassroots movement to liberate Android devices from the constraints of manufacturers has evolved into a thriving ecosystem of creativity and exploration. As long as there are passionate developers and users seeking to push the boundaries of what’s possible with their Android devices, the custom ROM culture will continue to thrive, driving innovation and personalization in the Android ecosystem.

                  The post Exploring the Vibrant Culture of Android Custom ROMs: A Journey through History, Community, and Innovation appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:27PM

                  Faizul "Piju" 9M2PJU: Can Ubuntu Replace Any Enterprise Linux? Exploring the Pros and Cons

                  In the realm of enterprise computing, the choice of operating system (OS) is a critical decision that can impact everything from performance to security and scalability. For years, Linux distributions have been a staple in enterprise environments due to their stability, flexibility, and cost-effectiveness. Among these distributions, Ubuntu has emerged as a popular option, known for its ease of use and strong community support. But can Ubuntu truly replace any enterprise Linux distribution? Let’s explore the pros and cons.

                  Pros of Using Ubuntu in Enterprise Environments:

                  1. Ease of Use: Ubuntu is renowned for its user-friendly interface and straightforward installation process. This can reduce the learning curve for administrators and make deployment smoother.
                  2. Package Availability: Ubuntu boasts a vast repository of software packages, covering a wide range of applications and tools. This ensures that enterprises have access to the software they need to meet their business requirements.
                  3. Community Support: Ubuntu has a large and active community of users and developers who provide support through forums, documentation, and tutorials. This can be invaluable for troubleshooting issues and staying updated on best practices.
                  4. Regular Updates: Ubuntu follows a predictable release cycle with LTS (Long Term Support) versions released every two years. These LTS releases are supported for five years, providing enterprises with stability and security updates over an extended period.
                  5. Cost-Effectiveness: Ubuntu is open-source and free to use, making it a cost-effective option for enterprises looking to minimize licensing fees without compromising on quality.

                  Cons of Using Ubuntu in Enterprise Environments:

                  1. Commercial Support: While Ubuntu offers community support, enterprises may require additional support options, especially for mission-critical systems. Canonical, the company behind Ubuntu, provides commercial support, but this may come at an added cost.
                  2. Enterprise Applications Compatibility: Some enterprise applications are certified to run only on specific Linux distributions, such as Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES). Compatibility issues may arise when attempting to run these applications on Ubuntu.
                  3. Security Concerns: While Ubuntu is generally considered secure, some enterprises may have specific security requirements or compliance standards that mandate the use of a particular Linux distribution with enhanced security features.
                  4. Customization and Configuration: Enterprises with highly specialized requirements or existing infrastructure built around another Linux distribution may find it challenging to migrate to Ubuntu due to differences in configuration and customization options.

                  Case Study: The City of Munich’s Migration to Ubuntu

                  One notable example of Ubuntu’s adoption in the enterprise is the City of Munich’s migration from Windows to Ubuntu Linux. In 2004, the city embarked on a project to replace proprietary software with open-source alternatives, citing cost savings and increased flexibility as primary motivations. After a decade-long migration process, thousands of desktops were successfully transitioned to Ubuntu Linux, resulting in significant cost reductions and greater control over IT infrastructure.

                  Conclusion:

                  While Ubuntu offers many advantages for enterprise environments, it may not be suitable for every organization’s needs. Factors such as compatibility requirements, support options, and security considerations should be carefully evaluated before making a decision. However, as demonstrated by case studies like the City of Munich’s migration, Ubuntu has proven to be a viable option for enterprises seeking to leverage the benefits of open-source software in their IT infrastructure.

                  In conclusion, while Ubuntu may not replace every enterprise Linux distribution, its strengths in terms of ease of use, package availability, and community support make it a compelling choice for many organizations. As with any technology decision, careful planning and consideration of specific requirements are essential to ensure a successful implementation.

                  The post Can Ubuntu Replace Any Enterprise Linux? Exploring the Pros and Cons appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:19PM

                  Faizul "Piju" 9M2PJU: Exploring the Evolution of Android: From Its Origins to the Present

                  In the ever-evolving landscape of mobile technology, one name stands out prominently: Android. Developed by Google, Android has become synonymous with smartphones and tablets, offering a rich ecosystem of apps, seamless integration with Google services, and unparalleled customization options. But how did Android come to dominate the mobile market, and why aren’t Linux distributions like Ubuntu commonly found on smartphones and tablets? Let’s delve into the history of Android, its present state, and the factors that make it the go-to choice for mobile devices.

                  The Genesis of Android

                  The story of Android begins in 2003 when Andy Rubin, Rich Miner, Nick Sears, and Chris White founded Android Inc. Their vision was to develop smarter mobile devices, laying the groundwork for what would become the Android operating system. In 2005, Google acquired Android Inc., signaling its entry into the mobile space. Fast forward to 2007, Android was officially unveiled, with the Open Handset Alliance formed to establish open standards for mobile devices. The first commercial Android device, the HTC Dream, hit the market in 2008, marking the beginning of Android’s journey to prominence.

                  Android Today: A Versatile Powerhouse

                  Fast forward to the present, and Android has solidified its position as the leading mobile operating system. Built on an open-source Linux kernel, Android powers a diverse range of devices, from budget-friendly smartphones to flagship models boasting cutting-edge technology. Its versatility extends beyond smartphones, with Android also found in tablets, smartwatches, TVs, and even cars. One of Android’s key strengths lies in its integration with Google’s ecosystem, seamlessly incorporating services like Gmail, Google Maps, and Google Drive into the user experience.

                  Why Not Ubuntu or Other Linux Distros on Mobile Devices?

                  Despite the robustness and versatility of Linux distributions like Ubuntu, they are not commonly used on smartphones and tablets. Several factors contribute to this, including hardware compatibility, touchscreen optimization, app ecosystem, and resource efficiency. Mobile devices have unique hardware requirements, and Linux distributions may lack the necessary drivers and optimizations to ensure smooth operation. Moreover, mobile operating systems like Android are specifically designed for touch interfaces, offering intuitive gestures and controls tailored for smaller screens. While Ubuntu and other Linux distros excel in the desktop environment, adapting them for touchscreen use would require significant effort and may result in a subpar user experience.

                  The Pros and Cons of Android

                  Android’s dominance in the mobile space is not without its drawbacks. On the positive side, Android offers extensive customization options, a vast app ecosystem, and seamless integration with Google services. However, fragmentation remains a concern, with devices running different versions of the operating system and custom manufacturer skins leading to inconsistencies in user experience and software updates. Security vulnerabilities, bloatware, and update delays are also common issues faced by Android users. Nonetheless, Android continues to evolve, with each iteration bringing new features and improvements to the table.

                  User Experience and App Support

                  Despite its challenges, Android provides a user-friendly experience, with intuitive navigation and customizable interfaces. The Google Play Store hosts millions of apps, covering a wide range of categories and catering to various user needs. While some popular apps may be available on other platforms like iOS, the majority of Android apps are exclusive to the platform, ensuring robust app support for Android users.

                  In conclusion, Android’s journey from its humble beginnings to its current status as the leading mobile operating system is a testament to its versatility, innovation, and adaptability. While Linux distributions like Ubuntu offer their own set of advantages, they are not well-suited for smartphones and tablets due to hardware compatibility issues, touchscreen optimization challenges, and limited app ecosystems. As technology continues to evolve, Android remains at the forefront, shaping the future of mobile computing.

                  The post Exploring the Evolution of Android: From Its Origins to the Present appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:06PM

                  Faizul "Piju" 9M2PJU: Why Universities Use Ubuntu and the Role of Edubuntu

                  In the ever-evolving landscape of higher education, universities constantly seek efficient, reliable, and cost-effective solutions to enhance their IT infrastructure. One such solution that has gained significant traction is the use of Ubuntu, a popular Linux distribution. Institutions like the University of Kerala, the University of Delhi, Tsinghua University, and Technische Universität München have embraced Ubuntu for its myriad benefits. Additionally, the specialized educational variant, Edubuntu, provides tailored features for academic environments. Let’s explore why these universities choose Ubuntu and the specific advantages offered by Edubuntu.

                  Why Universities Use Ubuntu

                  1. Cost-Effectiveness:
                  • Open Source: Ubuntu is free to use, which helps universities reduce costs associated with licensing fees. This is particularly beneficial for institutions with limited budgets.
                  • Maintenance Savings: Since Ubuntu is open-source, updates and maintenance are managed by a global community, reducing the need for expensive proprietary support services.
2. Security:
                  • Robust Security Features: Ubuntu offers strong security features out-of-the-box, including built-in firewall and encryption capabilities, which are essential for protecting sensitive academic and research data.
                  • Regular Updates: The frequent updates and patches ensure that security vulnerabilities are promptly addressed, maintaining a secure environment for users.
3. Stability and Reliability:
                  • Long-Term Support (LTS): Ubuntu’s LTS versions provide stable and reliable performance with support and updates for five years, making it an ideal choice for university IT systems that require long-term stability.
                  • Minimal Downtime: The robustness of Ubuntu minimizes downtime, ensuring that educational and research activities proceed without interruptions.
4. Flexibility and Customization:
                  • Highly Customizable: Universities can tailor Ubuntu to meet specific needs, from customizing the user interface to selecting software packages that align with their academic requirements.
                  • Wide Range of Applications: Ubuntu supports a vast array of applications and development tools, making it suitable for various departments, from humanities to advanced scientific research.
5. Community and Support:
                  • Active Community: Ubuntu boasts a large and active community of developers and users who contribute to its continuous improvement and provide support through forums and documentation.
                  • Educational Resources: Numerous resources, including tutorials and guides, are available to help users, especially students and faculty, get the most out of the operating system.

                  What is Edubuntu?

                  Edubuntu is a derivative of Ubuntu specifically designed for use in educational environments. It aims to provide a comprehensive operating system tailored to the needs of students, educators, and institutions.

                  1. Pre-Installed Educational Software:
                  • KDE Education Suite: A collection of educational software covering mathematics, science, language learning, and more, making it an all-in-one solution for classrooms.
                  • GCompris: An educational software suite comprising numerous activities for children aged 2 to 10, covering basic computer use, reading, mathematics, and science.
2. User-Friendly Interface:
                  • Intuitive Design: Edubuntu offers a user-friendly interface that is easy to navigate, reducing the learning curve for students and educators new to Linux.
                  • Customization Options: The interface can be customized to suit different age groups and educational settings, providing an adaptable learning environment.
3. Enhanced Collaboration Tools:
                  • Classroom Management: Tools like iTalc (Intelligent Teaching and Learning with Computers) allow teachers to view and control multiple student desktops, facilitating better classroom management and interactive learning.
                  • Collaborative Applications: Edubuntu includes applications that promote collaboration, such as Etherpad for real-time document editing and Moodle for creating online learning platforms.
4. Focus on Digital Literacy:
                  • Coding and Development: Edubuntu includes programming tools and environments like Scratch and Python, encouraging students to develop coding skills from an early age.
                  • STEM Education: The OS is equipped with software that supports STEM (Science, Technology, Engineering, and Mathematics) education, preparing students for future technological advancements.
5. Accessibility and Inclusivity:
                  • Assistive Technologies: Edubuntu integrates assistive technologies to support students with disabilities, ensuring an inclusive learning environment.
                  • Multiple Language Support: The system supports multiple languages, catering to diverse student populations.

                  Conclusion

                  The adoption of Ubuntu and Edubuntu by universities around the world underscores the importance of reliable, secure, and cost-effective IT solutions in education. Ubuntu’s strengths in these areas, combined with the tailored educational features of Edubuntu, make them ideal choices for academic institutions aiming to provide a robust and inclusive learning environment. By leveraging these open-source platforms, universities can enhance their educational offerings, support diverse learning needs, and prepare students for the digital future.

                  For more information on Ubuntu and Edubuntu, you can visit their official websites Ubuntu and Edubuntu.

                  The post Why Universities Use Ubuntu and the Role of Edubuntu appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 02:00PM

                  Faizul "Piju" 9M2PJU: Amateur Radio and Emergency Communications: Using Ubuntu as a Desktop Operating System

                  Amateur radio, often referred to as ham radio, has long been a crucial tool in emergency communications. When traditional communication networks fail during disasters, amateur radio operators step in to provide vital communication links. Leveraging Ubuntu, a popular and user-friendly Linux distribution, can enhance the capabilities of amateur radio operators in emergency situations. This article explores how Ubuntu can be used as a desktop operating system for amateur radio, highlighting key software that supports emergency communications.

                  Why Ubuntu for Amateur Radio?

                  Ubuntu’s reliability, ease of use, and extensive support for open-source software make it an excellent choice for amateur radio operators. Its predictable release cycles and long-term support (LTS) versions ensure stability, which is critical during emergencies. Additionally, Ubuntu’s strong community support and comprehensive repositories offer a wide range of applications tailored for ham radio and emergency communications.

                  Essential Software for Amateur Radio on Ubuntu

                  1. Fldigi (Fast, Light Digital):
                  • Description: Fldigi is a versatile digital modem application that supports numerous digital modes such as PSK31, RTTY, and CW. It allows operators to send and receive text messages, images, and other data over radio frequencies.
                  • Use in Emergencies: Fldigi’s ability to operate in various digital modes makes it invaluable for transmitting information when voice communication is impractical or bandwidth is limited.
2. Hamlib:
                  • Description: Hamlib is a library that provides a standard programming interface to control various radios and receivers. It supports a wide range of amateur radios and can be integrated with other software.
• Use in Emergencies: Hamlib allows operators to automate and control their radio equipment efficiently, facilitating quicker and more reliable communication setups during emergencies. A short rigctl example follows this list.
3. Xastir (X Amateur Station Tracking and Information Reporting):
                  • Description: Xastir is an open-source client for the Automatic Packet Reporting System (APRS). It provides real-time tracking and information reporting, displaying data on detailed maps.
                  • Use in Emergencies: Xastir enables operators to track the location of mobile units and report real-time information such as weather conditions, enhancing situational awareness and coordination.
4. CQRLOG:
                  • Description: CQRLOG is an advanced logging program for Linux, designed specifically for amateur radio operators. It supports logging of contacts, QSL management, and integration with various online services.
                  • Use in Emergencies: Keeping accurate logs is essential in emergency communications for accountability and coordination. CQRLOG helps maintain organized records of all communications.
5. WSJT-X:
                  • Description: WSJT-X is a popular software suite for weak-signal digital modes, such as FT8 and JT65. These modes are designed for making reliable, confirmed QSOs under extreme conditions.
                  • Use in Emergencies: WSJT-X’s weak-signal modes are particularly useful when signal conditions are poor, ensuring that messages can be sent and received even with minimal power and bandwidth.
6. GNU Radio:
                  • Description: GNU Radio is a powerful toolkit for building software-defined radios (SDR). It provides a wide array of signal processing blocks to implement various communication systems.
                  • Use in Emergencies: GNU Radio allows operators to create flexible and adaptable communication setups, which can be quickly reconfigured to meet the specific needs of an emergency situation.
7. Chirp:
                  • Description: Chirp is a free, open-source tool for programming amateur radios. It supports a wide range of radios and allows for easy management of frequency memories and settings.
                  • Use in Emergencies: Chirp simplifies the process of configuring radios, ensuring that all units are set up correctly and consistently, which is crucial for coordinated emergency response.

                  Setting Up Ubuntu for Amateur Radio

                  To set up Ubuntu for amateur radio operations, follow these steps:

                  1. Install Ubuntu: Download and install the latest LTS version of Ubuntu from the official website. The LTS versions offer long-term support and stability, ideal for mission-critical applications.
                  2. Update Repositories: Ensure your system is up-to-date by running:
                     sudo apt update && sudo apt upgrade
3. Install Necessary Software: Use Ubuntu’s package manager to install the software mentioned above. For example, to install Fldigi, use:
                     sudo apt install fldigi

                  Repeat for other software packages like Hamlib, Xastir, and Chirp.
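As a rough sketch, several of these tools can be installed in one command; the package names below are typical for recent Ubuntu releases (Hamlib’s command-line utilities, for instance, ship as libhamlib-utils), but availability can vary by release — Chirp in particular may need to be installed from the project’s own packages:

   sudo apt install libhamlib-utils xastir cqrlog wsjtx gnuradio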

4. Configure Radios and Interfaces: Connect your amateur radio equipment to your computer. Use the respective software interfaces to configure and test your setup. Ensure all necessary drivers and libraries (like Hamlib) are installed and functioning.
5. Test and Train: Conduct regular tests and training exercises to ensure that all software and hardware components are working correctly. Familiarity with the tools and their functions is crucial for effective emergency communication.

                  Conclusion

                  Using Ubuntu as a desktop operating system for amateur radio during emergency communications offers a robust, reliable, and user-friendly platform. With its extensive support for a wide range of open-source software tailored for ham radio, Ubuntu empowers operators to maintain critical communication links when they are needed most. By integrating tools like Fldigi, Hamlib, Xastir, and others, amateur radio operators can enhance their emergency response capabilities, ensuring that they are prepared to provide vital communication support during disasters.

                  The post Amateur Radio and Emergency Communications: Using Ubuntu as a Desktop Operating System appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 01:44PM

                  Faizul "Piju" 9M2PJU: The Free Software Foundation: Pioneering Software Freedom

                  The Free Software Foundation (FSF) has been a pivotal force in the software industry, advocating for user freedom and open collaboration. Founded by Richard Stallman in 1985, the FSF has significantly influenced the development and proliferation of free software. This article explores the foundation’s origins, its contributions, and how it has inspired the creation of various Linux distributions, including Ubuntu.

                  The Founding of the Free Software Foundation

                  Richard Stallman, a programmer at the Massachusetts Institute of Technology (MIT), founded the Free Software Foundation in 1985. Stallman was driven by a vision of software that respected users’ freedoms—freedom to use, study, modify, and distribute software. This vision was a response to the growing trend of proprietary software that restricted these freedoms.

                  Logo and Slogan
                  • Logo: The FSF logo features a stylized head of a gnu (a species of African antelope), a play on the acronym GNU (GNU’s Not Unix). The gnu symbolizes the project’s commitment to creating free software.
                  • Slogan: The FSF’s slogan is “Free as in Freedom,” emphasizing that the foundation’s mission is about freedom, not price. It seeks to ensure that software users have control over their software and their computing experience.

                  Key Contributions and Achievements

                  1. The GNU Project:
                  • Initiation: Launched in 1983 by Stallman, the GNU Project aimed to develop a complete Unix-like operating system composed entirely of free software.
                  • Components: Key components of the GNU system include the GNU Compiler Collection (GCC), the GNU C Library (glibc), and the Bash shell. These tools have become fundamental in the broader software ecosystem.
2. GNU General Public License (GPL):
                  • Purpose: The GPL, written by Stallman, is a free software license that guarantees users the freedoms to run, study, modify, and share software.
                  • Impact: The GPL has become the most widely used free software license, fostering a collaborative environment and ensuring that derivative works also remain free.
3. Advocacy and Education:
                  • Campaigns: The FSF runs numerous campaigns to raise awareness about digital rights, software patents, and the dangers of proprietary software.
                  • Education: Through workshops, conferences, and publications, the FSF educates the public about the importance of software freedom.

                  The Birth of Ubuntu and Other Linux Distributions

                  The principles and tools established by the FSF and the GNU Project have been instrumental in the development of various Linux distributions. Here’s a look at how Ubuntu and other notable distributions emerged from this movement:

                  1. Ubuntu:
                  • Founder: Mark Shuttleworth, a South African entrepreneur, founded Ubuntu in 2004.
                  • Philosophy: Ubuntu was created to make Linux accessible to everyone, emphasizing ease of use and community-driven development. It is based on Debian, one of the oldest and most respected GNU/Linux distributions.
                  • Contributions: Canonical Ltd., Shuttleworth’s company, oversees Ubuntu’s development. Ubuntu’s regular release cycle, comprehensive software repositories, and strong community support have made it one of the most popular Linux distributions.
2. Debian:
                  • Founder: Ian Murdock founded Debian in 1993.
                  • Philosophy: Debian is known for its commitment to free software principles, stability, and community governance. The Debian Social Contract and Debian Free Software Guidelines (DFSG) reflect its dedication to software freedom.
3. Red Hat:
                  • Founders: Bob Young and Marc Ewing founded Red Hat in 1993.
                  • Philosophy: Red Hat combines open-source principles with commercial viability. Red Hat Enterprise Linux (RHEL) is a major enterprise platform, demonstrating the commercial potential of free software.
4. Slackware:
                  • Founder: Patrick Volkerding founded Slackware in 1993.
                  • Philosophy: Slackware focuses on simplicity and adhering closely to Unix principles. It provides a clean, unmodified experience that appeals to purists and system administrators.

                  Conclusion

                  The Free Software Foundation, under the visionary leadership of Richard Stallman, has been a cornerstone of the free software movement. Through the GNU Project, the GPL, and ongoing advocacy, the FSF has ensured that software freedom remains a central tenet in the digital age. This foundation has not only protected user rights but also inspired the creation of influential Linux distributions like Ubuntu, Debian, Red Hat, and Slackware. These distributions, built on the principles of free software, continue to drive innovation, collaboration, and accessibility in the software industry, embodying the ethos of freedom and community that the FSF champions.

                  The post The Free Software Foundation: Pioneering Software Freedom appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 01:34PM

                  Faizul "Piju" 9M2PJU: Ubuntu Linux: Pioneering a New Evolution for Desktop Operating Systems

                  Since its inception in 2004, Ubuntu Linux has been at the forefront of transforming the landscape of desktop operating systems. Conceived by South African entrepreneur Mark Shuttleworth and developed by Canonical Ltd., Ubuntu has grown to become one of the most popular and influential Linux distributions globally. This article delves into the innovations Ubuntu has introduced and the significant contributions Canonical has made to its development, marking a new era for Linux on the desktop.

                  The Rise of Ubuntu

                  Ubuntu was created with the vision of bringing the power of Linux to the masses, emphasizing ease of use, accessibility, and community-driven development. Its name, derived from a Southern African philosophy meaning “humanity to others,” reflects its commitment to open-source principles and collaboration.

                  Key Milestones and Features
                  1. User-Friendly Interface: One of Ubuntu’s most significant contributions to the Linux ecosystem is its user-friendly interface. The introduction of the GNOME desktop environment, and later the Unity interface, aimed to provide a seamless and intuitive user experience. This focus on usability helped lower the barrier to entry for new users migrating from Windows or macOS.
                  2. Regular Release Cycle: Ubuntu’s predictable release cycle, with new versions every six months and long-term support (LTS) releases every two years, has provided stability and reliability to its user base. This approach ensures that users and enterprises can plan upgrades and deployments with confidence.
                  3. Comprehensive Software Repository: Ubuntu offers a vast software repository, making it easy for users to find and install applications. The inclusion of the Snap package system has further simplified software installation and updates, allowing for more secure and isolated app environments.
                  4. Security and Updates: Canonical has placed a strong emphasis on security, providing regular updates and patches to ensure that systems remain secure. The Ubuntu Pro service extends this by offering extended security maintenance and compliance features, catering to enterprise needs.
                  5. Community and Documentation: The Ubuntu community is one of the most vibrant in the Linux world, contributing to extensive documentation, forums, and support channels. This community-driven approach has been crucial in driving innovation and addressing user needs.

                  Canonical’s Contributions

                  Canonical Ltd., the company behind Ubuntu, has played a pivotal role in its development and proliferation. Here are some key contributions:

                  1. Development and Maintenance: Canonical employs a dedicated team of developers who work on improving Ubuntu, ensuring it stays at the cutting edge of technology. Their efforts include kernel development, system integration, and performance optimization.
                  2. Enterprise Solutions: Canonical has expanded Ubuntu’s reach into the enterprise sector with products like Ubuntu Server, Ubuntu Core (for IoT devices), and cloud solutions. Their partnerships with major cloud providers (like AWS, Google Cloud, and Microsoft Azure) have solidified Ubuntu’s position in the enterprise and cloud computing markets.
                  3. OpenStack and Kubernetes: Canonical has been a major contributor to the OpenStack and Kubernetes projects, integrating these technologies with Ubuntu to provide robust cloud and container solutions. This has helped organizations deploy scalable and efficient cloud infrastructures.
                  4. Support and Services: Canonical offers professional support and services, including consulting, managed services, and training. These services ensure that businesses can effectively deploy, manage, and scale their Ubuntu-based solutions.
                  5. Innovation Initiatives: Canonical continuously explores new avenues of innovation. Projects like the Ubuntu Touch (for mobile devices) and the Ubuntu WSL (Windows Subsystem for Linux) highlight their commitment to expanding Ubuntu’s versatility and accessibility across different platforms and devices.

                  Conclusion

                  Ubuntu Linux has undoubtedly pioneered a new evolution for desktop operating systems, blending the robustness and flexibility of Linux with user-centric design and enterprise-grade capabilities. Canonical’s ongoing contributions and commitment to innovation have cemented Ubuntu’s place as a leading force in the open-source community. As Ubuntu continues to evolve, it promises to drive further advancements in desktop computing, cloud services, IoT, and beyond, staying true to its ethos of bringing “humanity to others” through technology.

                  The post Ubuntu Linux: Pioneering a New Evolution for Desktop Operating Systems appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 12:44PM

                  hackergotchi for Deepin

                  Deepin

                  deepin V23 RC Is Officially Released!

deepin is a Linux-based open-source desktop operating system, and as of today, deepin V23 RC is here! You are welcome to try it out and share your feedback. Important Notice: Please prioritize using the Control Center for system upgrades: Control Center - Updates - Check for Updates. After this upgrade, the original Linglong applications will no longer be usable. Please use the pre-installed "linglong-repair-tool" to uninstall Linglong applications and automatically install deb-format applications. For specific instructions, please refer to the link provided (https://www.deepin.org/en/solution-for-missing-linglong-app/). [New Features] [Installer] Optimized the details of the installer UI. Updated the carousel images in the installer. Added a trial mode for ...Read more

                  15 May, 2024 10:12AM by aida

                  How to Use the New Linglong App on deepin V23 RC

The deepin V23 RC version has made significant changes to enhance the design of LingLong. Existing LingLong applications need to be upgraded synchronously with this system update. To address this issue, we provide the LingLong Repair Tool. Users who have installed LingLong applications in previous versions can run the LingLong Repair Tool from the launcher after the upgrade to restore some of the original LingLong applications. Additionally, due to some applications not yet being well adapted to LingLong, there may be issues such as application startup failure or problems with taskbar display after startup. Therefore, we have rolled back some ...Read more

                  15 May, 2024 09:40AM by aida

                  hackergotchi for Grml developers

                  Grml developers

                  Evgeni Golov: Using HPONCFG on CentOS Stream 9 with OpenSSL 3.2

                  Today I've updated an HPE ProLiant DL325 G10 from CentOS Stream 8 to CentOS Stream 9 (details on that to follow) and realized that hponcfg was broken afterwards.

                  As I do not have a support contract with HPE, I couldn't just yell at them in private, so I am doing this in public now ;-)

                  # hponcfg
                  HPE Lights-Out Online Configuration utility
                  Version 5.6.0 Date 11/30/2020 (c) 2005,2020 Hewlett Packard Enterprise Development LP
                  Error: Unable to locate SSL library.
                         Install latest SSL library to use HPONCFG.
                  

                  Welp, what the heck?

                  But wait, 5.6.0 from 2020 looks old, let's update this first!

                  hponcfg is part of the "Management Component Pack" (at least if you're not running RHEL or SLES where you get it via the "Service Pack for ProLiant" which requires a support contract) and can be downloaded from the Software Delivery Repository.

                  The Software Delivery Repository tells you to configure it in /etc/yum.repos.d/mcp.repo as

                  [mcp]
                  name=Management Component Pack
                  baseurl=http://downloads.linux.hpe.com/repo/mcp/dist/dist_ver/arch/project_ver
                  enabled=1
                  gpgcheck=0
                  gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp
                  

                  gpgcheck=0? Suuure! Plain HTTP? Suuure!

                  But it gets better! When you look at https://downloads.linux.hpe.com/repo/mcp/centos/ (you have to substitute dist with your distribution!) you'll see that there is no 9 folder and thus no packages for CentOS (Stream) 9. There are however folders for Oracle, Rocky and Alma. Phew. Let's take one of these!

                  [mcp]
                  name=Management Component Pack
                  baseurl=https://downloads.linux.hpe.com/repo/mcp/rocky/9/x86_64/current/
                  enabled=1
                  gpgcheck=1
                  gpgkey=https://downloads.linux.hpe.com/repo/mcp/GPG-KEY-mcp
                  

                  dnf upgrade hponcfg updates it to hponcfg-6.0.0-0.x86_64 and:

                  # hponcfg
                  HPE Lights-Out Online Configuration utility
                  Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
                  Error: Unable to locate SSL library.
                         Install latest SSL library to use HPONCFG.
                  

                  Fuck.

                  ldd doesn't show hponcfg being linked to libssl, do they dlopen() at runtime and fucked something up? ltrace to the rescue!

                  # ltrace hponcfg
                  
                  popen("strings /bin/openssl | grep 'Ope"..., "r")            = 0x621700
                  fgets("OpenSSL 3.2.1 30 Jan 2024\n", 256, 0x621700)          = 0x7ffd870e2e10
                  strstr("OpenSSL 3.2.1 30 Jan 2024\n", "OpenSSL 3.0")         = nil
                  
                  

                  WAT?

                  They run strings /bin/openssl |grep 'OpenSSL' and compare the result with "OpenSSL 3.0"?!

                  Sure, OpenSSL 3.2 in EL9 is rather fresh and didn't hit RHEL/Oracle/Alma/Rocky yet, but surely there are better ways to check for a compatible version of OpenSSL than THIS?!

                  Anyway, I am not going to downgrade my OpenSSL. Neither will I patch it to pretend to be 3.0.

                  But I can patch the hponcfg binary!

                  # vim /sbin/hponcfg
                  <go to line 146>
                  <replace 3.0 with 3.2>
                  :x
                  

                  Yes, I used vim. Yes, it works. No, I won't guarantee this won't kill a kitten somewhere.

                  # ./hponcfg
                  HPE Lights-Out Online Configuration utility
                  Version 6.0.0 Date 10/30/2022 (c) 2005,2022 Hewlett Packard Enterprise Development LP
                  Firmware Revision = 2.44 Device type = iLO 5 Driver name = hpilo
                  
                  USAGE:
                    hponcfg  -?
                    hponcfg  -h
                    hponcfg  -m minFw
                    hponcfg  -r [-m minFw] [-u username] [-p password]
                    hponcfg  -b [-m minFw] [-u username] [-p password]
                    hponcfg  [-a] -w filename [-m minFw] [-u username] [-p password]
                    hponcfg  -g [-m minFw] [-u username] [-p password]
                    hponcfg  -f filename [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
                    hponcfg  -i [-l filename] [-s namevaluepair] [-v] [-m minFw] [-u username] [-p password]
                  
                    -h,  --help           Display this message
                    -?                    Display this message
                    -r,  --reset          Reset the Management Processor to factory defaults
                    -b,  --reboot         Reboot Management Processor without changing any setting
                    -f,  --file           Get/Set Management Processor configuration from "filename"
                    -i,  --input          Get/Set Management Processor configuration from the XML input
                                          received through the standard input stream.
                    -w,  --writeconfig    Write the Management Processor configuration to "filename"
                    -a,  --all            Capture complete Management Processor configuration to the file.
                                          This should be used along with '-w' option
                    -l,  --log            Log replies to "filename"
                    -v,  --xmlverbose     Display all the responses from Management Processor
                    -s,  --substitute     Substitute variables present in input config file
                                          with values specified in "namevaluepairs"
                    -g,  --get_hostinfo   Get the Host information
                    -m,  --minfwlevel     Minimum firmware level
                    -u,  --username       iLO Username
                    -p,  --password       iLO Password
                  

                  For comparison, here is the diff --text output:

                  # diff -u --text /sbin/hponcfg ./hponcfg
                  --- /sbin/hponcfg   2022-08-02 01:07:55.000000000 +0000
                  +++ ./hponcfg   2024-05-15 09:06:54.373121233 +0000
                  @@ -143,7 +143,7 @@
                   helpget_hostinforesetwriteconfigallfileinputlogminfwlevelxmlverbosesubstitutetimeoutdbgverbosityrebootusernamepasswordlibpath%Ah*Ag7Ar=AwIAaMAfRAiXAl\AmgAvrAs}At�Ad�Ab�Au�Ap�Azhgrbaw:f:il:m:vs:t:d:z:u:p:tmpXMLinputFile%2d.xmlw+Error: Syntax Error - Invalid options present.
                   =O@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@aQ@�M@�M@aQ@�M@aQ@�N@�M@�N@�P@aQ@aQ@�M@�M@aQ@aQ@LN@aQ@�M@�O@�M@�M@�M@�M@aQ@aQ@�M@<!----><LOGINUSER_LOGINPASSWORD<LOGIN USER_LOGIN="%s" PASSWORD="%s"ERROR: LOGIN tag is missing.
                   >ERROR: LOGIN end tag is missing.
                  -strings  | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.0which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
                  +strings  | grep 'OpenSSL 1' | grep 'OpenSSL 3'OpenSSL 1.0OpenSSL 1.1OpenSSL 3.2which openssl 2>&1/usr/bin/opensslOpenSSL location - %s
                   Current version %s
                  
                   No response from command.
                  

                  Pretty sure it won't apply like this with patch, but you get the idea.
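A same-length replacement can presumably also be done with GNU sed instead of hand-editing in vim — untested here, and it only works because "3.0" and "3.2" have exactly the same length:

sed 's/OpenSSL 3\.0/OpenSSL 3.2/' /sbin/hponcfg > ./hponcfg && chmod +x ./hponcfg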

And yes, double-giggles for the fact that the error message says "Install latest SSL library to use HPONCFG" and the issue is that I have the latest SSL library installed…

                  15 May, 2024 09:14AM

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Faizul "Piju" 9M2PJU: Ubuntu vs. FreeBSD: Network Latency and Performance Comparison for Servers

                  When it comes to choosing an operating system for server deployment, network latency and performance are critical factors. Both Ubuntu, a popular Linux distribution, and FreeBSD, a Unix-like operating system, are known for their robustness and reliability. However, they have different architectures, kernels, and networking stacks, leading to differences in their network performance. This article delves into the network latency and performance of Ubuntu and FreeBSD to help you decide which one is better suited for your server needs.

                  Overview of Ubuntu and FreeBSD

                  Ubuntu:

                  • Based on Debian Linux.
                  • Uses the Linux kernel.
                  • Known for ease of use, extensive community support, and regular updates.
                  • Commonly used for both desktops and servers.

                  FreeBSD:

                  • Derived from the Berkeley Software Distribution (BSD).
                  • Uses the FreeBSD kernel.
                  • Known for its performance, advanced networking features, and robust security.
                  • Primarily used in server environments and for network appliances.

                  Network Stack and Performance

                  Ubuntu (Linux):

                  • The Linux kernel is optimized for a balance between performance and flexibility.
                  • Ubuntu’s network stack is highly modular, supporting a wide range of network protocols and features.
                  • The default networking tool is NetworkManager, which provides a user-friendly interface for network configuration.
                  • Linux kernel features such as TCP Congestion Control algorithms (like BBR) enhance network performance, especially in high-latency environments.
• Tuning parameters, such as sysctl settings for TCP window scaling and buffer sizes, can significantly improve performance (a brief sketch follows this list).
                  • High performance is achievable with proper tuning, but out-of-the-box performance can vary based on kernel versions and configurations.
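The following is an illustrative sketch only — appropriate values depend heavily on workload, link speed, and latency — of what such tuning might look like in a sysctl drop-in file on Ubuntu:

# /etc/sysctl.d/90-net-tuning.conf  (example values, not recommendations)
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

Apply the settings with sudo sysctl --system and verify with sysctl net.ipv4.tcp_congestion_control.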

                  FreeBSD:

                  • FreeBSD is renowned for its efficient and high-performance network stack.
                  • Uses the pf (Packet Filter) for advanced network filtering, which is considered more robust and performant compared to iptables on Linux.
                  • Includes advanced network features like the TCP/IP stack improvements and zero-copy sockets, which reduce CPU overhead.
• Network configuration is handled through rc.conf, which offers straightforward yet powerful configuration options.
                  • FreeBSD’s network stack is designed with performance in mind, often delivering lower latency and higher throughput compared to default Linux settings.
                  • The operating system’s architecture, including the VFS and networking layers, is optimized for high-speed data transfers and low latency.

                  Network Latency Comparison

                  Network latency is critical in determining how quickly data is transferred across the network. Low latency is crucial for applications requiring real-time data exchange, such as online gaming, VoIP, and high-frequency trading.

                  Latency Tests:

                  • Ping Test: Measures the round-trip time for packets sent from the server to a destination and back.
                  • Throughput Test: Measures the rate at which data is successfully transferred from one host to another.

                  Ubuntu:

                  • Typically shows slightly higher latency out-of-the-box compared to FreeBSD due to the overhead of the modular and flexible network stack.
                  • Can achieve competitive latency with fine-tuning of the kernel and networking parameters.
                  • Tools like iperf and netperf are used to measure and optimize network performance.
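For example, a basic latency and throughput check between two hosts could look like this (hostnames are placeholders, and iperf3 must be running in server mode on the remote end):

# round-trip latency
ping -c 20 server.example.com

# throughput: start "iperf3 -s" on the server, then from the client:
iperf3 -c server.example.com -t 30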

                  FreeBSD:

                  • Generally exhibits lower latency and higher throughput in default configurations due to its streamlined and efficient network stack.
                  • Consistently outperforms Linux in high-performance network applications and scenarios.
                  • Uses built-in tools like dummynet for detailed network performance testing and optimization.

                  Performance Benchmarks

                  Performance benchmarks provide quantifiable data on how each OS handles network tasks under various conditions. Key metrics include data transfer rates, packet loss, and CPU usage during network operations.

                  Ubuntu:

                  • In multi-threaded network applications, Ubuntu performs well, leveraging the multi-core capabilities of modern CPUs.
                  • Performance can be optimized for specific use cases by adjusting the TCP/IP stack parameters and kernel settings.

                  FreeBSD:

                  • Excels in single-threaded network performance due to its efficient handling of network interrupts and processing.
                  • Shows superior performance in high-throughput, low-latency environments, making it ideal for heavy-duty networking tasks.

                  Conclusion: Which is Best?

                  Choosing between Ubuntu and FreeBSD for server network performance depends on your specific needs:

                  • Ubuntu:
                  • Best for users who need a user-friendly, highly customizable system with extensive software support.
                  • Ideal for general-purpose servers where ease of use and community support are significant factors.
                  • Requires tuning to match FreeBSD’s performance in high-performance networking scenarios.
                  • FreeBSD:
                  • Best for environments where network performance, low latency, and security are paramount.
                  • Ideal for high-performance network appliances, web servers, and services requiring consistent, low-latency communication.
                  • Out-of-the-box performance is generally superior in networking tasks compared to Ubuntu.

                  In conclusion, if network latency and performance are your primary concerns, FreeBSD typically offers better results. However, with proper tuning and configuration, Ubuntu can also deliver impressive performance. Your choice should be guided by the specific requirements of your server environment and the level of control and support you need.

                  The post Ubuntu vs. FreeBSD: Network Latency and Performance Comparison for Servers appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 08:14AM

                  Faizul "Piju" 9M2PJU: Ubuntu vs. FreeBSD: Nginx Web Server Performance Comparison

                  When it comes to setting up a web server, Nginx is a popular choice due to its high performance and low resource usage. Both Ubuntu and FreeBSD are robust operating systems that can host Nginx effectively, but there are differences in how each handles performance, load, memory, and other related aspects. This article will compare Nginx web server performance on Ubuntu and FreeBSD, providing insights into their strengths and weaknesses.

                  Overview of Nginx

                  Nginx is an open-source web server known for its high concurrency, performance, and low memory footprint. It is commonly used for serving static content, acting as a reverse proxy, load balancer, and handling HTTP, HTTPS, and other protocols.

                  Testing Environment

                  For a fair comparison, we need to set up identical environments on both Ubuntu and FreeBSD:

                  • Hardware: Identical hardware configurations with equal CPU, RAM, and storage capacities.
                  • Nginx Version: The same version of Nginx, compiled with similar modules and configurations.
                  • Workload: The same workload, including the number of concurrent users, request types, and content served.
• Benchmarking Tools: Tools like ApacheBench (ab) and wrk are used to measure performance metrics such as requests per second, latency, and resource utilization.
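As a rough illustration, runs with these tools might look like the following; the URL, request counts, and concurrency levels are placeholders to be adapted to the test plan:

# ApacheBench: 10,000 requests with 100 concurrent connections
ab -n 10000 -c 100 http://server.example.com/

# wrk: 4 threads, 100 connections, 30-second run
wrk -t4 -c100 -d30s http://server.example.com/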

                  Performance Metrics

                  1. Installation and Configuration

                  • Ubuntu:
                  • Installation: sudo apt update && sudo apt install nginx
                  • Configuration: Nginx on Ubuntu can be easily configured using /etc/nginx/nginx.conf.
                  • FreeBSD:
                  • Installation: pkg install nginx or using the Ports Collection.
                  • Configuration: Nginx configuration on FreeBSD is also located in /usr/local/etc/nginx/nginx.conf.

                  Ubuntu offers a more straightforward and familiar package management system with apt, while FreeBSD provides more flexibility through its Ports Collection, allowing for customized builds.
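For reference, a minimal server block for static content looks essentially the same on both systems; only the conventional paths differ. This is a sketch, not a tuned configuration:

server {
    listen 80;
    server_name example.com;
    # /var/www/html is common on Ubuntu; /usr/local/www/nginx is typical on FreeBSD
    root /var/www/html;
    location / {
        try_files $uri $uri/ =404;
    }
}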

                  2. Requests Per Second (RPS)

                  This metric measures the number of requests a server can handle per second.

                  • Ubuntu: Typically, Ubuntu has shown high RPS due to its optimized kernel for general workloads and extensive optimizations available through apt.
                  • FreeBSD: FreeBSD, with its focus on network performance and efficient TCP/IP stack, often matches or slightly exceeds Ubuntu in RPS, especially under high network loads.

                  3. Latency

                  Latency measures the delay before a transfer of data begins following an instruction for its transfer.

                  • Ubuntu: Generally, Ubuntu exhibits low latency, but this can vary depending on system tuning and specific workloads.
                  • FreeBSD: Known for its network stack efficiency, FreeBSD can have slightly lower latency, particularly for high throughput scenarios due to optimizations in handling TCP/IP.

                  4. Memory Usage

                  Memory usage is crucial for understanding how many resources are consumed under load.

                  • Ubuntu: Nginx on Ubuntu tends to use a bit more memory due to the default configuration and system services running in the background.
                  • FreeBSD: Typically uses less memory because of its leaner base system and more granular control over system processes.

                  5. CPU Utilization

                  This metric evaluates how efficiently the CPU is used by Nginx under load.

                  • Ubuntu: Ubuntu’s CPU utilization is efficient, benefiting from years of optimizations and improvements in the Linux kernel.
                  • FreeBSD: FreeBSD’s CPU utilization can be even more efficient in network-heavy scenarios due to its well-tuned kernel and network stack.

                  Comparative Analysis

                  Installation and Ease of Use

                  • Ubuntu: Easier for beginners and those familiar with Linux. The apt package manager simplifies the installation and update process.
                  • FreeBSD: Offers more customization through the Ports Collection but has a steeper learning curve.

                  Performance Under Load

                  • Ubuntu: Handles web traffic efficiently but might require more tuning for extreme performance needs.
                  • FreeBSD: Excels in network performance and stability under heavy loads, often outperforming Ubuntu in high-concurrency scenarios.

                  Memory Efficiency

                  • Ubuntu: Uses more memory out-of-the-box but provides ample tools and community support for optimization.
                  • FreeBSD: Generally more memory-efficient due to its streamlined base system.

                  Security and Stability

                  • Ubuntu: Regular updates and a large support community ensure security patches and stability.
                  • FreeBSD: Known for its rigorous security measures and stability, making it a preferred choice for critical applications.

                  Conclusion

                  Both Ubuntu and FreeBSD are capable of running Nginx with high performance, but they cater to different needs and preferences:

                  • Ubuntu: Ideal for users who prefer ease of use, extensive software repositories, and community support. It performs well under most web server loads and is easier to set up and maintain.
                  • FreeBSD: Preferred for high-performance, high-stability environments, particularly where network performance is critical. Its efficient memory usage and robust security features make it an excellent choice for demanding web server applications.

                  Ultimately, the choice between Ubuntu and FreeBSD for hosting an Nginx web server will depend on your specific requirements, expertise, and the particular demands of your workload. Both operating systems offer unique advantages that can be leveraged to optimize web server performance.

                  The post Ubuntu vs. FreeBSD: Nginx Web Server Performance Comparison appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 08:07AM

                  Faizul "Piju" 9M2PJU: Ubuntu vs. FreeBSD: A Comprehensive Comparison

                  When it comes to choosing an operating system for servers, desktops, or specific applications, the decision often comes down to two robust options: Ubuntu and FreeBSD. Both are powerful, open-source systems but cater to different needs and preferences. This article will delve into the details of Ubuntu and FreeBSD, comparing their pros and cons, top usages, and key differences to help you make an informed choice.

                  What is Ubuntu?

                  Ubuntu is a Debian-based Linux distribution developed by Canonical Ltd. It is known for its ease of use, regular updates, and extensive community support. Ubuntu is available in various flavors, including Ubuntu Desktop for personal computers, Ubuntu Server for server environments, and Ubuntu Core for IoT devices.

                  What is FreeBSD?

                  FreeBSD is a free and open-source Unix-like operating system descended from the Berkeley Software Distribution (BSD). It is renowned for its performance, advanced networking, security features, and system consistency. FreeBSD is often used in servers, embedded systems, and network appliances.

                  Pros and Cons

                  Ubuntu

                  Pros:

                  1. User-Friendly: Ubuntu is known for its ease of installation and user-friendly interface, making it accessible to beginners.
                  2. Extensive Software Repositories: With thousands of software packages available, users can easily find and install applications.
                  3. Community and Commercial Support: Backed by Canonical, Ubuntu offers extensive community support and professional services.
                  4. Regular Updates: Ubuntu has a predictable release cycle with regular updates and Long-Term Support (LTS) versions that offer stability for enterprise use.
                  5. Wide Adoption: Popular in both personal and enterprise environments, ensuring broad software compatibility and community resources.

                  Cons:

                  1. System Overhead: Ubuntu can be more resource-intensive compared to some other Linux distributions or FreeBSD.
                  2. Security: While secure, Ubuntu’s focus on user-friendliness sometimes leads to defaults that may need tightening for high-security environments.

                  FreeBSD

                  Pros:

                  1. Performance and Stability: FreeBSD is optimized for performance and is known for its stability, especially in network and server environments.
                  2. Advanced Networking: Features advanced networking capabilities and native support for various protocols, making it ideal for network appliances.
                  3. Security Features: Includes robust security features such as jails (lightweight virtualization) and a focus on security from the ground up.
                  4. System Consistency: FreeBSD provides a complete operating system, including the kernel, system libraries, and userland tools, all maintained in one coherent package.
                  5. ZFS File System: Natively supports the ZFS file system, which offers advanced storage features like snapshots and data integrity verification.

                  Cons:

                  1. Steeper Learning Curve: FreeBSD can be more challenging to set up and manage, especially for those new to Unix-like systems.
                  2. Software Availability: While FreeBSD has a robust ports collection, it may not have as many precompiled packages as Ubuntu, leading to more manual installations.
                  3. Desktop Support: FreeBSD is primarily server-oriented, and while it can be used as a desktop OS, it requires more configuration.

                  Top Usages

                  Ubuntu

                  1. Desktop Computing: Ubuntu Desktop is popular among personal users, educational institutions, and businesses for everyday computing tasks.
                  2. Server Deployments: Ubuntu Server is widely used for web servers, database servers, and cloud computing, including Ubuntu’s own cloud offering.
                  3. Development Environment: Favored by developers for its wide range of development tools and environments, along with strong community support.
                  4. IoT Devices: Ubuntu Core is optimized for IoT devices, offering security and regular updates.

                  FreeBSD

                  1. Web and Network Servers: FreeBSD’s stability and networking capabilities make it ideal for web servers, mail servers, and high-performance network appliances.
                  2. Embedded Systems: Frequently used in embedded systems due to its performance and flexibility.
                  3. Storage Solutions: Leveraging the ZFS file system, FreeBSD is used in storage solutions where data integrity and advanced file system features are crucial.
                  4. Security Appliances: Often used in firewalls and VPN solutions due to its security features and advanced networking capabilities.

                  Key Differences

                  1. System Architecture: Ubuntu is a Linux distribution, while FreeBSD is a Unix-like OS with a different lineage and system architecture.
2. Package Management: Ubuntu uses apt for package management, making software installation and updates straightforward. FreeBSD uses the Ports Collection and pkg for package management, offering flexibility but sometimes requiring more manual intervention (see the example after this list).
                  3. Licensing: Ubuntu follows the GNU General Public License (GPL), whereas FreeBSD uses the permissive BSD license, which has fewer restrictions on redistribution.
                  4. Kernel and Userland: Ubuntu’s kernel and userland tools come from different sources (Linux kernel and GNU utilities), while FreeBSD’s kernel and userland are developed and maintained together, ensuring better integration and consistency.
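To make the package-management difference concrete, installing the same software (nginx is used here purely as an example) looks like this on each system:

# Ubuntu
sudo apt update && sudo apt install nginx

# FreeBSD: binary package
pkg install nginx

# FreeBSD: or build from the Ports Collection
cd /usr/ports/www/nginx && make install clean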

                  Conclusion

                  Both Ubuntu and FreeBSD offer robust and reliable platforms, but they cater to different needs. Ubuntu’s user-friendly nature, extensive software repositories, and broad community support make it an excellent choice for both beginners and seasoned users looking for a reliable desktop or server environment. On the other hand, FreeBSD’s performance, advanced networking capabilities, and security features make it ideal for network servers, security appliances, and storage solutions where stability and consistency are paramount.

                  Your choice between Ubuntu and FreeBSD should be guided by your specific requirements, familiarity with the systems, and the environment in which you plan to deploy them. Whether you prioritize ease of use and wide software availability or performance and advanced security features, both Ubuntu and FreeBSD have proven themselves as powerful and versatile operating systems.

                  The post Ubuntu vs. FreeBSD: A Comprehensive Comparison appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 07:58AM

                  Faizul "Piju" 9M2PJU: Unlocking the Potential of Raspberry Pi for Amateur Radio Operators

                  In the world of amateur radio, innovation and experimentation are key. One of the most exciting tools available to amateur radio operators today is the Raspberry Pi, a versatile and affordable single-board computer. With its compact size, powerful capabilities, and extensive community support, the Raspberry Pi can revolutionize your ham radio setup. In this blog post, we’ll explore the various uses of Raspberry Pi for amateur radio operators and delve into the advantages of using Ubuntu Linux on this tiny powerhouse.

                  Why Raspberry Pi for Amateur Radio?

                  The Raspberry Pi offers several advantages that make it an excellent choice for amateur radio enthusiasts:

                  1. Affordability: Starting at just $35, the Raspberry Pi provides a cost-effective solution for a wide range of radio applications.
                  2. Compact Size: Its small form factor makes it easy to integrate into your existing radio setup without taking up much space.
                  3. Versatility: With a plethora of available software and hardware add-ons, the Raspberry Pi can be adapted for numerous radio functions.
                  4. Community Support: A large and active community means plenty of resources, tutorials, and forums to help you troubleshoot and expand your projects.

                  Common Uses of Raspberry Pi in Amateur Radio

                  1. Digital Mode Operations: The Raspberry Pi can be used to operate various digital modes such as FT8, PSK31, and RTTY. Software like WSJT-X can be easily installed on the Raspberry Pi to decode and encode digital signals.
                  2. Software-Defined Radio (SDR): Pairing a Raspberry Pi with an SDR dongle (like the RTL-SDR) allows you to receive a wide range of frequencies and demodulate various types of signals. Software such as GQRX or CubicSDR can be used to process the SDR data.
3. APRS (Automatic Packet Reporting System): The Raspberry Pi can be used as an APRS iGate or digipeater. By using software like Direwolf, you can decode APRS packets and send position reports to APRS-IS (a minimal configuration sketch follows this list).
                  4. EchoLink Node: You can set up a Raspberry Pi as an EchoLink node, allowing you to connect to other EchoLink nodes and repeaters worldwide. This is particularly useful for operators without access to a physical radio.
                  5. Repeater Controller: A Raspberry Pi can serve as a repeater controller, managing the operation of an amateur radio repeater, handling tasks such as CW ID, timers, and control functions.
                  6. Weather Station Integration: By connecting sensors to the GPIO pins of the Raspberry Pi, you can collect weather data and transmit it via APRS or other digital modes.
                  7. Remote Station Control: You can use the Raspberry Pi to control your radio station remotely. Software like Hamlib or FLRig allows you to operate your rig from anywhere in the world.
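As a minimal sketch, a receive-only APRS iGate with Direwolf needs only a few lines in direwolf.conf; the callsign, SSID, passcode, and sound-card device below are placeholders to replace with your own values:

# direwolf.conf (illustrative values only)
ADEVICE plughw:1,0
MYCALL N0CALL-10
IGSERVER noam.aprs2.net
IGLOGIN N0CALL 12345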

                  Ubuntu Linux on Raspberry Pi

                  While the Raspberry Pi OS (formerly Raspbian) is the default operating system for Raspberry Pi, Ubuntu Linux is an excellent alternative that offers several benefits:

                  1. Familiarity: Ubuntu is one of the most popular Linux distributions, and many users are already familiar with its interface and package management system.
                  2. Support and Updates: Ubuntu provides regular updates and long-term support (LTS) versions, ensuring stability and security for your projects.
                  3. Software Availability: The Ubuntu repositories contain a vast array of software packages, including many applications relevant to amateur radio.

Installing Ubuntu on Raspberry Pi

                  1. Download Ubuntu: Go to the official Ubuntu website and download the appropriate image for your Raspberry Pi model.
                  2. Flash the Image: Use software like Balena Etcher to flash the downloaded image onto a microSD card.
                  3. Boot the Raspberry Pi: Insert the microSD card into your Raspberry Pi and power it on. Follow the on-screen instructions to complete the setup.
                  4. Install Ham Radio Software: Use the terminal to install necessary software. For example, to install WSJT-X, you can use:
                     sudo apt update
                     sudo apt install wsjtx

Popular Software for Amateur Radio on Ubuntu

                  • WSJT-X: For digital modes like FT8 and JT65.
                  • FLDigi: A versatile digital mode software.
                  • CQRLog: A powerful logging program for ham radio operators.
                  • Xastir: APRS software for Linux.
                  • GQRX: SDR receiver application.
                  • Hamlib: Library for controlling radio transceivers and receivers.

                  Conclusion

                  The Raspberry Pi is a game-changer for amateur radio operators, offering a compact, affordable, and versatile platform for a myriad of applications. By running Ubuntu Linux on your Raspberry Pi, you gain access to a robust and familiar operating system with a vast array of software tools at your disposal. Whether you’re operating digital modes, setting up an SDR, or controlling your station remotely, the Raspberry Pi can enhance your ham radio experience in exciting and innovative ways. Embrace the power of Raspberry Pi and take your amateur radio projects to new heights!

                  The post Unlocking the Potential of Raspberry Pi for Amateur Radio Operators appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  15 May, 2024 07:55AM

                  May 14, 2024

                  hackergotchi for Grml developers

                  Grml developers

                  Evgeni Golov: Using Packit to build RPMs for projects that depend on or vendor your code

                  I am a huge fan of Packit as it allows us to provide RPMs to our users and testers directly from a pull-request, thus massively tightening the feedback loop and involving people who otherwise might not be able to apply the changes (for whatever reason) and "quickly test" something out. It's also a great way to validate that a change actually builds in a production environment, where no unnecessary development and test dependencies are installed.

                  You can also run tests of the built packages on Testing Farm and automate pushing releases into Fedora/CentOS Stream, but this is neither a (plain) Packit advertisement post, nor is that functionality that I can talk about with a certain level of experience.

Adam recently asked why we don't have Packit builds for our Puppet modules and my first answer was: "well, puppet-* doesn't produce a thing we ship directly, so nobody dared to do it".

                  My second answer was that I had blogged how to test a Puppet module PR with Packit, but I totally agree that the process was a tad cumbersome and could be improved.

                  Now some madman did it and we all get to hear his story! ;-)

                  What is the problem anyway?

The Foreman Installer is a bit of Ruby code [1] that provides a CLI to puppet apply based on a set of Puppet modules. As the Puppet modules can also be used outside the installer and have their own lifecycle, they live in separate git repositories and their releases get uploaded to the Puppet Forge. Users however do not want to (and should not have to) install the modules themselves.

                  So we have to ship the modules inside the foreman-installer package. Packaging 25 modules for two packaging systems (we support Enterprise Linux and Debian/Ubuntu) seems like a lot of work. Especially if you consider that the main foreman-installer package would need to be rebuilt after each module change as it contains generated files based on the modules which are too expensive to generate at runtime.

                  So we can ship the modules inside the foreman-installer source release, thus vendoring those modules into the installer release.

To do so we use librarian-puppet with a Puppetfile, either pinned by a Puppetfile.lock for stable releases or letting librarian-puppet fetch the latest for nightly snapshots.
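For readers who have not used librarian-puppet before, a Puppetfile entry is roughly of this shape (the module name, URL, and ref here are illustrative):

mod 'theforeman-pulpcore',
  :git => 'https://github.com/theforeman/puppet-pulpcore.git',
  :ref => 'master'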

                  This works beautifully for changes that land in the development and release branches of our repositories - regardless if it's foreman-installer.git or any of the puppet-*.git ones. It also works nicely for pull-requests against foreman-installer.git.

                  But because the puppet-* repositories do not map to packages, we assumed it wouldn't work well for pull-requests against those.

                  How can we solve this?

                  Well, the "obvious" solution is to build the foreman-installer package via Packit also for pull-requests against the puppet-* repositories. However, as usual, the devil is in the details.

                  Packit by default clones the repository of the pull-request and tries to create a source tarball from that using git archive. As this might be too simple for many projects, one can define a custom create-archive action that runs after the pull-request has been cloned and produces the tarball instead. We already use that in the Packit configuration for foreman-installer to run the pkg:generate_source rake target which executes librarian-puppet for us.

                  But now the pull-request is against one of the Puppet modules, so Packit will clone that, not the installer.

                  We gotta clone foreman-installer on our own. And then point librarian-puppet at the pull-request. Fun.

                  Cloning is relatively simple, call git clone -- sorry Packit/Copr infrastructure.

                  But the Puppet module pull-request? One can use :git => 'https://git.example.com/repo.git' in the Puppetfile to fetch a git repository. In fact, that's what we already do for our nightly snapshots. It also supports :ref => 'some_branch_or_tag_name', if the remote HEAD is not what you want.

                  My brain first went "I know this! GitHub has this magic refs/pull/1/head and refs/pull/1/merge refs you can checkout to get the contents of the pull-request without bothering to add a remote for the source of the pull-request". Well, this requires to know the ID of the pull-request and Packit does not expose that in the environment variables available during create-archive.

                  Wait, but we already have a checkout. Can we just say :git => '../.git'? Cloning a .git folder is totally possible after all.

                  [Librarian]     --> fatal: repository '../.git' does not exist
                  Could not checkout ../.git: fatal: repository '../.git' does not exist
                  

                  Seems librarian disagrees. Damn. (Yes, I checked, the path exists.)

                  💡 does it maybe just not like relative paths?! Yepp, using an absolute path absolutely works!

                  For some reason it ends up checking out the default HEAD of the "real" (GitHub) remote, not of ../. Luckily this can be fixed by explicitly passing :ref => 'origin/HEAD', which resolves to the branch Packit created for the pull-request.

                  Now we just need to put all of that together and remember to execute all commands from inside the foreman-installer checkout as that is where all our vendoring recipes etc live.

                  Putting it all together

                  Let's look at the diff between the packit.yaml for foreman-installer and the one I've proposed for puppet-pulpcore:

                  --- a/foreman-installer/.packit.yaml    2024-05-14 21:45:26.545260798 +0200
                  +++ b/puppet-pulpcore/.packit.yaml  2024-05-14 21:44:47.834162418 +0200
                  @@ -18,13 +18,15 @@
                   actions:
                     post-upstream-clone:
                       - "wget https://raw.githubusercontent.com/theforeman/foreman-packaging/rpm/develop/packages/foreman/foreman-installer/foreman-installer.spec -O foreman-installer.spec"
                  +    - "git clone https://github.com/theforeman/foreman-installer"
                  +    - "sed -i '/theforeman.pulpcore/ s@:git.*@:git => \"#{__dir__}/../.git\", :ref => \"origin/HEAD\"@' foreman-installer/Puppetfile"
                     get-current-version:
                  -    - "sed 's/-develop//' VERSION"
                  +    - "sed 's/-develop//' foreman-installer/VERSION"
                     create-archive:
                  -    - bundle config set --local path vendor/bundle
                  -    - bundle config set --local without development:test
                  -    - bundle install
                  -    - bundle exec rake pkg:generate_source
                  +    - bash -c "cd foreman-installer && bundle config set --local path vendor/bundle"
                  +    - bash -c "cd foreman-installer && bundle config set --local without development:test"
                  +    - bash -c "cd foreman-installer && bundle install"
                  +    - bash -c "cd foreman-installer && bundle exec rake pkg:generate_source"
                  
                  1. It clones foreman-installer (in post-upstream-clone, as that felt more natural after some thinking)
                  2. It adjusts the Puppetfile to use #{__dir__}/../.git as the Git repository, abusing the fact that a Puppetfile is really just a Ruby script (sorry Ben!) and knows the __dir__ it lives in
                  3. It fetches the version from the foreman-installer checkout, so it's sort-of reasonable
                  4. It performs all building inside the foreman-installer checkout
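
If you want to reproduce locally what Packit does for a puppet-* pull-request, the whole dance boils down to something like the following sketch, run from inside your module checkout. The sed expression mirrors the one in the packit.yaml above, just with the absolute path expanded by the shell instead of Ruby's __dir__; adjust the module name for your repository:

# Sketch: vendor the local module checkout into a fresh foreman-installer clone
git clone https://github.com/theforeman/foreman-installer
# locally you may want your branch name instead of origin/HEAD
sed -i "/theforeman.pulpcore/ s@:git.*@:git => \"$(pwd)/.git\", :ref => \"origin/HEAD\"@" foreman-installer/Puppetfile
cd foreman-installer
bundle config set --local path vendor/bundle
bundle config set --local without development:test
bundle install
bundle exec rake pkg:generate_source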

                  Can this be used in other scenarios?

                  I hope so! Vendoring is not unheard of. And testing your "consumers" (dependents? naming is hard) is good style anyway!


[1] three Ruby modules in a trench coat, so to say

                  14 May, 2024 08:12PM

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Faizul "Piju" 9M2PJU: Linux in the Ham Shack: Enhancing Amateur Radio with Open Source Power

                  The world of amateur radio has always been closely tied to technological innovation, and in recent years, Linux has become an essential tool in the ham shack. From digital modes and logging to software-defined radio (SDR) and satellite communication, Linux offers a robust, flexible, and cost-effective platform for amateur radio operators. This blog post will explore what amateur radio enthusiasts can do with Linux, the top usages in the ham radio world, and why Ubuntu stands out as an excellent choice for a Linux operating system.

                  What Can Amateur Radio Do with Linux?

                  Amateur radio operators, or hams, can leverage Linux in various ways to enhance their radio experience. Here are some key capabilities:

                  1. Digital Mode Operation: Linux supports various digital modes like PSK31, RTTY, FT8, and JT65 through software like FLDigi and WSJT-X.
                  2. Logging and Logbook Management: Applications like CQRLOG provide comprehensive logging capabilities, including integration with online logbooks like Logbook of The World (LoTW) and eQSL.
                  3. Rig Control: Software such as Hamlib allows hams to control their transceivers from their computer, making it easier to manage frequencies, modes, and other settings.
                  4. Software-Defined Radio (SDR): GNU Radio and other SDR software enable hams to build and experiment with SDR applications, opening up new possibilities for signal processing and communication.
                  5. Satellite Tracking: Programs like GPredict allow hams to track satellites and plan contacts using their orbital data.
                  6. APRS (Automatic Packet Reporting System): Xastir is a popular APRS client for Linux, helping hams track and map real-time data from APRS networks.

                  Top Linux Usages in the Amateur Radio World

                  1. Digital Mode Operations with FLDigi and WSJT-X

                  • FLDigi: A versatile program that supports numerous digital modes. It allows for seamless operation of modes such as PSK31, RTTY, and Olivia.
                  • WSJT-X: Widely used for weak signal communication, this software is essential for modes like FT8 and JT65, making it possible to make contacts even under poor conditions.

                  2. Logging and Station Management with CQRLOG

                  • CQRLOG: Specifically designed for Linux, CQRLOG offers advanced logging features and integrates well with other amateur radio software. It uses MySQL for robust log management and supports direct online logbook uploads.

                  3. Rig Control with Hamlib

                  • Hamlib: Provides a standardized interface for controlling various transceivers. It integrates with software like FLDigi, CQRLOG, and WSJT-X, allowing for automated rig control and frequency management.

                  4. Software-Defined Radio with GNU Radio

                  • GNU Radio: An open-source toolkit that enables hams to create custom SDR applications. It’s a powerful platform for those interested in signal processing and experimenting with new communication methods.

                  5. Satellite Tracking with GPredict

                  • GPredict: A satellite tracking and prediction application that helps hams track the movement of amateur radio satellites and plan their communications accordingly.

                  6. APRS with Xastir

                  • Xastir: An APRS client that allows hams to send and receive APRS data, track positions, and visualize data on maps. It supports a wide range of map formats and interfaces with GPS devices and TNCs.

                  7. Slow Scan Television with QSSTV

                  • QSSTV: Enables the transmission and reception of SSTV images, allowing hams to send pictures over HF, VHF, and UHF bands. It supports multiple SSTV modes and provides real-time image processing.

                  Why Ubuntu?

                  Ubuntu is a popular Linux distribution that stands out for several reasons, making it an excellent choice for amateur radio operators:

                  1. Ease of Use: Ubuntu’s user-friendly interface and extensive documentation make it accessible to both beginners and experienced users.
2. Software Repositories: Ubuntu has vast software repositories that include a wide range of amateur radio applications, making installation and updates straightforward (see the example after this list).
                  3. Community Support: A strong community of users and developers provides robust support, ensuring help is available when needed.
                  4. Stability and Security: Regular updates and a focus on security make Ubuntu a reliable platform for running amateur radio software.
                  5. Customization: Ubuntu’s flexibility allows hams to customize their setup according to their specific needs, whether for digital modes, SDR, or satellite tracking.
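
Most of the applications mentioned above can be installed straight from the Ubuntu repositories. As a rough example (package names may vary slightly between Ubuntu releases):

# Install a typical set of amateur radio applications from the Ubuntu archives
sudo apt update
sudo apt install fldigi wsjtx cqrlog gpredict xastir qsstv libhamlib-utils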

                  Conclusion

                  Linux, and particularly Ubuntu, offers a powerful and versatile platform for amateur radio enthusiasts. From digital modes and logging to rig control and SDR, the open-source nature of Linux provides endless possibilities for innovation and experimentation in the ham shack. Embrace the flexibility, stability, and community support that Linux offers, and take your amateur radio operations to the next level. Happy operating!

                  For more insights and discussions about Linux in the ham shack, please visit Linux in the Ham Shack.

                  The post Linux in the Ham Shack: Enhancing Amateur Radio with Open Source Power appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  14 May, 2024 05:56PM

                  Faizul "Piju" 9M2PJU: Installing Ubuntu on Apple MacBooks with ARM CPUs: A Step-by-Step Guide

                  Apple’s transition to ARM-based processors, specifically the M1 and M2 chips, has brought significant performance and efficiency improvements. However, running alternative operating systems like Ubuntu on these devices can be challenging due to architectural differences and proprietary hardware. This guide will walk you through the process of installing Ubuntu on Apple Silicon MacBooks, utilizing the Asahi Linux project and virtualization tools.

                  What You Need to Know

                  Before diving into the installation process, it’s important to understand that running Ubuntu natively on Apple Silicon is still a work in progress. The Asahi Linux project is actively developing support for Apple’s ARM architecture, and virtualization tools can provide a more straightforward solution in the meantime.

                  Preparing for Installation

                  Backup Your Data

                  Ensure all important data is backed up using Time Machine or another backup solution. Installing a new operating system can result in data loss if not done correctly.

                  Check System Requirements

                  • Apple MacBook with M1 or M2 chip
                  • At least 4 GB of RAM (8 GB or more recommended)
                  • 25 GB of free disk space
                  • USB drive with at least 16 GB of space

                  Method 1: Installing Ubuntu via Virtualization

                  Using virtualization tools like UTM allows you to run Ubuntu on your ARM-based MacBook without modifying the existing macOS installation.

                  Step-by-Step Guide to Using UTM

1. Download UTM:
• Visit UTM’s website and download the latest version of the application.
2. Download Ubuntu ARM Image:
• Get an ARM64 (arm64) Ubuntu image from the official Ubuntu website.
3. Create a New Virtual Machine:
• Open UTM and click on the “+” button to create a new virtual machine.
• Select “Virtualize” (not “Emulate”) for better performance and choose the ARM64 architecture.
4. Configure the Virtual Machine:
• Load the Ubuntu ARM image as the boot ISO.
• Allocate appropriate resources (e.g., 4 GB of RAM and sufficient storage space).
5. Install Ubuntu:
• Start the virtual machine.
• Follow the on-screen instructions to install Ubuntu within the virtual environment.
6. Post-Installation Setup:
• Once Ubuntu is installed, update the system:

  sudo apt update
  sudo apt upgrade

                  Method 2: Installing Ubuntu Natively with Asahi Linux

                  Asahi Linux is a project aimed at running Linux natively on Apple Silicon. While still under development, it provides a pathway to install Ubuntu on these devices.

                  Step-by-Step Guide to Using Asahi Linux

1. Download the Asahi Linux Installer:
• Get the installer from the Asahi Linux website.
2. Prepare a Bootable USB Drive:
• Use a tool like Etcher to create a bootable USB drive with the Asahi Linux installer.
3. Boot into Recovery Mode:
• Restart your MacBook and hold down the Power button to enter the startup options screen.
• Select “Options” to boot into Recovery Mode.
4. Disable System Integrity Protection (SIP):
• Open the Terminal from the Utilities menu in Recovery Mode.
• Run the command: csrutil disable
• Reboot your MacBook.
5. Run the Asahi Linux Installer:
• Insert the bootable USB drive and reboot your MacBook, holding down the Option key.
• Select the USB drive from the startup options.
• Follow the on-screen instructions to install Asahi Linux.
6. Partition Your Disk:
• During the installation process, partition your disk to allocate space for Ubuntu.
7. Complete the Installation:
• The installer will guide you through the remaining steps. Once complete, reboot your MacBook and select the Asahi Linux partition.
8. Install Ubuntu:
• Follow the standard Ubuntu installation process within the Asahi Linux environment.

                  Post-Installation Steps

                  After installing Ubuntu via Asahi Linux, perform additional setup:

1. Update the System:
• Open a terminal and run:

  sudo apt update
  sudo apt upgrade

2. Install Additional Drivers:
• Refer to the Asahi Linux documentation for any specific drivers or configurations needed for optimal performance.
3. Customize Your Environment:
• Install your preferred applications and configure your desktop environment as needed.

                  Conclusion

                  Installing Ubuntu on Apple Silicon MacBooks is an evolving process, with tools like Asahi Linux and UTM making it increasingly feasible. Whether you choose to run Ubuntu natively or through virtualization, this guide provides the necessary steps to get started. Keep an eye on the Asahi Linux project for ongoing updates and improvements, and enjoy exploring the versatility of Ubuntu on your powerful MacBook.

                  For the latest updates and detailed documentation, visit the Asahi Linux website. Happy experimenting!

                  The post Installing Ubuntu on Apple MacBooks with ARM CPUs: A Step-by-Step Guide appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  14 May, 2024 05:41PM

                  hackergotchi for VyOS

                  VyOS

                  VyOS 1.4.0-epa3 release

Hello, Community!

                  The VyOS 1.4.0-epa3 (Early Production Access) release is now available to subscribers. It includes a fix for CVE-2024-2961 — the recently discovered buffer overflow vulnerability in GNU libc.

This is the final EPA of the 1.4.0/Sagitta release, which includes all supported flavors (hardware and virtual). It also includes a few configuration syntax changes (all automatically migrated) that were required to make old configs work or to unblock improvement paths, such as implementing the DHCP server active/active high availability mode in addition to the old active/passive failover mechanism.

                  Please let us know if you notice any anomalies! We expect the 1.4.0 GA release in two weeks if no significant issues are detected.

                  14 May, 2024 04:38PM by Daniil Baturin (daniil@sentrium.io)

                  hackergotchi for GreenboneOS

                  GreenboneOS

                  PITS 2024 Public IT Security

                  Save the date: The “German Congress for IT and Cyber Security in Government and Administration” (June 12 to 13, 2024) provides information on current trends, strategies and solutions in IT security.

                  In the main program: “IT support for early crisis detection” (Moderation: Dr. Eva-Charlotte Proll, Editor-in-Chief and Publisher, Behörden Spiegel).

                  Participants:

                  • Dr. Jan-Oliver Wagner, Chief Executive Officer Greenbone
                  • Carsten Meywirth, Head of the Cybercrime Division, Federal Criminal Police Office
                  • Generalmajor Dr. Michael Färber, Head of Planning and Digitization, Cyber & Information Space Command
                  • Katrin Giebel, Branch Manager, VITAKO Bundesverband kommunaler IT-Dienstleister e.V.
                  • Dr. Dirk Häger, Head of the Operational Cybersecurity Department, Federal Office for Information Security (BSI)

                  Where? Berlin, Hotel Adlon Kempinski, Unter den Linden 77
                  When? 13.06.2024; 9:40 a.m.

                  Vulnerabilities in IT systems are increasingly being exploited by malicious attackers. You can protect your IT systems with vulnerability management. Visit us in our lounge at stand 44 – we look forward to seeing you!

                  Registration: https://www.public-it-security.de/anmeldung/

                  14 May, 2024 02:10PM by Greenbone AG

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Julian Andres Klode: The new APT 3.0 solver

APT 2.9.3 introduces the first iteration of the new solver codenamed solver3, now available via the --solver 3.0 option. The new solver works fundamentally differently from the old one.
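
If you want to give it a spin without touching your system, a simulated run is enough to see the new solver's decisions (a sketch; the package name is just an example):

# -s only simulates the operation, so nothing is actually installed
apt -s install htop --solver 3.0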

                  How does it work?

                  Solver3 is a fully backtracking dependency solving algorithm that defers choices to as late as possible. It starts with an empty set of packages, then adds the manually installed packages, and then installs packages automatically as necessary to satisfy the dependencies.

                  Deferring the choices is implemented multiple ways:

                  First, all install requests recursively mark dependencies with a single solution for install, and any packages that are being rejected due to conflicts or user requests will cause their reverse dependencies to be transitively marked as rejected, provided their or group cannot be solved by a different package.

                  Second, any dependency with more than one choice is pushed to a priority queue that is ordered by the number of possible solutions, such that we resolve a|b before a|b|c.

Not just by the number of solutions, though. One important point to note is that optional dependencies, that is, Recommends, are always sorted after mandatory dependencies. Do note that Recommended packages do not “nest” in backtracking - the dependencies of a Recommended package are themselves not optional, so they will have to be resolved before the next Recommended package is seen in the queue.

Another important step in deferring choices is extracting the common dependencies of a package across its versions and then installing them before we even decide which of its versions we want to install - one of the dependencies might cycle back to a specific version after all.

Decisions are recorded at a certain decision level. If we reach a conflict, we backtrack to the previous decision level, mark the decision we made (install X) in the inverse (DO NOT INSTALL X), reset the state of all decisions made at the higher level, and restore any dependencies that are no longer resolved to the work queue.

                  Comparison to SAT solver design.

If you have studied SAT solver design, you’ll find that essentially this is a DPLL solver without pure literal elimination. A pure literal elimination phase would not work for a package manager: first, negative pure literals (packages that everything conflicts with) do not exist; and second, positive pure literals (packages nothing conflicts with) we do not want to mark for install - we want to install as little as possible (well, subject to policy).

                  As part of the solving phase, we also construct an implication graph, albeit a partial one: The first package installing another package is marked as the reason (A -> B), the same thing for conflicts (not A -> not B).

                  Once we have added the ability to have multiple parents in the implication graph, it stands to reason that we can also implement the much more advanced method of conflict-driven clause learning; where we do not jump back to the previous decision level but exactly to the decision level that caused the conflict. This would massively speed up backtracking.

                  What changes can you expect in behavior?

                  The most striking difference to the classic APT solver is that solver3 always keeps manually installed packages around, it never offers to remove them. We will relax that in a future iteration so that it can replace packages with new ones, that is, if your package is no longer available in the repository (obsolete), but there is one that Conflicts+Replaces+Provides it, solver3 will be allowed to install that and remove the other.

                  Implementing that policy is rather trivial: We just need to queue obsolete | replacement as a dependency to solve, rather than mark the obsolete package for install.

                  Another critical difference is the change in the autoremove behavior: The new solver currently only knows the strongest dependency chain to each package, and hence it will not keep around any packages that are only reachable via weaker chains. A common example is when gcc-<version> packages accumulate on your system over the years. They all have Provides: c-compiler and the libtool Depends: gcc | c-compiler is enough to keep them around.

                  New features

                  The new option --no-strict-pinning instructs the solver to consider all versions of a package and not just the candidate version. For example, you could use apt install foo=2.0 --no-strict-pinning to install version 2.0 of foo and upgrade - or downgrade - packages as needed to satisfy foo=2.0 dependencies. This mostly comes in handy in use cases involving Debian experimental or the Ubuntu proposed pockets, where you want to install a package from there, but try to satisfy from the normal release as much as possible.

                  The implication graph building allows us to implement an apt why command, that while not as nicely detailed as aptitude, at least tells you the exact reason why a package is installed. It will only show the strongest dependency chain at first of course, since that is what we record.

                  What is left to do?

                  At the moment, error information is not stored across backtracking in any way, but we generally will want to show you the first conflict we reach as it is the most natural one; or all conflicts. Currently you get the last conflict which may not be particularly useful.

                  Likewise, errors currently are just rendered as implication graphs of the form [not] A -> [not] B -> ..., and we need to put in some work to present those nicely.

                  The test suite is not passing yet, I haven’t really started working on it. A challenge is that most packages in the test suite are manually installed as they are mocked, and the solver now doesn’t remove those.

                  We plan to implement the replacement logic such that foo can be replaced by foo2 Conflicts/Replaces/Provides foo without needing to be automatically installed.

                  Improving the backtracking to be non-chronological conflict-driven clause learning would vastly enhance our backtracking performance. Not that it seems to be an issue right now in my limited testing (mostly noble 64-bit-time_t upgrades). A lot of that complexity you have normally is not there because the manually installed packages and resulting unit propagation (single-solution Depends/Reverse-Depends for Conflicts) already ground us fairly far in what changes we can actually make.

                  Once all the stuff has landed, we need to start rolling it out and gather feedback. On Ubuntu I’d like automated feedback on regressions (running solver3 in parallel, checking if result is worse and then submitting an error to the error tracker), on Debian this could just be a role email address to send solver dumps to.

                  At the same time, we can also incrementally start rolling this out. Like phased updates in Ubuntu, we can also roll out the new solver as the default to 10%, 20%, 50% of users before going to the full 100%. This will allow us to capture regressions early and fix them.

                  14 May, 2024 11:26AM

                  Ubuntu Blog: Ubuntu Pro for EKS is now generally available

May 14, 2024 – Canonical, the publisher of Ubuntu, is delighted to announce the general availability of Ubuntu Pro for Amazon Elastic Kubernetes Service (Amazon EKS). This expansion brings robust security offerings to Amazon EKS, AWS’s managed Kubernetes service for running Kubernetes on Amazon Web Services (AWS) and in on-premises data centers: enhanced uptime and security through Kernel Livepatch, and unrestricted access to Pro containers.

                  Amazon EKS clusters running containers with additional security

Running an Ubuntu Pro cluster gives customers access to an unlimited registry of containers, with expanded security coverage for their applications and uninterrupted security patches to the kernel on the underlying nodes.

                  “We are excited to announce Ubuntu Pro as a new option for Amazon Elastic Kubernetes Service (Amazon EKS) users,” said Barry Cooks, VP Kubernetes at Amazon Web Services (AWS). “At AWS, we strive to provide our customers with the broadest selection of tools and services to help them build, deploy, and run their applications. With Ubuntu Pro now available on Amazon EKS, customers can migrate their on-premises Ubuntu Pro estates to Amazon EKS with the same secure, compliant, and supported open-source operating system that they know and trust.”

                  “At Civis Analytics the stability and security provided by Ubuntu Pro is critical to our security posture,” said Sean Mann, Director of Infrastructure and Security at Civis Analytics. “The integration of Ubuntu Pro with Amazon EKS streamlines the deployment process, eliminating the need for manual configuration steps previously required for Ubuntu Pro implementation within Amazon EKS environments. This will streamline our image-building process and give DevOps engineers more time to focus on other critical efforts.”

                  Amazon EKS extended support coverage with Ubuntu Pro

                  The EKS extended support period provides an additional 12 months of support for Kubernetes minor versions. Ubuntu Pro for Amazon EKS extends the support to the cluster (worker nodes), aligning with the security coverage period for the complete Amazon EKS lifecycle.

                  Ubuntu Pro for EKS: how it works

                  Ubuntu Pro for Amazon EKS is an optimised Amazon Machine Image (AMI) based on the official Ubuntu Minimal LTS image. It includes the AWS optimised kernel, Amazon EKS binaries and an additional layer of Ubuntu Pro services such as kernel livepatch and Expanded Security Maintenance. 

                  These images have been built specifically for the Amazon EKS service, and are therefore not intended as general OS images.

                  Running Amazon EKS with Ubuntu Pro clusters will also entitle customers to run containers with Expanded Security Maintenance, covering all the third-party open source software. Ubuntu Pro also provides compliance tools, such as CIS hardening and FIPS for FedRAMP.

                  “Canonical and AWS have a longstanding relationship delivering a reliable, secure, and consistent experience across their ecosystems,” said Alex Gallagher, Vice President of Public Cloud at Canonical. “With Ubuntu Pro now available on EKS, customers can launch containers in Amazon EKS and take advantage of all security and compliance features for Ubuntu Pro including Livepatch and FIPS. This advancement highlights our ongoing commitment to facilitating open-source software, ensuring freedom and security.”

                  How to launch EKS clusters with Ubuntu Pro

                  Ubuntu Pro on EKS is available on EC2 and also the AWS Marketplace. Customers can deploy their Ubuntu Pro clusters using EC2 Launch Templates or eksctl.

                  Learn more about launching Ubuntu Pro clusters in the official documentation page.

                  Contact us if you need phone and ticket support or wish to discuss specific needs at aws@canonical.com or https://ubuntu.com/aws/support.

                  About Canonical

                  Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems: from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers ranging from top tech brands and emerging startups, to governments and home users, Canonical delivers trusted open source for everyone. 

                  Learn more at https://canonical.com/ 

                  14 May, 2024 08:26AM

                  Ubuntu Blog: Deploy an on-premise data hub with Canonical MAAS, Spark, Kubernetes and Ceph

                  Download the Spark reference architecture guide

                  In this post we’ll explore deploying a fully operational, on-premise data hub using Canonical’s data centre and cloud automation solutions MAAS (Metal as a Service) and Juju.

                  MAAS is the industry standard open source solution for provisioning and managing physical servers in the data centre. With the rich featureset of MAAS, you can build a bare-metal cloud for any purpose. This time we’ll use MAAS to deploy a data hub for big data storage and distributed, parallel processing with Charmed Spark – Canonical’s solution for Apache Spark. Spark has become the de-facto solution for big data processing these days.

                  Juju is Canonical’s take on cloud orchestration. Juju orchestrates “Charms”, which are software operators that help you to handle deployment, configuration and operation of complex distributed infrastructure and platform software stacks. Juju works with both Virtual Machines and Kubernetes so that you can build up complex service architectures in a flexible manner. Juju Charms are available for many different purposes – from cloud building through to Kubernetes, distributed databases and beyond to AI/MLOps.

                  Sometimes the cloud just doesn’t cut it. For whatever reason you want to deploy your data hub on premise – this could be because of the scale of the service, or cost, the sensitivity of the data being processed, or simply a matter of company policy. MAAS makes it straightforward to manage data centre systems at scale, and with Juju we can orchestrate those systems and the software applications that they run in sophisticated ways.

                  The solution we’ll deploy will look like this:

[Figure: Spark on Kubernetes data hub architecture diagram]

                  Metal head

We’ll first deploy MAAS and get it to PXE boot some physical servers. For the physical servers, I’ve used five Lenovo ThinkCentre computers that I bought refurbished from Amazon, although you could use any suitable x86_64 hardware that you have available. I found that ThinkCentres offer a relatively low-cost approach to building a home lab. Although they vary in price, I picked them up for about $200 per system plus tax, which gives me 16GB of RAM, 4 CPU cores and 250GB of storage per system – enough for a home lab and for this post.

                  I installed MAAS on my Ubuntu 22.04 LTS laptop, so my laptop acts as the controller for the home lab.

                  To get started, we’ll run the following commands to install MAAS. We’ll also install a specially preconfigured test PostgreSQL database for MAAS, to make home lab configuration a bit easier. For production, you’d want a dedicated external PostgreSQL database cluster.

                  sudo snap install maas
                  sudo snap install maas-test-db

                  The next step is to initialise and configure MAAS so that we can log in to its Web UI.

                  sudo maas init region+rack --database-uri maas-test-db:///
                  MAAS URL [default=http://192.168.86.45:5240/MAAS]:

                  When you’ve entered a suitable URL, or accepted the default, the following prompt will appear:

                  MAAS has been set up.
                  
                  If you want to configure external authentication or use
                  MAAS with Canonical RBAC, please run
                  
                    sudo maas configauth
                  
                  To create admins when not using external authentication, run
                  
                    sudo maas createadmin

                  Let’s go ahead and create that admin account so we can proceed. It’s important to import your SSH public keys from GitHub (gh) or LaunchPad (lp) as MAAS will need these to be able to grant you SSH access to the systems that it deploys. Note as well that MAAS configures the SSH user to be ubuntu.

                  sudo maas createadmin
                  Username: yourusername
                  Password: ******
                  Again: ******
                  Email: yourusername@example.com
                  Import SSH keys [] (lp:user-id or gh:user-id): gh:yourusername

                  There’s one final step: we’ll want an API key so that we can orchestrate MAAS actions with Juju. Let’s make one now for our MAAS user account that we just created.

                  sudo maas apikey --username yourusername

                  Enable TLS on MAAS

                  Let’s set up some wire encryption for our services, starting by generating a root CA certificate and server certificate for the MAAS API and Web UI.

                  MAAS_IP=192.168.86.45
                  sudo apt install mkcert -y
                  
                  mkcert -install
                  mkcert maas.datahub.demo ${MAAS_IP}
                  cp ${HOME}/.local/share/mkcert/rootCA.pem .

                  Next, we’ll enable TLS on our MAAS region and rack controller.

                  sudo cp rootCA.pem /var/snap/maas/common
                  sudo cp maas.datahub.demo+1.pem /var/snap/maas/common
                  sudo cp maas.datahub.demo+1-key.pem /var/snap/maas/common
                  
                  echo "y" | sudo maas config-tls enable --port 5443 --cacert /var/snap/maas/common/rootCA.pem /var/snap/maas/common/maas.datahub.demo+1-key.pem /var/snap/maas/common/maas.datahub.demo+1.pem

                  Now we should be able to log into the MAAS web UI and start adding our host systems to the MAAS inventory. You’ll need to step past the security warning, as this is a self-signed certificate. Obviously for production use you should use a real CA (Certificate Authority) – whether a third party like Let’s Encrypt or a CA that your organisation manages.

[Screenshot: MAAS login screen]

                  Once you’ve logged in with the username and password you just created, you’ll need to perform some further configuration steps to get MAAS working like it should.

                  1. Under “Networking”, go to “Subnets”.
                  2. Find the network where you will be adding your servers. For example, my network is 192.168.86.0/24. Click on the VLAN link to the left of this network. The link is likely titled “untagged”.
                  3. Find and click the button “Configure DHCP”.
                  4. Leave the checkbox “MAAS provides DHCP” checked.
                  5. Choose “Provide DHCP from rack controller(s)”.
                  6. Click the dropdown list entitled “Select subnet…” and choose the available subnet
                  7. Choose sensible values for start IP address and end IP address, and set the gateway IP address to your network’s gateway IP address.
                  8. Click the button “Configure DHCP”.

While we’re here, let’s reserve a small range of IP addresses that MAAS won’t touch, which we can use for the MetalLB Kubernetes load balancer and for the Ceph Rados Gateway that we’ll deploy later on.

                  1. Again under “Networking”, “Subnets”.
                  2. Find the network where you will be adding your servers. For example, my network is 192.168.86.0/24. Click on the VLAN link to the left of this network. The link is likely entitled “untagged”.
                  3. Find and click the button “Reserve Range”.
                  4. Click “Reserve Range” from the dropdown menu.
                  5. Take a small range of five IP addresses from within the subnet – I took a start IP of 192.168.86.90 and an end IP of 192.168.86.95
                  6. Click “Reserve”.
                  7. Repeat the process for the Ceph Rados Gateway, this time take just a single IP address for both start and end addresses – I took 192.168.86.180
[Screenshot: MAAS VLAN configuration]

                  Now, back on the Subnets screen, click on the IP address of your main network.

                  • Click the button “Edit”.
                  • In the box labelled “DNS”, add the IP address of your MAAS host – in my case that’s the IP address of my laptop.
                  • Click “Save”.
[Screenshot: MAAS subnet configuration]

                  Ok, so we’ve configured the minimal networking requirements to get MAAS working. Of course, you can go much, much further with MAAS in a data centre environment with sophisticated VLAN, zone, hall and site configurations. But this will be just enough for our home lab and to build our data hub. 

                  Now that those steps are done, we can start enlisting machines. To enlist a machine, you’ll first need to perform the following preparatory steps.

                  On the target machine:

                  • Enter the system BIOS of the machine. In the boot sequence menu, enable network (PXE) booting and make this the first item in the boot order. You might also need to enable PXE IPv4 stack in the networking menu, depending on your BIOS.
                  • While you’re here, check that the CPU has virtualization extensions enabled and if not, enable them as we’ll need this later.
                  • Save your BIOS configuration changes.
[Screenshot: MAAS Machines screen]

                  On the MAAS host:

                  • In the MAAS UI, under Hardware, go to Machines and choose “Add Hardware” from the menu in the top right of the screen. From the dropdown menu, choose “Machine”.
                  • Give your system a name, for instance metal1.
                  • Enter the MAC address of your system’s network adapter. You should be able to find this in the BIOS or otherwise your system may display it on the screen at boot time.
                  • In the “Power Type” dropdown, choose “Manual”.
                  • Click “Save Machine”.

                  Now go back to your target machine, exit the BIOS configuration screen and reboot.

                  At this point, the system should enter PXE boot mode and initiate a network boot from MAAS. The system should also show up in the MAAS machine inventory, in the state “Commissioning”. After a while, the machine will provision and shut itself down. To prevent the machine shutting down all the time and to gain SSH access, we’ll commission it again, by performing the following steps.

                  • Check the checkbox next to the system in the “Machines” screen.
                  • In the dropdown menu labelled “Actions”, choose “Commission…”.
                  • Check the checkbox labelled “Allow SSH access and prevent machine powering off”.
                  • Click the button “Start commissioning for this machine”.
[Screenshot: commissioning a machine in MAAS]

                  You’ll need to power the machine on manually, and at this point it should start reprovisioning. Note that in a real data centre, power cycling the machine is usually done through remote management agents, which MAAS can fully support. Once completed, the machine should be marked “Available” in the “Machines” screen.

                  Next we’ll want to move the system to “deployed” state.

                  • Again, check the checkbox next to the system in the “Machines” screen.
                  • In the dropdown menu labelled “Actions”, choose “Deploy…”.
                  • Check the checkbox labelled “Register as MAAS KVM host”.
                  • You can choose either “LXD” or “libvirt” here – I chose libvirt for my systems.
                  • Click the button “Start deployment for machine”.
[Screenshot: deploying a machine in MAAS]

                  You’ll need to power cycle the machine manually again, and at this point it should start installing Ubuntu Server on the system. Once completed, the machine should be marked “Deployed” in the “Machines” screen.

[Screenshot: machine marked as Deployed in MAAS]

                  That’s all that we need to do to have the system deployed and operational for our purposes. Repeat the process for the other systems that you’ll be using to build the data hub.

                  Back to black (and green)

                  Now that we have some physical hosts to build our data hub on, our next step is going to be to deploy the foundational services, which we’ll do at the command line. The foundation platform services we’ll deploy are:

                  • Juju controllers for orchestration,
                  • a Ceph object storage cluster based on Charmed Ceph,
                  • a Vault server,
                  • And a MicroK8s Kubernetes cluster.

                  We’ll deploy those foundational platform services into VMs running on top of our physical host systems, using Juju. Juju will create, configure, and manage the VMs and the software which runs on them, which will be PXE boot provisioned over the network by MAAS.

                  Get your Juju on

                  First, let’s install and configure Juju so that it knows how to work with our MAAS environment. You’ll need the MAAS API token that we made earlier, as well as the URL of your MAAS server. The following commands will:

                  • Install the Juju client on the local laptop.
                  • Register the MAAS environment with Juju.
• Install a three-node cluster of Juju controllers on VMs on top of our MAAS environment to ensure Juju has high availability.
                  MAAS_URL=https://192.168.86.45:5443/MAAS
                  MAAS_TOKEN=your-maas-api-token
                  MAAS_USER=your-maas-username
                  
                  cat > maas-cloud.yaml <<EOF
                  clouds:
                    maas:
                      type: maas
                      auth-types: [oauth1]
                      endpoint: ${MAAS_URL}
                      ca-certificates:
                      - |
                  $(cat rootCA.pem | sed -e 's/^/      /')
                  EOF
                  
                  cat > cloudinit-userdata.yaml <<EOF
                  cloudinit-userdata: |
                    ca-certs:
                      trusted: 
                      - |
                  $(cat rootCA.pem | sed -e 's/^/      /')
                  EOF
                  
                  cat > maas-credential.yaml <<EOF
                  credentials:
                    maas:
                      ${MAAS_USER}:
                        auth-type: oauth1
                        maas-oauth: ${MAAS_TOKEN}
                  EOF
                  
                  # Install the Juju client
                  sudo snap install juju --channel=3.3/stable
                  sudo snap install juju-wait --channel=latest/edge --classic
                  
                  # Register the MAAS cloud
                  juju add-cloud --client maas -f maas-cloud.yaml
                  juju add-credential maas -f maas-credential.yaml --client
                  
                  # Spin up a Juju controller cluster with HA
                  juju bootstrap maas --config caas-image-repo="public.ecr.aws/juju" \
                       --credential ${MAAS_USER} \
                       --model-default cloudinit-userdata.yaml cloud-controller
                  juju enable-ha
                  

                  We’ve got the first bits down. Note that in this post, we’ll build our data platform with VMs running on top of the physical systems we provisioned, using LXD. But if you wanted, you could also build the data platform directly on bare metal systems by adding an inventory of physical systems using syntax similar to  juju add-machine ssh:ubuntu@hostname. You can learn more about this command in the Juju operations guide.

                  Anyway, let’s set the password on our administrator account for the Juju controllers before we go any further. Run the command and enter your preferred password. You’ll need to enter it twice to confirm.

                  juju change-user-password admin

                  Just so you know, you can always log in to Juju (for example from some other computer configured to manage the Juju controllers) using the following command.

                  juju login -u admin

                  Build up

                  Now that Juju is ready, we’ll build up the foundation components. First we’ll create a Juju “model” – which is like a namespace for resources. Then we’ll deploy a six node MicroK8s Kubernetes cluster into the model we created, and finally we’ll deploy a Ceph object storage cluster into the model. These commands will take a while to run to completion, so in the meantime go and make yourself a nice cup of tea. You can check on progress in a new terminal window by running juju status from time to time.

                  CEPH_VIP=192.168.86.180 # The IP we reserved in MAAS
                  
                  juju add-model charm-stack-base-model maas
                  
                  # Deploy Charmed MicroK8s
                  juju deploy microk8s -n 3 --config hostpath_storage=true --constraints "mem=8G root-disk=40G" --channel=edge; juju-wait
                  juju deploy microk8s microk8s-worker --channel edge --config role=worker --constraints "mem=8G root-disk=40G" -n 3
                  juju integrate microk8s:workers microk8s-worker:control-plane
                  
                  juju expose microk8s
                  
                  # Deploy Charmed Ceph with backend and S3 compatible API 
                  juju deploy ceph-osd -n 3 --storage osd-devices=loop,1G,1; juju-wait
                  juju deploy -n 3 ceph-mon; juju-wait
                  juju deploy -n 3 ceph-radosgw --config vip=${CEPH_VIP}; juju-wait
                  juju deploy --config cluster_count=3 hacluster ceph-radosgw-hacluster; juju-wait
                  
                  juju integrate ceph-radosgw-hacluster:ha ceph-radosgw:ha
                  juju integrate  ceph-radosgw:mon ceph-mon:radosgw
                  juju integrate  ceph-osd:mon ceph-mon:osd
                  juju-wait
                  
                  juju expose ceph-radosgw

                  Lastly, we’ll deploy a Grafana agent component to enable us to monitor MicroK8s with COS (the Canonical Observability Stack), which we’ll deploy shortly.

                  juju deploy grafana-agent --channel edge; juju-wait
                  juju integrate microk8s:cos-agent grafana-agent
                  juju integrate microk8s-worker:cos-agent grafana-agent

                  The commands we just ran should be quite self-explanatory. Just a few details to note:

                  • juju deploy <thing> -n 3  means deploy 3 instances of the thing.
                  • When we deploy ceph-osd, we tell Juju to create one loopback storage device per node, with 1GB of capacity. This is good for testing but not suitable for a production data hub deployment. Ceph can handle many petabytes of storage, so obviously you can change this to reference physical block devices on the host. Learn more in the Charmed Ceph documentation.

Once the commands complete, you should have a working set of foundation services that will enable you to store and retrieve data through the AWS S3 object store API. You should also now have a platform to facilitate distributed, parallel processing of data, in the form of a Kubernetes cluster. Let’s proceed.
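
Before proceeding, it’s worth confirming that everything has settled; for example (model names as created above):

# All units should eventually report an active/idle status
juju status -m charm-stack-base-model
juju models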

                  Deploy Vault

                  Now we’ll deploy Vault, to manage TLS on behalf of Ceph’s Rados Gateway API server.

                  # Install the Vault client
                  sudo snap install vault
                  
                  # Deploy a Vault server
                  juju deploy vault --channel=1.8/stable; juju-wait
                  VAULT_IP=$(juju status | grep vault | tail -n 1 | awk '{ print $5 }')
                  
                  # Configure TLS on Vault server API
                  mkcert vault.datahub.demo ${VAULT_IP}
                  juju config vault ssl-ca="$(cat rootCA.pem | base64)"
                  juju config vault ssl-cert="$(cat vault.datahub.demo+1.pem | base64)"
                  juju config vault ssl-key="$(cat vault.datahub.demo+1-key.pem | base64)"
                  juju-wait
                  
                  # Initialise Vault
                  export VAULT_ADDR="https://${VAULT_IP}:8200"
                  VAULT_OUTPUT=$(vault operator init -key-shares=5 -key-threshold=3)
                  KEY1=$(echo ${VAULT_OUTPUT} | grep "Unseal Key 1" | awk '{ print $4}')
                  KEY2=$(echo ${VAULT_OUTPUT} | grep "Unseal Key 2" | awk '{ print $4}')
                  KEY3=$(echo ${VAULT_OUTPUT} | grep "Unseal Key 3" | awk '{ print $4}')
                  KEY4=$(echo ${VAULT_OUTPUT} | grep "Unseal Key 4" | awk '{ print $4}')
                  KEY5=$(echo ${VAULT_OUTPUT} | grep "Unseal Key 5" | awk '{ print $4}')
                  export VAULT_TOKEN=$(echo ${VAULT_OUTPUT} | grep "Initial Root Token" | awk '{ print $4 }')
                  
                  echo "Do not lose these keys"
                  echo 
                  echo "unseal key 1: ${KEY1}"
                  echo "unseal key 2: ${KEY2}"
                  echo "unseal key 3: ${KEY3}"
                  echo "unseal key 4: ${KEY4}"
                  echo "unseal key 5: ${KEY5}"
                  echo
                  echo "root token: ${VAULT_TOKEN}"
                  
                  vault operator unseal ${KEY1}
                  vault operator unseal ${KEY2}
                  vault operator unseal ${KEY3}
                  
                  # Authorise Juju to manage Vault
                  VAULT_JUJU_TOKEN_OUTPUT=$(vault token create -ttl=10m)
                  VAULT_JUJU_TOKEN=$(echo ${VAULT_JUJU_TOKEN_OUTPUT} | grep token | head -n 1 | awk '{ print $2 }')
                  
                  juju run vault/leader authorize-charm token=${VAULT_JUJU_TOKEN}; juju-wait
                  
                  # Integrate Ceph with Vault
                  juju run vault/leader generate-root-ca
                  juju integrate ceph-radosgw:certificates vault:certificates
                  
                  # Import Ceph CA into local trusted CA store
                  CEPH_ROOT_CA_OUTPUT=$(juju run vault/leader get-root-ca)
                  echo ${CEPH_ROOT_CA_OUTPUT} | tail -n +2 | grep "^\s\s.*$" | sed "s/\ \ //g" > ceph-ca.pem
                  sudo cp ceph-ca.pem /usr/local/share/ca-certificates
                  sudo update-ca-certificates
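
At this point a quick sanity check doesn’t hurt: Vault should report itself as unsealed and the Juju units should be happy (assuming the VAULT_ADDR and VAULT_TOKEN variables exported above are still set):

# "Sealed: false" confirms the three unseal operations worked
vault status
juju status vault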

                  Configure Ceph

                  We need to configure Ceph and grant ourselves access by creating a user account, and we need to create buckets for both our big data and for our Spark job logs. Let’s get to it.

                  We’ll use the Minio mc client to interact with Ceph.

                  sudo snap install minio-mc-nsg
                  sudo snap alias minio-mc-nsg mc
                  
                  # Create a Ceph user account
                  CEPH_RESPONSE_JSON=$(juju ssh ceph-mon/leader 'sudo radosgw-admin user create \
                     --uid="ubuntu" --display-name="Charmed Spark user"')
                  
                  # Get the account credentials
                  CEPH_ACCESS_KEY_ID=$(echo ${CEPH_RESPONSE_JSON} | yq -r '.keys[].access_key')
                  CEPH_SECRET_ACCESS_KEY=$(echo ${CEPH_RESPONSE_JSON} | yq -r '.keys[].secret_key')
                  
                  # Configure mc to work with Ceph
                  mc config host add ceph-radosgw https://${CEPH_VIP}  \
                    ${CEPH_ACCESS_KEY_ID} ${CEPH_SECRET_ACCESS_KEY}
                  
                  mc mb ceph-radosgw/spark-history
                  mc mb ceph-radosgw/data

                  Let’s add bucket policies to the two object storage buckets we just made, to grant access to just our ubuntu user.

                  cat > policy-data-bucket.json <<EOF
                  {
                    "Version": "2012-10-17",
                    "Id": "s3policy1",
                    "Statement": [{
                      "Sid": "BucketAllow",
                      "Effect": "Allow",
                      "Principal": {"AWS": ["arn:aws:iam::user/ubuntu"]},
                      "Action": [ "s3:ListBucket", "s3:PutObject", "s3:GetObject" ],
                      "Resource": [
                        "arn:aws:s3:::data", "arn:aws:s3:::data/*"
                      ]
                    }]
                  }
                  EOF
                  
                  cat > policy-spark-history-bucket.json <<EOF
                  {
                    "Version": "2012-10-17",
                    "Id": "s3policy2",
                    "Statement": [{
                      "Sid": "BucketAllow",
                      "Effect": "Allow",
                      "Principal": {"AWS": ["arn:aws:iam::user/ubuntu"]},
                      "Action": [ "s3:ListBucket", "s3:PutObject", "s3:GetObject" ],
                      "Resource": [
                        "arn:aws:s3:::spark-history", "arn:aws:s3:::spark-history/*"
                      ]
                    }]
                  }
                  EOF
                  
                  mc policy set-json ./policy-data-bucket.json ceph-radosgw/data
                  mc policy set-json ./policy-spark-history-bucket.json ceph-radosgw/spark-history
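To double-check that the policies took effect, you can read them back with mc. This assumes your mc build exposes the same policy subcommands used above:

# Optional: read the bucket policies back
mc policy get-json ceph-radosgw/data
mc policy get-json ceph-radosgw/spark-history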

                  All set.

                  Start deploying Kubernetes applications

                  We can use Juju to deploy and manage our entire platform, including the bits that run on Kubernetes. The first step is to make Juju aware of our Kubernetes environment.

                  KUBECONF="$(juju exec --unit microk8s/leader -- microk8s config)"
                  echo "${KUBECONF}" | juju add-k8s microk8s-cloud --controller cloud-controller

                  Now that Juju knows about our Kubernetes platform, we can deploy stuff to it. Let’s start by getting MetalLB on there.

                  METALLB_RANGE=192.168.86.90-192.168.86.95
                  juju add-model metallb-system microk8s-cloud
                  
                  juju deploy metallb --channel 1.29/beta --trust
                  juju config metallb iprange="${METALLB_RANGE}"
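It's worth letting MetalLB settle before moving on. A quick, optional check:

# Optional: wait for MetalLB to reach active/idle, then review the model
juju-wait
juju status -m metallb-system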

                  Spark History Server

                  Next we’ll create a Kubernetes namespace where our Spark jobs will run, and also deploy the Spark History Server into it. Spark History Server is Spark users’ go-to web app for troubleshooting Spark jobs. These commands will configure the Spark History Server to read Spark job logs from the bucket spark-history in Ceph, under the key prefix spark-events.

Note that when we deploy Spark History Server, we choose to set constraints to enable more granular control over how much of the cluster’s resources the service consumes. In this case we limit memory to 1GB of RAM and cpu-power to 100 millicores, which is about 1/10th of a CPU core.

                  juju add-model spark-model microk8s-cloud
                  
                  # Deploy the Spark History Server and supporting cast
                  juju deploy spark-history-server-k8s --constraints="mem=1G cpu-power=100"
                  juju deploy s3-integrator --channel=latest/edge
                  juju deploy traefik-k8s --trust
                  juju-wait
                  
                  # Connect History Server to S3 bucket
                  juju config s3-integrator bucket="spark-history" path="spark-events" \
                    endpoint=https://${CEPH_VIP} tls-ca-chain="$(cat ceph-ca.pem | base64)"
                  
                  juju run s3-integrator/leader sync-s3-credentials access-key=${CEPH_ACCESS_KEY_ID} secret-key=${CEPH_SECRET_ACCESS_KEY}
                  
                  juju integrate s3-integrator spark-history-server-k8s
                  juju integrate spark-history-server-k8s traefik-k8s

                  Observability stack

                  Nice. Now we’ll set up the Canonical Observability Stack, which is an integrated observability solution built on Grafana, Loki and Prometheus for metrics, logs and alerting. 

                  juju add-model cos-model microk8s-cloud
                  
                  # Deploy the COS “Lite” Juju bundle
                  curl -L https://raw.githubusercontent.com/canonical/cos-lite-bundle/main/overlays/storage-small-overlay.yaml -O
                  
                  juju deploy cos-lite \
                    --trust \
                    --overlay ./storage-small-overlay.yaml
                  
                  # Deploy and integrate supporting cast for COS
                  juju deploy cos-configuration-k8s --config git_repo=https://github.com/canonical/charmed-spark-rock --config git_branch=dashboard \
                    --config git_depth=1 --config grafana_dashboards_path=dashboards/prod/grafana/
                  juju deploy prometheus-scrape-config-k8s scrape-interval-config --config scrape_interval=5
                  juju-wait
                  
                  juju integrate cos-configuration-k8s grafana
                  juju integrate scrape-interval-config prometheus-pushgateway-k8s
juju integrate scrape-interval-config:metrics-endpoint prometheus:metrics-endpoint
                  
                  # Set up cross-model relation offers
                  juju offer prometheus:receive-remote-write prometheus
                  juju offer loki:logging loki
                  juju offer grafana:grafana-dashboard grafana

The last three commands above enable applications that Juju manages in other “models”, regardless of whether they are deployed on Kubernetes or directly on VMs, to integrate with the observability components.
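If you want to confirm the offer URLs before consuming them from other models, Juju can list them. A quick, optional check while cos-model is still the current model:

# Optional: list the cross-model relation offers and their URLs
juju offers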

Let’s deploy a Prometheus push gateway so that our Spark jobs can send metrics over to Prometheus, and install a preconfigured Spark dashboard in Grafana so that we can monitor our jobs. Before we continue, we’ll also grab the IP of the Prometheus push gateway so that we can configure our Spark jobs to push metrics to it. Since Spark jobs may be short-lived batch jobs, instead of having Prometheus regularly scrape metrics from a static endpoint, we configure the jobs to push their metrics to a central component (the push gateway). This way, even when jobs are ephemeral and transient, metrics are still forwarded to Prometheus and available to the operations team via Grafana.

                  juju switch spark-model
                  
                  juju deploy prometheus-pushgateway-k8s --channel=edge; juju-wait
juju deploy prometheus-scrape-config-k8s scrape-interval-config --config scrape_interval=5; juju-wait
                  
                  # Connect to COS via cross model relations
juju consume admin/cos-model.prometheus prometheus
                  juju integrate prometheus-pushgateway-k8s prometheus
                  juju integrate scrape-interval-config prometheus-pushgateway-k8s
                  juju integrate scrape-interval-config:metrics-endpoint prometheus:metrics-endpoint
                  
                  # Grab the pushgateway IP
                  PROMETHEUS_GATEWAY_IP=$(juju status --format=yaml | yq ".applications.prometheus-pushgateway-k8s.address")
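As a quick, optional check that the gateway is reachable from the edge node, you can hit its metrics endpoint (the Prometheus push gateway listens on port 9091 by default):

# Optional: confirm the push gateway answers on its default port
curl -s http://${PROMETHEUS_GATEWAY_IP}:9091/metrics | head -n 5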

                  Now let’s go back to the Juju model that contains our foundation infrastructure and integrate MicroK8s and Ceph with our observability stack:

                  juju switch charm-stack-base-model 
                  
                  # Connect to COS via cross model relations and integrate
juju consume admin/cos-model.prometheus prometheus
juju consume admin/cos-model.loki loki
juju consume admin/cos-model.grafana grafana
                  
                  juju integrate grafana-agent prometheus
                  juju integrate grafana-agent loki
                  juju integrate grafana-agent grafana
                  
                  # Enable Ceph monitoring & Alerting
                  juju integrate ceph-mon:metrics-endpoint prometheus:metrics-endpoint
                  
wget -O prometheus_alerts.yml.rules https://raw.githubusercontent.com/ceph/ceph/351e1ac63950164ea5f08a6bfc7c14af586bb208/monitoring/ceph-mixin/prometheus_alerts.yml
                  
                  juju attach-resource ceph-mon alert-rules=./prometheus_alerts.yml.rules
                  

                  Start running Spark jobs

                  Alright. Next step is to configure Spark to run on our data hub platform – it’s rapidly taking shape. First let’s install the spark-client snap, and then create a properties file for our logging and metrics configuration.

                  After that, we’ll store this configuration centrally using the spark-client.service-account-registry tool. This way the service account we will use to interact with the Kubernetes cluster can automatically apply the configuration from any edge node or even from a pod on the cluster.

To keep things light and ensure everything gets scheduled on our six-node Kubernetes cluster, I’ve also added configuration to drop the requested CPU shares to 0.01 of a CPU per Spark driver and executor; however, when running at scale in a production context, you’ll want to tweak this value and most likely set it much higher.

                  # Let's live life on the edge
                  sudo snap install spark-client --channel=3.4/edge
                  
                  cat > spark.conf <<EOF
                  spark.eventLog.enabled=true
                  spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
                  spark.hadoop.fs.s3a.connection.ssl.enabled=true
                  spark.hadoop.fs.s3a.path.style.access=true
                  spark.hadoop.fs.s3a.access.key=${CEPH_ACCESS_KEY_ID}
                  spark.hadoop.fs.s3a.secret.key=${CEPH_SECRET_ACCESS_KEY}
                  spark.hadoop.fs.s3a.endpoint=https://${CEPH_VIP}
                  spark.eventLog.dir=s3a://spark-history/spark-events/ 
                  spark.history.fs.logDirectory=s3a://spark-history/spark-events/
                  spark.driver.log.persistToDfs.enabled=true
                  spark.driver.log.dfsDir=s3a://spark-history/spark-events/
                  spark.metrics.conf.driver.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
                  spark.metrics.conf.driver.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
                  spark.metrics.conf.driver.sink.prometheus.enable-dropwizard-collector=true
                  spark.metrics.conf.driver.sink.prometheus.period=5
                  spark.metrics.conf.driver.sink.prometheus.metrics-name-capture-regex=([a-z0-9]*_[a-z0-9]*_[a-z0-9]*_)(.+)
                  spark.metrics.conf.driver.sink.prometheus.metrics-name-replacement=\$2
                  spark.metrics.conf.executor.sink.prometheus.pushgateway-address=${PROMETHEUS_GATEWAY_IP}:9091
                  spark.metrics.conf.executor.sink.prometheus.class=org.apache.spark.banzaicloud.metrics.sink.PrometheusSink
                  spark.metrics.conf.executor.sink.prometheus.enable-dropwizard-collector=true
                  spark.metrics.conf.executor.sink.prometheus.period=5
                  spark.metrics.conf.executor.sink.prometheus.metrics-name-capture-regex=([a-z0-9]*_[a-z0-9]*_[a-z0-9]*_)(.+)
                  spark.metrics.conf.executor.sink.prometheus.metrics-name-replacement=\$2
                  spark.kubernetes.executor.request.cores=0.01
                  spark.kubernetes.driver.request.cores=0.01
                  spark.kubernetes.container.image=ghcr.io/canonical/charmed-spark:3.4-22.04_edge
                  spark.executor.extraJavaOptions="-Djavax.net.ssl.trustStore=/spark-truststore/spark.truststore -Djavax.net.ssl.trustStorePassword=changeit"
                  spark.driver.extraJavaOptions="-Djavax.net.ssl.trustStore=/spark-truststore/spark.truststore -Djavax.net.ssl.trustStorePassword=changeit"
                  spark.kubernetes.executor.secrets.spark-truststore=/spark-truststore
                  spark.kubernetes.driver.secrets.spark-truststore=/spark-truststore
                  EOF
                  
                  # Create a Java keystore containing the CA certificate for Ceph
                  # and make it available to Spark jobs in K8s
echo "${KUBECONF}" > kubeconfig
                  cp /usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts .
                  keytool -import -alias ceph-cert -file ceph-ca.pem -storetype JKS -keystore cacerts -storepass changeit -noprompt
                  mv cacerts spark.truststore
                  kubectl --kubeconfig=./kubeconfig --namespace=spark-model create secret generic spark-truststore --from-file spark.truststore
                  
                  # Import the certificate to the local spark-client keystore
                  spark-client.import-certificate ceph-cert ceph-ca.pem
                  
                  # Create the Kubernetes ServiceAccount whilst storing the configuration centrally
                  spark-client.service-account-registry create --username spark --namespace spark-model --primary --properties-file spark.conf --kubeconfig ./kubeconfig
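With the service account registered, you can also sanity-check the setup interactively before submitting anything. The spark-client snap ships an interactive PySpark shell alongside spark-submit; starting one should spin up executor pods in the spark-model namespace (an optional sketch, assuming the shell app is exposed by the snap as shown):

# Optional: start an interactive PySpark session using the registered service account
spark-client.pyspark --username spark --namespace spark-model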

Awesome. Let’s download some data from Kaggle, push it to our Ceph object store and run a simple pyspark script to query it with Spark SQL.

                  pip install kaggle
                  sudo apt install unzip -y
                  
                  # Check for a Kaggle token
                  if [ ! -f ${HOME}/.kaggle/kaggle.json ]; then
                    echo "You first need to set up your Kaggle API token. Go to https://www.kaggle.com/ and create an API token or sign up"
                    exit -1
                  fi
                  
                  # Download a dataset from Kaggle
                  kaggle datasets download -d cityofLA/los-angeles-traffic-collision-data
                  unzip los-angeles-traffic-collision-data.zip
                  mc cp traffic-collision-data-from-2010-to-present.csv ceph-radosgw/data/
                  
                  # Make a pyspark script to analyse the data
                  cat > pyspark-script.py <<EOF
                  df = spark.read.option("header", "true").csv("s3a://data/traffic-collision-data-from-2010-to-present.csv")
                  df.createOrReplaceTempView("collisions")
                  spark.sql("select \`DR Number\`, count(\`DR Number\`) as hit_count from collisions group by \`DR Number\` having count(\`DR Number\`) > 1").show()
                  quit()
                  EOF
                  
                  # Run the pyspark script on the cluster
                  spark-client.spark-submit --username spark --namespace spark-model pyspark-script.py
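While the job runs (or after it completes), you can follow the driver and executor pods in the spark-model namespace. Spark on Kubernetes labels the driver pod with spark-role=driver, so a couple of kubectl commands against the kubeconfig we saved earlier are enough to keep an eye on things; this is an optional sketch:

# Optional: list the Spark pods and tail the driver logs
kubectl --kubeconfig=./kubeconfig -n spark-model get pods
kubectl --kubeconfig=./kubeconfig -n spark-model logs -l spark-role=driver --tail=20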

                  Observe and debug

                  At this point you may be wondering where you can see the Spark job logs in the Spark History Server, or how to see the dashboards and monitor the environment from Grafana. The following commands will open those webapps for you:

                  juju switch spark-model
                  HISTORY_SERVER_URL=$(juju run traefik-k8s/leader show-proxied-endpoints | sed "s/proxied-endpoints: '//g" | sed "s/'//g" | jq -r '."spark-history-server-k8s".url')
                  google-chrome ${HISTORY_SERVER_URL}
                  
                  juju switch cos-model
                  CMDOUT=$(juju run grafana/leader get-admin-password)
                  echo "admin/$(echo ${CMDOUT} | grep admin-password | awk -F: '{ print $2 }')"
                  GRAFANA_SERVER_URL=$(echo ${CMDOUT} | grep url | awk '{ print $2 }')
                  
                  google-chrome ${GRAFANA_SERVER_URL}
                  <noscript> <img alt="" height="2018" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_3838,h_2018/https://ubuntu.com/wp-content/uploads/f32d/Spark-Grafana-Dashboard.png" width="3838" /> </noscript>
                  <noscript> <img alt="" height="2018" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_3838,h_2018/https://ubuntu.com/wp-content/uploads/0d1f/Alerting-with-Alert-Manager.png" width="3838" /> </noscript>

                  One final step. We’ll deploy the juju-dashboard so that we can get a visual overview of the deployment. Let’s do it.

                  juju switch controller
                  juju deploy juju-dashboard dashboard
                  juju integrate dashboard controller; juju-wait
                  juju expose dashboard
                  
                  juju dashboard

                  At this point you’ll see the credentials to log in to your Juju dashboard, and a URL that you can paste into your browser in order to reach the dashboard. The dashboard is tunnelled through SSH for security.

                  <noscript> <img alt="" height="2005" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_3829,h_2005/https://ubuntu.com/wp-content/uploads/456b/Juju-dashboard-models-view.png" width="3829" /> </noscript>
                  <noscript> <img alt="" height="2005" src="https://res.cloudinary.com/canonical/image/fetch/f_auto,q_auto,fl_sanitize,c_fill,w_3829,h_2005/https://ubuntu.com/wp-content/uploads/13e0/Juju-dashboard-model-detail.png" width="3829" /> </noscript>

                  Closing thoughts

                  In this post, we explored how to build an on-premise data hub with Canonical’s open source MAAS, Juju, charm tech and supported distros for Ceph, Kubernetes and Spark. And we learned through (hopefully) doing – or at least by following along. At the end of the journey we had a fully operational (ok, small-scale demo) data hub environment capable of running Spark jobs on Kubernetes and querying a data lake store founded on Ceph, all deployed on five physical home lab servers.

                  We did skip a step or two for brevity (Grafana and Spark History Server are not accessed over TLS, for instance – although they could be configured for this), but if you’d like to learn more about our full-stack, open source solutions for data intensive systems like Spark, then do get in touch. You can contact our commercial team here or chat with our engineers on Matrix here.

                  You can read the Juju operations guide to get the full download on Juju.

I hope you enjoyed this post. There are lots of features of Charmed Spark that we didn’t cover, like Volcano gang scheduler support or Iceberg tables, which are coming soon to the 3/stable track of the Charmed Spark solution. New features are being shipped to our edge track all the time, and I’ll be writing about them as they become available, so stay tuned.

                  Read more about Charmed Spark at the product page, or check the docs.

                  Download the Spark reference architecture guide

                  14 May, 2024 08:22AM


                  May 13, 2024

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  The Fridge: Ubuntu Weekly Newsletter Issue 839

                  Welcome to the Ubuntu Weekly Newsletter, Issue 839 for the week of May 5 – 11, 2024. The full version of this issue is available here.

                  In this issue we cover:

                  • Oracular Oriole is now open for development
                  • Ubuntu Stats
                  • Hot in Support
                  • UbuCon Korea 2024 – Registration is now open!
                  • LoCo Events
                  • Patch Pilot Hand-off 24.10
                  • Canonical News
                  • In the Press
                  • In the Blogosphere
                  • Other Articles of Interest
                  • Featured Audio and Video
                  • Meeting Reports
                  • Upcoming Meetings and Events
                  • Updates and Security for Ubuntu 20.04, 22.04, 23.10, and 24.04
                  • And much more!

                  The Ubuntu Weekly Newsletter is brought to you by:

                  • Krytarik Raido
                  • Bashing-om
                  • Chris Guiver
                  • Wild Man
                  • Paul White
                  • And many others

                  If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

                  .

                  13 May, 2024 10:27PM

                  Ubuntu Blog: Ubuntu Pro 24.04 LTS Lands on Google Cloud: Power Up Your Cloud Experience

                  Exciting news for cloud enthusiasts and developers! Ubuntu Pro 24.04 LTS (Noble Numbat) is now available on Google Cloud, bringing a robust and secure platform for your cloud workloads. This latest Long Term Support release from Canonical offers a wealth of features and enhancements, making it the perfect choice for building and deploying applications in the cloud.

                  Ubuntu Pro 24.04 LTS on Google Cloud: A Match Made in the Cloud

                  Google Cloud and Ubuntu Pro 24.04 LTS come together to offer a powerful combination of performance, security, and developer tools. Here are some of the unique benefits you can experience on Google Cloud:

                  • Boot Speed Improvements: Enjoy faster boot times with the optimized I/O scheduler, allowing you to get your instances up and running quickly.
                  • Seamless Integration: Ubuntu Pro 24.04 LTS seamlessly integrates with Google Cloud services and tools, simplifying your workflow and enhancing your overall cloud experience.
                  • Optimized for Google Cloud Infrastructure: Take advantage of the optimized kernel and configurations, specifically tailored for Google Cloud’s infrastructure, ensuring peak performance and stability for your applications.

                  Key Features of Ubuntu Pro 24.04 LTS:

                  Beyond the Google Cloud-specific benefits, Ubuntu Pro 24.04 LTS boasts a wide array of exciting features:

                  • Performance Engineering Tools: Leverage built-in tools for profiling and debugging, allowing you to optimize your applications for peak performance.
• Enhanced Security: Ubuntu 24.04 LTS gets a 12-year commitment for security maintenance and support. As with other long-term supported releases, Noble Numbat will get five years of free security maintenance on the main Ubuntu repository. Ubuntu Pro extends that commitment to 10 years on both the main and universe repositories. Ubuntu Pro subscribers can purchase an extra two years with the Legacy Support add-on.
                  • Developer Productivity: Enjoy the latest developer tools and frameworks, including Python 3.12, Ruby 3.2, PHP 8.3, Go 1.22, .NET 8, OpenJDK 21, and Rust 1.75. Ubuntu 24.04 LTS defaults to OpenJDK 21 for Java development, while still supporting versions 17, 11, and 8. Need interoperability? OpenJDK 17 and 21 are TCK certified, ensuring seamless integration with other Java platforms. For the security-conscious, Ubuntu Pro users can access a special FIPS-compliant OpenJDK 11 package. Keeping pace with the growing popularity of Rust, Ubuntu 24.04 LTS incorporates Rust 1.75 and a streamlined Rust toolchain snap framework. This empowers developers to leverage Rust in core Ubuntu packages like the kernel and Firefox, and paves the way for future Rust versions to be readily available on 24.04 LTS in the years ahead.
                  • Confidential Computing: A joint effort by Intel, Google, and Canonical has resulted in Ubuntu’s integration of Intel® Trust Domain Extensions (Intel® TDX) on the Google Cloud platform. This integration streamlines the adoption of confidential computing for existing workloads. By leveraging VM isolation with Intel TDX, applications can be migrated to a secure environment without requiring any changes at the application layer.

                  Upgrade Your Cloud Journey Today!

                  With Ubuntu Pro 24.04 LTS on Google Cloud, you have the tools and support to take your cloud experience to the next level. Start building and deploying your applications on a secure, performant, and developer-friendly platform.

                  Visit the Google Cloud Console and Google Cloud Marketplace to get started with Ubuntu Pro 24.04 LTS today!

                  13 May, 2024 07:50PM

                  hackergotchi for Purism PureOS

                  Purism PureOS

                  Get Your Hands on the Librem 5 for Just $699

As Purism has reached price-break quantities, we can pass those savings on to customers. We have now reached the inventory in our queue with a slightly lower cost of goods, and with this product in stock and ready to ship, we are adjusting the price of the Librem 5 down from $999 to $699! Time to Try […]

                  The post Get Your Hands on the Librem 5 for Just $699 appeared first on Purism.

                  13 May, 2024 05:25PM by Purism

                  hackergotchi for VyOS

                  VyOS

                  VyOS 1.3.7 release

                  Hello, Community!

                  VyOS 1.3.7/Equuleus maintenance release is available now. It fixes the buffer overflow vulnerability recently discovered in GNU libc (CVE-2024-2961). It also adds a few useful options, such as startup resync in conntrack-sync and multiple peers for unicast VRRP; improves PPPoE server syntax to allow PADO delay of zero and client pools with arbitrary subnet masks; and fixes a bunch of bugs, including a bug that prevented BGP RPKI from loading correctly. Read on for details!

                  13 May, 2024 05:18PM by Daniil Baturin (daniil@sentrium.io)

                  hackergotchi for SparkyLinux

                  SparkyLinux

                  Sparky 2024.05 Special Editions

There are new iso images of Sparky 2024.05 Special Editions out there: GameOver, Multimedia and Rescue. They are based on Debian testing “Trixie”.

                  The May update of Sparky Special Edition iso images features Linux kernel 6.7, updated packages from Debian and Sparky testing repos as of May 12, 2024, and most changes introduced at the 2024.05 release.

                  The Linux kernel is 6.7.12, and there are 6.9.0, 6.6.30-LTS, 6.1.90-LTS, 5.15.158-LTS in Sparky repos.

                  There is no need to reinstall Sparky rolling, simply keep Sparky up to date.

                  New iso images of Sparky semi-rolling can be downloaded from the download/rolling page

                  13 May, 2024 12:49PM by pavroo

                  May 12, 2024

                  hackergotchi for Ubuntu developers

                  Ubuntu developers

                  Kubuntu General News: Introducing the Enhanced KubuQA: Revolutionising ISO Testing Across Ubuntu Flavors

                  The Kubuntu Team are thrilled to announce significant updates to KubuQA, our streamlined ISO testing tool that has now expanded its capabilities beyond Kubuntu to support Ubuntu and all its other flavors. With these enhancements, KubuQA becomes a versatile resource that ensures a smoother, more intuitive testing process for upcoming releases, including the 24.04 Noble Numbat and the 24.10 Oracular Oriole.

                  What is KubuQA?

                  KubuQA is a specialized tool developed by the Kubuntu Team to simplify the process of ISO testing. Utilizing the power of Kdialog for user-friendly graphical interfaces and VirtualBox for creating and managing virtual environments, KubuQA allows testers to efficiently evaluate ISO images. Its design focuses on accessibility, making it easy for testers of all skill levels to participate in the development process by providing clear, guided steps for testing ISOs.

                  New Features and Extensions

                  The latest update to KubuQA marks a significant expansion in its utility:

                  • Broader Coverage: Initially tailored for Kubuntu, KubuQA now supports testing ISO images for Ubuntu and all other Ubuntu flavors. This broadened coverage ensures that any Ubuntu-based community can benefit from the robust testing framework that KubuQA offers.
                  • Support for Latest Releases: KubuQA has been updated to include support for the newest Ubuntu release cycles, including the 24.04 Noble Numbat and the upcoming 24.10 Oracular Oriole. This ensures that communities can start testing early and often, leading to more stable and polished releases.
                  • Enhanced User Experience: With improvements to the Kdialog interactions, testers will find the interface more intuitive and responsive, which enhances the overall testing experience.

                  Call to Action for Ubuntu Flavor Leads

                  The Kubuntu Team is keen to collaborate closely with leaders and testers from all Ubuntu flavors to adopt and adapt KubuQA for their testing needs. We believe that by sharing this tool, we can foster a stronger, more cohesive testing community across the Ubuntu ecosystem.

                  We encourage flavor leads to try out KubuQA, integrate it into their testing processes, and share feedback with us. This collaboration will not only improve the tool but also ensure that all Ubuntu flavors can achieve higher quality and stability in their releases.

                  Getting Involved

                  For those interested in getting involved with ISO testing using KubuQA:

                  • Download the Tool: You can find KubuQA on the Kubuntu Team Github.
                  • Join the Community: Engage with the Kubuntu community for support and to connect with other testers. Your contributions and feedback are invaluable to the continuous improvement of KubuQA.

                  Conclusion

                  The enhancements to KubuQA signify our commitment to improving the quality and reliability of Ubuntu and its derivatives. By extending its coverage and simplifying the testing process, we aim to empower more contributors to participate in the development cycle. Whether you’re a seasoned tester or new to the community, your efforts are crucial to the success of Ubuntu.

                  We look forward to seeing how different communities will utilise KubuQA to enhance their testing practices. And by the way, have you thought about becoming a member of the Kubuntu Community? Join us today to make a difference in the world of open-source software!

                  12 May, 2024 09:28PM

                  Salih Emin: SysGlance: Download and Use SysGlance on Ubuntu, Debian, and Other Linux Systems

                  I am happy to announce the availability of SysGlance, a simple and universal, Linux utility for generating a report for the host system. Imagine encountering a problem with a Linux system service or device. Typically, you would search for a solution by Googling the issue, hoping to find a fix. In most cases, you would […]

                  12 May, 2024 08:39PM

                  Faizul "Piju" 9M2PJU: Ubuntu in the Military: A Linux Distinction

                  In the digital age, technology plays a pivotal role in every aspect of modern warfare. From secure communication channels to robust data analysis systems, the military relies heavily on technology to maintain operational superiority. Amidst this technological landscape, one open-source Linux distribution stands out: Ubuntu. Renowned for its user-friendliness and flexibility, Ubuntu has found its place in various military applications, offering reliability and security to defense organizations worldwide.

                  Ubuntu’s Appeal in Military Environments:
                  Ubuntu’s popularity in military environments stems from several key factors. Firstly, its open-source nature aligns well with the principles of transparency and customization valued by defense organizations. The ability to inspect and modify the source code allows military developers to tailor Ubuntu to specific mission requirements, enhancing both security and functionality.

                  Secondly, Ubuntu’s robust security features make it an attractive choice for military deployments. The operating system benefits from regular security updates and patches, reducing the risk of vulnerabilities that could be exploited by adversaries. Additionally, Ubuntu’s strong support for encryption and secure communication protocols ensures that sensitive military data remains protected from unauthorized access.

                  Furthermore, Ubuntu’s user-friendly interface simplifies training and usability for military personnel. In fast-paced operational environments, intuitive software interfaces can make a significant difference in mission execution. Ubuntu’s emphasis on accessibility and ease of use minimizes the learning curve for military personnel, allowing them to focus on their core tasks without unnecessary technological hurdles.

                  Real-World Applications:
                  While specific details about Ubuntu’s usage in military contexts may not always be publicly disclosed due to security considerations, there have been reports of its deployment in various defense applications. For instance, Ubuntu has been utilized in command and control systems, intelligence analysis platforms, and even in specialized military hardware such as drones and unmanned vehicles.

                  Moreover, Ubuntu’s compatibility with a wide range of hardware architectures makes it suitable for diverse military environments, from rugged battlefield deployments to command centers and headquarters. Its versatility enables seamless integration with existing infrastructure and interoperability with other systems, facilitating efficient information sharing and collaboration across military units and branches.

                  Looking Ahead:
                  As technology continues to evolve, Ubuntu’s role in military operations is likely to expand further. The ongoing development of innovative software solutions and the growing emphasis on cybersecurity in defense strategies will continue to drive the adoption of reliable and secure operating systems like Ubuntu.

                  Furthermore, the collaborative nature of the open-source community ensures that Ubuntu remains at the forefront of technological advancements, with constant improvements in performance, security, and functionality. As defense organizations adapt to emerging threats and challenges, Ubuntu stands ready to support their evolving mission requirements with its proven reliability and flexibility.

                  In the ever-evolving landscape of military technology, Ubuntu Linux stands out as a distinguished choice for defense organizations seeking reliability, security, and versatility in their operating systems. With its open-source philosophy, robust security features, and user-friendly interface, Ubuntu continues to play a crucial role in supporting military operations around the globe. As we look to the future, Ubuntu’s presence in the military domain is poised to strengthen further, contributing to the advancement of defense capabilities in an increasingly complex world.

                  The post Ubuntu in the Military: A Linux Distinction appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  12 May, 2024 02:07PM

                  Faizul "Piju" 9M2PJU: Ubuntu: Powering the World’s Knowledge Repository – The Wikipedia Connection

                  In the vast landscape of the internet, few platforms rival the ubiquity and impact of Wikipedia. As the world’s largest online encyclopedia, Wikipedia serves as a beacon of knowledge, democratizing access to information on an unprecedented scale. Behind this monumental project lies a robust infrastructure, and at the heart of it, lies Ubuntu, the open-source operating system that plays a pivotal role in supporting Wikipedia’s mission.

                  The Foundation of Collaboration

                  Wikipedia operates on the principles of collaboration and open access, mirroring the ethos of open-source software like Ubuntu. This shared philosophy has fostered a natural synergy between the two, with Ubuntu serving as the operating system of choice for many of Wikipedia’s servers and backend infrastructure.

                  Ubuntu Server: Powering Wikipedia’s Backend

                  The servers that host Wikipedia’s vast repository of articles, images, and data rely on Ubuntu Server for their operating system. Ubuntu’s stability, security, and scalability make it an ideal platform for handling the immense traffic and constant updates that Wikipedia experiences on a daily basis.

                  From managing user contributions to serving millions of page views per day, Ubuntu ensures that Wikipedia remains accessible and responsive to users around the globe. Its reliability and performance are essential in maintaining the uninterrupted availability of the world’s most comprehensive source of free knowledge.

                  Open Source Synergy

                  Both Ubuntu and Wikipedia embrace the principles of open source, promoting transparency, collaboration, and accessibility. Ubuntu’s open development model encourages community participation and innovation, much like Wikipedia’s crowdsourced approach to content creation and editing.

                  The collaboration between Ubuntu and Wikipedia extends beyond mere technological compatibility; it embodies a shared commitment to democratizing information and empowering individuals to contribute to the collective pool of human knowledge.

                  Security and Stability

                  As a high-traffic website with millions of users accessing its content every day, security is a top priority for Wikipedia. Ubuntu’s robust security features, coupled with regular updates and patches, help safeguard Wikipedia’s infrastructure against potential threats and vulnerabilities.

                  Furthermore, Ubuntu’s stability ensures uninterrupted service, minimizing downtime and ensuring that users can access Wikipedia whenever they need information.

                  Conclusion

                  In the digital age, access to knowledge is more critical than ever, and Wikipedia stands as a testament to the power of collaboration and open access. Behind this global resource lies the dependable foundation of Ubuntu, the open-source operating system that powers Wikipedia’s backend infrastructure.

                  As users around the world rely on Wikipedia to learn, explore, and discover, Ubuntu quietly plays its part in enabling this remarkable platform to fulfill its mission of providing free access to knowledge for all. Together, Ubuntu and Wikipedia embody the spirit of openness, innovation, and collaboration, paving the way for a more informed and interconnected world.

                  The post Ubuntu: Powering the World’s Knowledge Repository – The Wikipedia Connection appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  12 May, 2024 02:03PM

                  May 10, 2024

                  Faizul "Piju" 9M2PJU: Celebrate the Release of Ubuntu Noble Numbat 24.04 with the Ubuntu Malaysia Local Community!

                  Exciting news for all Linux enthusiasts and tech aficionados in Malaysia! The Ubuntu Malaysia Local Community is thrilled to invite you to the official release party for Ubuntu Noble Numbat 24.04. Join us on May 25, 2024, 10 AM at Taming Tech Sdn Bhd, located at Tingkat 1, 321A, Lorong Selangor, Taman Melawati, 53100 Kuala Lumpur, for an unforgettable celebration of this latest Ubuntu release.

                  Ubuntu Noble Numbat 24.04 marks another milestone in the journey of one of the world’s most popular Linux distributions. With each new release, Ubuntu continues to push the boundaries of innovation, reliability, and user-friendliness. So, what’s new in Noble Numbat?

                  First and foremost, Ubuntu Noble Numbat 24.04 brings with it the latest advancements in performance, security, and stability. Whether you’re a seasoned Linux user or new to the world of open-source software, you can expect an enhanced computing experience with this release.

                  One of the most notable features of Ubuntu Noble Numbat is its commitment to sustainability and environmental responsibility. From energy-efficient performance optimizations to reduced resource consumption, Ubuntu is leading the way in eco-friendly computing.

                  Moreover, Noble Numbat introduces several exciting updates and improvements across the board. Whether you’re interested in desktop productivity, server management, or development tools, Ubuntu has something for everyone.

                  But the Ubuntu experience extends far beyond the software itself. It’s about community, collaboration, and the shared passion for open-source technology. That’s where the Ubuntu Malaysia Local Community comes in.

                  The Ubuntu Malaysia Local Community is a vibrant and inclusive group of Ubuntu enthusiasts from all walks of life. Whether you’re a developer, sysadmin, educator, student, or simply curious about Linux, you’ll find a warm welcome and a wealth of knowledge within our community.

                  Our mission is to promote the adoption of Ubuntu and open-source software across Malaysia through advocacy, education, and outreach. From organizing release parties and workshops to providing support and resources for users, we’re dedicated to empowering individuals and organizations to embrace the power of open-source technology.

                  At the Ubuntu Noble Numbat 24.04 release party, you’ll have the opportunity to connect with fellow Linux enthusiasts, learn about the latest features of Ubuntu, and participate in engaging discussions and activities. Whether you’re a seasoned Linux veteran or just starting your journey with Ubuntu, there’s something for everyone at our event.

                  So mark your calendars and join us on May 25, 2024, 10 AM at Taming Tech Sdn Bhd in Kuala Lumpur for an unforgettable celebration of Ubuntu Noble Numbat 24.04. Together, let’s explore the possibilities of open-source computing and build a brighter future for technology in Malaysia and beyond. We can’t wait to see you there!

                  For more information and updates about the event, please follow us on social media here. To register for the event, kindly fill out the registration form here. Let’s make this release party a truly memorable experience for the Ubuntu community in Malaysia!

                  Huge thanks to Mawi for spearheading the organization of the Ubuntu release party. He brings a wealth of experience as a veteran of the special forces unit, Gerup Gerak Khas.

                  The post Celebrate the Release of Ubuntu Noble Numbat 24.04 with the Ubuntu Malaysia Local Community! appeared first on 9M2PJU - Malaysian Ham Radio Operator by 9M2PJU.

                  10 May, 2024 11:58PM