March 29, 2020

Ubuntu developers

March 28, 2020

Ubuntu Podcast from the UK LoCo: S13E01 – Thirteen

This week the band is back together. We’ve been bringing new life into the universe and interconnecting chat systems. Distros are clad in new wallpapers, Raspberry Pis are being clustered with MicroK8s and the VR game industry has been revolutionised.

It’s Season 13 Episode 01 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, Tweet us, Toot us, comment on our Facebook page, or comment on our sub-Reddit.

28 March, 2020 11:00PM

Sergio Schvezov: Fingerprint Reader Support for Lenovo x390Y on Ubuntu

This is experimental, but I went ahead and ran the following after reading about it on reddit:

    snap install fwupdmgr
    /snap/bin/fwupdmgr install
    /snap/bin/fwupdmgr install

These two cab files are referenced from: Rebooted and then went ahead with the actual fingerprint setup. After this was all done, login with fingerprints just worked. The only downside is that you need to press a key first to bring up the unlock logic.
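For reference, the fingerprint setup itself is typically handled by fprintd on Ubuntu; this is a hedged sketch of that step (assuming the fprintd PAM stack, not necessarily what was used here):

```shell
# Assumption: fprintd provides the enrollment tooling on this system.
sudo apt install fprintd libpam-fprintd   # fingerprint daemon plus PAM module
fprintd-enroll        # scan the same finger repeatedly when prompted
fprintd-verify        # confirm the stored print matches
sudo pam-auth-update  # tick "Fingerprint authentication" to enable it at login
```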

28 March, 2020 10:38PM

Freedombone


Improving onion support

I've been improving the support for the dual use case of Epicyon in which the instance is primarily on a clearnet domain but can also be used via an onion address. Previously when accessing via an onion address in a Tor browser it would often try to revert back to the clearnet domain, but now in nearly all cases it will stick with the onion address.

This kind of dual use case is typical for apps on Freedombone, and it gives you an alternative way to get to your sites if the clearnet becomes censored - such as if there is a hostile corporate firewall between you and your server. Due to the existence of bridges it's difficult for firewalls to entirely block access to Tor.

The future seems more uncertain than ever and so making use of alternate domain systems, like onion addresses, DAT, SSB, I2P, IPFS and so on is probably wise, at least as a fallback. Censoring things via DNS poisoning or blocking has historically been the go-to way that authoritarian governments try to stop people having the right to read in times of "national emergency".

28 March, 2020 07:50PM

Purism PureOS

A Mini Desktop Replacement

Hardware is important when selecting a new desktop computer, but so is the software that drives the experience. You need to know there are applications you can rely on for your workflow. What can you use to browse the web, edit a spreadsheet, watch a movie, or play a game? PureOS, which empowers all our hardware, has plenty of software that respects your freedom and can get the job done.

The default browser on PureOS is the Extended Support Release of Firefox. This gives you the stable base of Firefox with at least a year of support on each version. Firefox puts an emphasis on security and is licensed under a weak copyleft license called the MPL.

Firefox add-ons are powerful, giving you the ability to add adblocking, account containers, key macros, and more. Do note that not all add-ons are free software; it is always a good idea to double-check before you install.

When it comes to editing a text document or setting up a spreadsheet, LibreOffice is the tool for you. You can edit in its native .odt format or use other common format types. It is licensed under the LGPL, which is also a weak copyleft license. PureOS has LibreOffice preinstalled, so no need to fuss with install instructions.

The default video player is called Totem. It is well integrated with the Gnome Desktop and is simply called Videos. It has a simple UI that gets out of the way and lets you watch your movie.

Another option is the popular VLC; like Totem, it is GPL software. It can be installed by simply typing vlc in the application search bar, which will lead you to the software store and let you install it with just a few clicks.

For viewing, replying to, or organizing emails, you can use the built-in client, Thunderbird. You can also install Evolution as an alternative. Both work well for email, but Evolution includes extra out-of-the-box features like calendar sync. Thunderbird is MPL-licensed and Evolution is LGPL-licensed; both are weak copyleft licenses.

Gnome Photos lets you view and organize your local and online pictures. It can access photos from your online accounts set up through Gnome and manage local images under your Pictures folder.

We also package software for creators. You can edit vector images with Inkscape, or pixel images with Gimp. The Librem Mini has the power to run intensive programs such as Blender, which is used for 3D modeling and animation. All of these are licensed under the GPL.

Everyone likes gaming from time to time. If you’re looking for something like Minecraft, Minetest is licensed under the LGPL. Just like VLC, it can be installed directly from the software store. It takes only moments from install to a virtual world loading.

From emulation to Sudoku to something simple like 2048, PureOS packages a lot of games to keep you entertained.

Still not sure if you can get it all done? We offer several resources to support our users. Feel free to ask a specific question on our forum, or contact our support team.

We just announced the Librem Mini – our fastest, smallest and lightest Librem computer – starting at only $699. It features an 8th-gen quad-core i7 processor, up to 64 GB of fast DDR4 memory, 4K@60Hz HDMI 2.0, DisplayPort, and much more.

Preorder your Librem Mini

The post A Mini Desktop Replacement appeared first on Purism.

28 March, 2020 06:40PM by David Hamner

SparkyLinux


Sparky 2020.03.1

New iso images of Sparky 2020.03.1 of the (semi-)rolling line have been generated.

This is a minor update of the live/install media which provides:

• fixed an issue that prevented Sparky 2020.03 from booting when copied to a USB stick
• the Sparky repository has been renamed to “potolo” (“testing” works as before)
• all packages updated from Debian testing repos as of March 27, 2020

No reinstallation is required; simply perform a full system upgrade.

New rolling iso images can be downloaded from the download/rolling page.

28 March, 2020 11:08AM by pavroo

March 27, 2020

Tails


Call for testing: 4.5~rc1

Tails 4.5, scheduled for April 7, will be the first version of Tails to support Secure Boot.

You can help Tails by testing the release candidate for Tails 4.5 now.

Secure Boot

Tails 4.5~rc1 should start on computers in UEFI mode and with Secure Boot enabled.

Known issues

If your Mac displays the following error:

Security settings do not allow this Mac to use an external startup disk.

Then you have to change the settings of the Startup Security Utility of your Mac to authorize starting from Tails.

Read our instructions on how to authorize starting from Tails on your Mac.

To open Startup Security Utility:

  1. Turn on your Mac, then press and hold Command(⌘)+R immediately after you see the Apple logo. Your Mac starts up from macOS Recovery.

  2. When you see the macOS Utilities window, choose Utilities ▸ Startup Security Utility from the menu bar.

  3. When you are asked to authenticate, click Enter macOS Password, then choose an administrator account and enter its password.

Startup Security Utility

In the Startup Security Utility:

  • Choose No Security in the Secure Boot section.

  • Choose Allow booting from external media in the External Boot section.

To still protect your Mac from starting on untrusted external media, you can set a firmware password, available on macOS Mountain Lion or later. A firmware password prevents users who do not have the password from starting up from any media other than the designated startup disk.

If you forget your firmware password, resetting it requires an in-person service appointment with an Apple Store or Apple Authorized Service Provider.

Read more on Apple Support about:

See the list of long-standing issues.

How to test Tails 4.5~rc1?

Keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

Please report any new problem to our public mailing list.

Get Tails 4.5~rc1

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from 4.2 or later to 4.5~rc1.

    To do an automatic upgrade to Tails 4.5~rc1:

    1. Start Tails 4.2 or later and set an administration password.

    2. Run this command in a Terminal:

      echo TAILS_CHANNEL=\"alpha\" | sudo tee -a /etc/os-release && \

      Enter the administration password when asked for the "password for amnesia".

    3. After the upgrade is applied, restart Tails and choose Applications ▸ Tails ▸ About Tails to verify that you are running Tails 4.5~rc1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To download 4.5~rc1

Direct download

BitTorrent download

To install Tails on a new USB stick

Follow our installation instructions:

All the data on this USB stick will be lost.

What's coming up?

Tails 4.5 is scheduled for April 7.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

27 March, 2020 05:05PM

Ubuntu developers

Full Circle Magazine: Full Circle Magazine #155

This month:
* Command & Conquer
* How-To : Python, Ubuntu & Security, and Rawtherapee [NEW!]
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback: nomadBSD
* Everyday Ubuntu
* Review : QNAP NAS
* Ubuntu Games : Asciiker
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot!

27 March, 2020 05:01PM

ArcheOS


3DHOP for speleoarchaeology

Hello everybody,
today I go on writing about our speleoarchaeological project on the natural cave "Bus dela Spia". I prepared some material to share in a later post about the workflow I followed in Blender, MeshLab, CloudCompare and GRASS GIS, but in the meantime I want to show the final result of my work in recovering old documentation (maps and sections), thanks to the FLOSS 3DHOP.
If you are a regular reader of ATOR, you know what I am speaking about. The software is developed by the Italian CNR (ISTI) and, more precisely, by the Visual Computing Lab (the same programmers who write the code of MeshLab). I chose this software due to its nice "slicer" tool, which allows the user to virtually cut a 3D model along one of the axes, to see other models hidden below (of course this is very useful in archaeology). This is not the only interesting tool (especially considering the updates of the last release), but, for now, it is the one I want to show, also because I use it very often in another archaeological field: Forensic Facial Reconstruction (in order to cut the face and show the cranium). Here below is a short video about this tool. Two objects are loaded within 3DHOP: the Digital Terrain Model of the landscape and the 3D reconstruction of the "Bus dela Spia", performed in Blender. I hope you will enjoy it. Have a nice day!

27 March, 2020 04:22PM by Luca Bezzi

Ubuntu developers

Ubuntu Blog: Learn snapcraft by example – multi-app client-server snap

Over the past few months, we published a number of articles showing how to snap desktop applications written in different languages – Rust, Java, C/C++, and others. In each one of these zero-to-hero guides, we went through a representative snapcraft.yaml file and highlighted the specific bits and pieces developers need to successfully build a snap.

Today, we want to diverge from this journey and focus on the server side of things. We will give you an overview of a snapcraft.yaml with two interesting components: a) it will have more than one application (typically, snaps come with one application inside), and b) it will have a simple background service to which other applications can connect. Let’s have a look.


Here’s the relevant part of the snapcraft.yaml file:

    apps:
      borg:
        command: bin/borg
        daemon: simple
        restart-condition: on-abnormal
        plugs:
          - home
          - network
          - network-bind
      locutus:
        command: bin/locutus
        plugs:
          - home
          - network

What do we have here?

First, we declare the server part of the bundle – an application named borg. It is a simple daemon, and the restart condition will be “on-abnormal” – the service states are based on values returned from systemd.

Second, for the service to work, it needs to be able to bind to network ports, which is why we’re defining the network-bind interface. Please note that if your application needs to bind to a privileged port, you will need sudo to be able to make it run correctly. The use of snaps does not change the underlying security requirements.

We also define access to home (so that the service can read a configuration file, for instance), and network, so it can communicate over the network. This is necessary in addition to the bind declaration, unless your service is designed to only run and listen on localhost.

In the second part, we declare the client part of the bundle – an application that only needs access to the home directory (configuration files, certificates, etc.), and network. This way, you can run the component applications of this snap on various systems, in a classic client-server model.

How to invoke a multi-app snap?

The one question you may have is – with multiple apps inside a snap, how does the user run them? When the app name matches the snap name, e.g. igor, you only need to run “igor” or “/snap/bin/igor”. Here, the invocation is slightly different. Let’s say the snap is called “picard”. In this case, for the server application, you would run:

    picard.borg
Similarly, for the client application:

    picard.locutus
Users can manually create aliases to work around this. Similarly, developers can request automatic aliases for their software from the Snap Store team, in cases where there is no obvious namespace clash. This can lead to a smoother, more streamlined user experience. In this example, borg will be mapped to picard.borg, and locutus will be mapped to picard.locutus.
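As a sketch of the commands involved (assuming the snap is published as “picard” with apps borg and locutus, as in the text; the alias names are only illustrative):

```shell
# Run the namespaced client directly; borg runs as a daemon under systemd,
# but could also be invoked by hand as picard.borg:
picard.locutus

# Create short aliases manually so the app names work on their own:
sudo snap alias picard.borg borg
sudo snap alias picard.locutus locutus
```

After aliasing, running “borg” or “locutus” is equivalent to the namespaced form.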


We hope today’s guide sheds some light on how to handle somewhat more elaborate use cases with snaps, including multiple apps and client-server scenarios. Of course, creating the snapcraft.yaml is only part of the work; you also need to make sure that your software parts can communicate with one another and do their jobs correctly. However, at least you have the snapcraft side of things covered.

If you have any comments, or perhaps requests on future articles that would help you snap your application (or a complex service), please join our forum for a discussion.

No Star Trek TNG references were aliased in the creation of this article.

Photo by Marlon Corona on Unsplash.

27 March, 2020 10:49AM

March 26, 2020

Kubuntu General News: Testing for the Beta – help needed!

Kubuntu 20.04 Testing Week

The Kubuntu team is delighted to announce an ‘Ubuntu Testing Week’ from April 2nd to April 8th with other flavors in the Ubuntu family. April 2nd is the beta release of what will become Kubuntu 20.04 and during this week, there will be a freeze on changes to features, the user interface and documentation. Between April 2nd and final release on April 23rd, the Kubuntu team and community will focus on ISO testing, bug reporting, and fixing bugs. Please join the community by downloading the daily ISO image and trying it out, even beginning today.

QA tracker:

From this main page, click on the ‘Kubuntu Desktop amd64’ link to arrive at the testcases page. On the testcases page, you can download the ISO by clicking the ‘Link to the download information’ and report test results to the various test cases for Kubuntu. If you see other flavors needing testing on the main page, please test for them as well.

Chat live on IRC (#ubuntu-quality) or Telegram (UbuntuTesters), if you like, during this time of pandemic social distancing.

If you have no spare computer to use for testing, no problem! You can test without changing your system by running it in a VM (Virtual Machine) with software like VirtualBox, or by running the live session from a USB or DVD, which also lets you test whether your hardware works correctly. We encourage those who are willing to install it, either in a VM or on physical hardware (it requires at least 6GB of hard disk space), and use it continuously for a few days, as more bugs can be exposed and reported this way.

The easy way to report a bug is to open up Konsole by pressing alt+space and typing konsole or Menu > Konsole and then typing `ubuntu-bug packagename`, where packagename is the program or application where you experience the bug.

If you prefer working in the terminal, open the virtual console (terminal) by pressing control + alt + F2, 3, 4 etc. and typing `ubuntu-bug packagename`, where packagename is the program or application where you experience the bug. Press Control + Alt + F1 to return to your desktop. If a crash has landed you in the terminal, log in with your usual user name and password, and report the bug as above.

Here is a nice youtube video showing the entire process, including one way to figure out what packagename is appropriate in GNOME:

Using ‘ubuntu-bug’ will automatically upload error logs and/or other files to Launchpad that developers need to fix the bug. By the way, the installer’s packagename is ubiquity. Experience tells us that it is the most useful packagename to know for ISO testing, when things go wrong with the installation. The live session software package is casper; file against it should you encounter bugs affecting the live session itself rather than individual programs. Other programs with bugs should be filed against their packages, for instance firefox, dolphin, vlc, etc. Only the bug *number* is needed when reporting the results of a test on the QA tracker.
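Putting the package names above together, typical reports from a terminal look like this (firefox stands in for any application with a bug):

```shell
ubuntu-bug ubiquity   # a bug in the installer
ubuntu-bug casper     # a bug in the live session itself
ubuntu-bug firefox    # a bug in an ordinary application
```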

Please test programs / applications that you regularly use, so you can identify bugs and regressions that should be reported. New ISO files are built every day; always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO with the command below (first right-click on the ISO’s folder in Dolphin and select ‘Open in Terminal’) or just open konsole or yakuake and `cd path-to-ISO-folder`. Zsync downloads only changes, so it’s very quick.
$ zsync
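As an illustration of the zsync step (the URL below is a placeholder; use the real .zsync link from the QA tracker):

```shell
cd ~/ISOs   # the folder containing the existing daily ISO
# zsync reuses matching blocks of the old ISO and downloads only the changes:
zsync http://cdimage.example.org/kubuntu/daily-live/current/focal-desktop-amd64.iso.zsync
```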

26 March, 2020 09:21PM

Jonathan Carter: Lockdown

I just took my dog for a nice long walk. It’s the last walk we’ll be taking for the next 3 weeks, he already starts moping around if we just skip one day of walking, so I’m going to have to get creative keeping him entertained. My entire country is going on lockdown starting at midnight. People won’t be allowed to leave their homes unless it’s for medical emergencies, to buy food or if their work has been deemed essential.

With the Covid-19 pandemic nearing half a million confirmed infections, this has become quite common in the world right now, with about a quarter of the world’s population currently under lockdown and confined to their homes.

Some people may have noticed I’ve been a bit absent recently, I’ve been going through some really rough personal stuff. I’m dealing with it and I’ll be ok, but please practice some patience with me in the immediate future if you’re waiting on anything.

I have a lot of things going on in Debian right now. It helps keep me busy through all the turmoil and gives me something positive to focus on. I’m running for Debian Project Leader (DPL). I haven’t been able to put quite the energy into my campaign that I would have liked, but I think it’s going ok under the circumstances. I think that because of everything happening in the world, it’s been more difficult for other Debianites to participate in debian-vote discussions as well. Recently we also announced Debian Social, a project that’s still in its early phases, but we’ve been trying to get it going for about 2 years, so it’s nice to finally see it shaping up. There are also plans to put Debian Social and some additional tooling to the test, with the idea of hosting a MiniDebConf entirely online. No dates have been confirmed yet, and we still have a lot of crucial bits to figure out, but you can subscribe to debian-devel-announce and Debian micronews for updates as soon as more information is available.

To everyone out there, stay safe, keep your physical distance for now and don’t lose hope, things will get better again.

26 March, 2020 03:09PM

Cumulus Linux

Virtual Data Centers, SDN, and Multitenancy

When you aren’t the size of Netflix, you may not be guaranteed dedicated infrastructure within a data center; you have to share. Even in larger organizations, multitenancy may be required to solve regulatory compliance issues. So what is multitenancy, how does it differ from other forms of resource division, and what role do networks play?

Gartner Inc. defines multitenancy as “a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.” This is basically a fancy way of saying “cutting up IT infrastructure so that more than one user/department/organization/and so on can share the same physical IT infrastructure, without being able to see one another’s data.”

That “without being able to see one another’s data” is the critical bit. Allowing multiple users to use a single computer has been possible for decades. Multi-user operating systems, for example, can allow multiple users to log in to a single computer at the same time. While this approach does allow multiple users to share a physical piece of IT infrastructure, it isn’t multitenancy.

In a multi-user OS, the multiple users logged in to the system are all using the same OS. The only thing that prevents one user from seeing another user’s data are the security controls of that OS. Barring additional isolation software, typically from a third party, users will be able to see what other users are doing on the system, and may be capable of launching any number of attacks against other users, or their data.

Isolation Is Key

Multitenancy differs from multi-user concepts in that it incorporates the idea of isolation. A good example is two virtual machines (VMs) running on a single host: the OS running inside a VM can’t see into other VMs on the same host. Multitenancy, however, also incorporates the idea of reproducing entire environments, notably including management capabilities, for each tenant. This adds another dimension of consideration.

A single virtualization host running two VMs could be a multi-tenant environment if the environment being reproduced is entirely contained within that VM. Let’s say, for example, that your company rents dedicated web server VMs, where each VM has a complete management suite. The user’s entire interaction with their environment is accomplished with a web-based management application, SSH, and the website that it ultimately serves. In this example, user data is separated because each VM is self-contained—the data doesn’t leave the VM, and users can’t break out of their VM to go rummage around in someone else’s.

Public cloud providers have shown the world what’s possible, so in the real world, even when talking about multitenancy within a single organization, IT infrastructure has to be considerably more advanced for anyone to start using the word “multitenancy” seriously.

Today, multitenancy requires not only that networking, storage, and compute resources be securely divisible, but that individual tenants have a way to create, edit, and destroy both workloads and data on their own. Self-service is technically a separate concept from multitenancy, but pragmatically, they’re deeply intertwined.

Consider, for example, the science department of a research university. Here, multiple individual research projects may need to be strictly segregated from each other, especially if government funding is involved. Separate physical infrastructure for each project would be expensive and inefficient, so switching, storage, and maybe even hosts might be shared, despite there being requirements for strict segregation of both data and access.

Automation, Orchestration, Action!

Data centers become more difficult to manage securely when either scale or complexity increases—you can only throw so many humans at the management problem before they start getting in one another’s way. As a result, if you want to manage a complex data center at scale, you need automation.

Multitenancy not only increases complexity, but it almost always opens the door to a rapid scaling of the IT resources in question. Thanks in part to a greater awareness of the risks of data theft, along with the long-term consequences of taking a lackadaisical approach to privacy, there aren’t a lot of people willing to skimp on security or privacy anymore. This reality is why automation is absolutely vital to deploying practical multitenancy.

Automation is only the first step. One thousand different automated systems that have to work in concert to get things done might be less chaotic than 1,000 different manually operated systems that have to work in concert to get things done; but 1,000 different automation systems trying to do anything in a coordinated fashion is still a complete madhouse.

Orchestration is simply the automation of those automated systems. It’s the reason you can log into a cloud provider, fill out a wizard, and with the push of a button have a pre-canned set of VMs, load balancers, security features, and virtual networks instantiated, configured, and made publicly available.

To use an analogy, automation is like graduating from making your flour with a mortar and pestle to an electric grinder. Orchestration is an industrial bakery.

Multitenancy in today’s data centers combines self-service, orchestration, and automation with logical infrastructure isolation technologies like compute, storage, and networking virtualization. The result is the ability for multiple organizations to share the same physical infrastructure securely—but making this happen isn’t easy.

The Advantages of ‘Open’

Multitenancy requires the orchestration of multiple different types of IT infrastructure. In any given data center there can be multiple different vendors providing products for networking, compute, and storage, in addition to vendors for management, self-service capabilities for tenants, security, and more. Orchestrating all of these pieces is most easily accomplished if each product uses both open protocols and open standards.

This is especially true of networks, which are the backbone tying all the other technologies involved together. Today, networks involve multiple vendors. In addition to physical networking, there are virtual switches in both hypervisors and microvisors. Each hosted and public cloud provider has its own networking to consider. Organizations will also have to put effort into securing the connections between data centers, and both the public cloud and hosted services that they use.

Automation is critical for making the day-to-day of modern IT viable. Open standards and open protocols, on the other hand, are important for keeping modern IT viable.

Modern data centers no longer do bulk forklift upgrades. The “refresh cycle” is a myth, one displaced by the cold reality of perpetual organic growth. There is a constant churn in the data center, a response not only to the need to continually scale, but the need to constantly change.

Data centers are no longer homogeneous, single-vendor islands. Advancing data centers to be able to deliver the multitenancy that today’s users expect means creating data centers that are not only complicated and constantly evolving, but also capable of coping with multiple similar products, from multiple vendors.

For products and vendors to be able to enter—and eventually leave—the data center without causing operational disruption, these products must all be able to communicate in a standardized fashion. This is where open standards and open protocols come in. It’s also where Cumulus Linux comes in.

Cumulus Linux is based upon open standards, open protocols, and open source. Cumulus Linux can be fully automated and orchestrated, and plays well with others. If you’re looking to evolve your network toward the kind of multitenancy that public clouds have taught us all to expect, then Cumulus Linux is what you’ll need to make your network simple enough to manage … and keep it that way.

26 March, 2020 03:06PM by Katherine Gorham

Ubuntu developers

Ubuntu Blog: How Domotz streamlined provisioning of IoT devices

As the number of IoT devices scales, the challenge of provisioning them and keeping them up to date in the field increases. Domotz, who manufacture an all-in-one network monitoring and management device for enterprise IoT networks, found themselves facing this challenge, further compounded by their rapid software release cadence.

One of the most crucial and difficult aspects for Domotz to solve was the delivery of automatic updates to the tens of thousands of devices deployed. Domotz turned to snaps and Ubuntu Core to meet their exacting requirements.

I absolutely believe that Ubuntu Core and snaps give us a competitive advantage. We are the only company in the IoT network management space that can guarantee a secure, always-up-to-date device for our customers’ on-premises deployments.

Giancarlo Fanelli, CTO, Domotz

Download the full case study to learn more including:

  • How the use of Ubuntu Core, snaps and private brand stores offers the security, reliability and customisation Domotz needed to roll out to their 3,000 customers
  • The development time saved over building and maintaining their own provisioning channel
  • The ease of integration with their existing infrastructure including Windows machines and AWS virtual servers

To view the case study, complete the form below:

26 March, 2020 09:00AM

Xubuntu: Xubuntu 20.04 Testing Week

We’re delighted to announce that we’re participating in an ‘Ubuntu Testing Week’ from April 2nd to April 8th with other flavors in the Ubuntu family. On April 2nd, we’ll be releasing the beta release of Xubuntu 20.04 LTS, after halting all new changes to its features, user interface and documentation. And between April 2nd and the final release on April 23rd, all efforts by the Xubuntu team and community are focused on ISO testing, bug reporting, and fixing bugs.

So, we highly encourage you to join the community by downloading the daily ISO image and trying it out, though you are welcome to start from today. There are a variety of ways that you can help test the release, including trying out the various testcases for live sessions and installations on the ISO tracker (Xubuntu is found at the bottom of the page), which take less than 30 minutes to complete (example 1, example 2, example 3 below).

You can test without changing your system by running it in a VM (Virtual Machine) with software like VMWare Player, VirtualBox (apt-install), and Gnome Boxes (apt-install), or by running the live session from a USB, SD Card, or DVD, which also lets you test whether your hardware works correctly. A number of applications, such as Etcher and Gnome Disks, can copy the ISO to a USB drive or SD card. We encourage those who are willing to install it, either in a VM or on physical hardware (it requires at least 6GB of hard disk space), and use it continuously for a few days, as more bugs can be reported this way.

If you find a bug in the installer, you can file it against ubiquity, or if you find a bug not in an application but in the live session, from booting to shutdown, you can file it against casper. If you can’t figure out which package to file a bug against after watching the video above, then please file it with the Xubuntu Bugs Team.

Please test apps that you regularly use, so you can identify bugs and regressions that should be reported. New ISO files are built every day, and you should always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO file on Linux by running the command below in the folder containing the ISO, after right-clicking on the folder and selecting ‘Open in Terminal’ from the context menu (example).

$ zsync

In order to assist you in your testing efforts, we encourage you to read our Quality Assurance (QA) guide and our new testers wiki. You can also chat with us live in our dedicated IRC channel ( #ubuntu-quality on freenode ) or telegram group ( Ubuntu Testers ). In order to submit reports to us, you’ll need a Launchpad account, and once you have one, you can also join the Xubuntu Testers team.

We hope that you will join the community in making Xubuntu 20.04 a success, and hope that you will also take time to test out the other Ubuntu flavors (Kubuntu, Lubuntu, Ubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, and Ubuntu Studio), as we will all benefit from that. We look forward to your contributions, your live chatting and your return for future testing sessions. Happy bug hunting.

26 March, 2020 06:00AM

Ubuntu Blog: Securing open source through CVE prioritisation

Securing open source through CVE prioritisation

According to a recent study, 96% of applications in the enterprise market use open-source software. As the open-source landscape becomes more and more fragmented, the task to assess the impact of potential security vulnerabilities for an organisation can become overwhelming. Ubuntu is known as one of the most secure operating systems, but why? Ubuntu is a leader in security because, every day, the Ubuntu Security team is fixing and releasing updated software packages for known vulnerabilities. It is a continuous 24/7 effort. In fact, on average, the team is providing more than 3 updates each day, and the most vital updates are prepared, tested and released within 24 hours. To achieve that result, Canonical designed a robust process to review, prioritise and fix the most crucial software vulnerabilities first. Software vulnerabilities are tracked as part of the Common Vulnerabilities and Exposures (CVE) system, and almost all security updates published by the Ubuntu Security team (via Ubuntu Security Notices – USNs) are in response to a given public CVE. 

The robust triage process

The Ubuntu Security team manages their own CVE database to track various CVEs against the software packages within the Ubuntu archive. As part of this process, each day the team triages the latest public vulnerabilities from various sources, including MITRE, NIST NVD and others. This triage process involves assessing every single new publicly announced CVE and determining which (if any) software packages in Ubuntu may be affected, collecting any information required for patching the package (including upstream patches) and noting any potential mitigations for the vulnerability. Once CVEs are triaged against the applicable software packages, they are assigned a priority, from the range of negligible, low, medium, high and critical. This priority is then used by the Ubuntu Security team to indicate which vulnerabilities should be addressed first.

Extended CVE review 

A common method for assessing the severity of CVEs is the Common Vulnerability Scoring System (CVSS). This is designed to provide a numerical value for the severity of a particular vulnerability and to allow severities to be compared between vulnerabilities. The CVSS score for a given CVE is calculated using a number of inputs, and whilst this allows various aspects of the vulnerability to be considered, it has a number of shortcomings. In particular, whilst CVSS was designed to assess the technical severity of a vulnerability, it is often misused instead as a means of vulnerability prioritisation or risk assessment. For example, there are many aspects that are important to consider for a given vulnerability which are not captured by CVSS, including the likelihood that the given software package is installed or in use, whether the default configuration of a package may mitigate the vulnerability and whether a known exploit against the vulnerability exists.

CVE Prioritisation done right

In contrast, the priority value assigned by the Ubuntu Security team is designed to capture these and other elements so that it can be used as an effective measure to prioritise security software updates, taking into account every Ubuntu instance – including server, desktop, cloud, and IoT. Vulnerabilities that affect the largest number of Ubuntu installations and which present the largest risk (by, say, being remotely exploitable without any user input) are prioritised critical or high. Those which affect only a small number of users, require user input or cause smaller effects such as a denial-of-service may be prioritised as medium, low or negligible. This CVE prioritisation is done on a case-by-case basis for each vulnerability, and since a given vulnerability might apply to more than one package in the Ubuntu archive, this can be assigned further on a vulnerability-per-package basis as well. This ensures that those vulnerabilities which have the highest risk and impact and which are likely to affect the largest number of Ubuntu installations are fixed first, regardless of the given CVSS score, so that the risk of exploitation through known software vulnerabilities is limited as much as possible.
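As a toy illustration of how factors like these might combine, consider the sketch below. The decision rules and thresholds here are invented for the example; the Ubuntu Security team's real triage is a case-by-case human judgement, not a formula.

```python
def ubuntu_style_priority(widely_installed, remote_no_user_input,
                          known_exploit, denial_of_service_only):
    """Toy priority assignment based on the factors discussed above.

    NOT the Ubuntu Security team's actual criteria; purely illustrative.
    """
    if remote_no_user_input and widely_installed:
        # Largest reach plus easiest exploitation ranks highest.
        return "critical" if known_exploit else "high"
    if denial_of_service_only or not widely_installed:
        # Limited impact or a small installed base ranks low.
        return "low"
    return "medium"

# A remotely exploitable flaw with a public exploit, in a widely
# installed package, lands at the top of the queue: "critical".
priority = ubuntu_style_priority(widely_installed=True,
                                 remote_no_user_input=True,
                                 known_exploit=True,
                                 denial_of_service_only=False)
```

Note how a vulnerability with an identical CVSS score can land at very different priorities here, depending on how widely the affected package is actually deployed.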

To read more about the priority which is assigned for each vulnerability, as well as the criteria used for each priority assignment as part of the CVE prioritisation, refer to the Ubuntu CVE Tracker.

26 March, 2020 12:12AM

March 25, 2020

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, February 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 226 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA gave back 12 out of his assigned 14h, thus he is carrying over 2h for March.
  • Ben Hutchings did 19.25h (out of 20h assigned), thus carrying over 0.75h to March.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 5.5h (out of 4h assigned and 1.5h from January).
  • Emilio Pozuelo Monfort did 29h (out of 20h assigned and 15.75h from January), thus carrying over 6.75h to March.
  • Hugo Lefeuvre gave back the 12h he got assigned.
  • Markus Koschany did 10h (out of 20h assigned and 8.75h from January), thus carrying over 18.75h to March.
  • Mike Gabriel did 5.75h (out of 20h assigned) and gave 12h back to the pool, thus he is carrying over 2.25h to March.
  • Ola Lundqvist did 10h (out of 8h assigned and 4.5h from January), thus carrying over 2.5h to March.
  • Roberto C. Sánchez did 20.25h (out of 20h assigned and 13h from January) and gave back 12.75h to the pool.
  • Sylvain Beucler did 20h (out of 20h assigned).
  • Thorsten Alteholz did 20h (out of 20h assigned).
  • Utkarsh Gupta did 20h (out of 20h assigned).

Evolution of the situation

February began as a rather calm month, and the fact that more contributors have given back unused hours is an indicator of this calmness, and also an indicator that contributing to LTS has become more of a routine now, which is good.

In the second half of February Holger Levsen (from LTS) and Salvatore Bonaccorso (from the Debian Security Team) met at SnowCamp in Italy and discussed tensions and possible improvements from and for Debian LTS.

The security tracker currently lists 25 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


25 March, 2020 04:44PM

ArcheOS


Speleoarchaeology: recovering old maps in 3D

Hello everybody,
during these days I am working on a speleoarchaeological project regarding a cave called "Bus dela Spia" ("Spy's hole") in Trentino. I already wrote about our first mission in this environment, looking for archaeological evidence. The data I am working on now comes from a new exploration, performed in January 2020.
I will report more details about the second mission soon, but for now I would like to share a short video showing the result of a test I did to try to recover some old maps and sections, to use them as base cartography for our project, which was focused on the 3D documentation of some specific AOIs (Areas Of Interest).
Below is an image showing the old documentation of the "Bus dela Spia", based on past speleological explorations (supported by underwater spelunking activities). As you can see, despite sharing the same scale bar, the map and the section report different values on the X axis. This is because the section does not follow a linear path, since the cave, obviously, is not straight: while the 3D path of the map has been projected, as usual, onto a two-dimensional plane, the section has been "unrolled", reducing (avoiding?) any projection.

The old documentation (map and section) of the "Bus dela Spia"

To manage this kind of data in an easier way, I tried to restore their original shape in 3D in Blender. I will describe this process in a future post on ATOR; for now, as I said, I will just show the result in the video below.

I will use this raw 3D model to position our 3D documentation of some archaeological evidence, performed via SfM-MVS, trying, in the meantime, to recover some old laser scan data (2009).
I hope this post will be useful if you ever have the same problem (combining the map and section of old documentation). I will try to publish more details about the workflow ASAP. Have a nice day!

25 March, 2020 04:40PM by Luca Bezzi


Purism PureOS

Free software for remote working

Purism has been working remotely since we started in 2014. Here’s our list of essential free software for remote work; all of it can be self-hosted or used via various hosted options.

Chat, Calls and Video Conferencing

Team chat has already become an essential tool for teams looking to be more collaborative and less reliant on email. At Purism we use Matrix for team chat, 1 to 1 calls, video conferencing via Jitsi (open source video conferencing), ad hoc file sharing and all our community chat channels. Matrix is a distributed (federated) network, similar to email, which means you can communicate across Matrix servers and compatible services.

You can self host Matrix or use a public instance like our own free Librem Chat service part of Librem One. All the goodness of Matrix conveniently hosted for you and accessible with one account that also gives you access to Librem Social, our hosted Mastodon instance, and our premium services: end-to-end encrypted email and VPN.

Audio Conferencing

We use Mumble for weekly team calls and general large group audio conferencing. We really like its low bandwidth requirements and found it scales really well for our all-hands meeting.


Our primary social channel is on our free Librem Social service powered by Mastodon. Like Matrix, Mastodon is a distributed (federated) network, so you can create an account on one of the many public servers or host your own instance and still communicate across instances. Setting up a private company Mastodon instance can be a great way for everyone to share their days.

Librem Chat and Librem Social are free services, part of Librem One



In addition to our community chat and social channels we have Discourse forums for our various products and support. Forums are great for long-term conversations not suitable for chat. If you are new to remote work, try out both team chat and forums to see what works for your team.

Project Management and DevOps

At Purism we have a pledge that all our software and hardware will be free/libre and open source. We host our own GitLab Community Edition instance for our source code, project management, support and DevOps. GitLab also has powerful user and group management which makes it easy to work with hundreds of active community contributors. For PureOS we also host Phabricator for ticketing.

Content Authoring and Publishing

Our various web properties use WordPress for content authoring but we publish static sites for security and speed. We are looking to migrate to pure static site generators in the future but WordPress has been an essential tool for us to launch products and share updates with the community.

Calendar and Files

We heavily use Nextcloud internally for our calendars, event scheduling, general file storage and collaborating on documents.

Operating System

At Purism we use PureOS, our secure GNU/Linux operating system based on Debian. PureOS comes with many security improvements over a default setup from the average Linux distribution. There’s support for our TPM chips and Librem Key. We’ve also enabled AppArmor for more secure apps and we’ve created a better, safer browsing experience by blocking ads and enforcing HTTPS everywhere. See the PureOS wiki to learn more about the extensive security features in PureOS.

PureOS is the same operating system we run on our Librem laptops, servers, our recently announced Librem Mini and even on our Librem 5 smartphone. Yes, that’s right: the Librem 5 runs a complete desktop Linux experience with access to the same rich app ecosystem.


Most office-based teams already have email and things like a company newsletter but we thought we’d share how we manage ours. Our company email and Librem Mail are powered by Dovecot and we use GNU Mailman for our newsletter and mailing lists. We also have an internal wiki based on wiki.js.

If you’d like to know more about how we work remotely let us know on social, chat or our forums.

The post Free software for remote working appeared first on Purism.

25 March, 2020 04:19PM by Sean Packham

SparkyLinux


Sparky named repos

New Sparky named repositories have been created, alongside the present ones:
• oldstable -> tyche
• stable -> nibiru
• testing -> potolo

What is it for?
Developing and providing packages for Sparky based on Debian testing only was quite easy: it was just one branch, developed as a rolling release. No changes in repos were required then.

Everything changed after releasing Sparky on Debian stable and keeping the oldstable line as well.

Every big upgrade (from testing to a new stable, and from stable to a new oldstable) required manual changes in the repo lists.

To avoid that, the named repos will let you smoothly upgrade your Sparky installation without any manual change. It will be done via a regular upgrade using the package manager/upgrade tool only.


What to do now?
There is no need to change the Sparky repos manually; simply upgrade your system as you usually do.

The following packages install the new Sparky repo automatically:
• sparky4-apt on Sparky 4 Tyche
• sparky5-apt on Sparky 5 Nibiru
• sparky6-apt on Sparky rolling (upcoming 6 Po Tolo)

Let me know on our forums if you find any problems with that.
The wiki pages for the Sparky repos have already been updated.

25 March, 2020 12:10PM by pavroo


Ubuntu developers

Podcast Ubuntu Portugal: Ep 82 – Corsários e Capitães

Nuno do Carmo, our favourite corsair, is back to tell us how WSLConf went. Stick around to find out what a Captain is, too. You know the drill: listen, comment and share!



This episode was produced and edited by Alexandre Carrapiço, Senhor Podcast.

You can support the podcast using our Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well over 15 dollars, so if you can, pay a bit more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link and you will also be supporting us.

Attribution and licences

The theme music is: “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing it for other kinds of use; contact us for validation and authorisation.

25 March, 2020 10:07AM

David Tomaschik: Security 101: X-Forwarded-For vs. Forwarded vs PROXY

Over time, there have been a number of approaches to indicating the original client and the route that a request took when forwarded across multiple proxy servers. For HTTP(S), the three most common approaches you’re likely to encounter are the X-Forwarded-For and Forwarded HTTP headers, and the PROXY protocol. They’re all a little bit different, but also the same in many ways.


X-Forwarded-For

X-Forwarded-For is the oldest of the 3 solutions, and was probably introduced by the Squid caching proxy server. As the X- prefix implies, it’s not an official standard (i.e., an IETF RFC). The header is an HTTP multi-valued header, which means that it can have one or more values, each separated by a comma. Each proxy server should append the IP address of the host from which it received the request. The resulting header looks something like:

X-Forwarded-For: client, proxy1, proxy2

This would be a request that has passed through 3 proxy servers – the IP of the 3rd proxy (the one closest to the application server) would be the IP seen by the application server itself. (Often referred to as the “remote address” or REMOTE_ADDR in many application programming contexts.)

So, you could end up seeing something like this:

X-Forwarded-For: 2001:DB8::6,

Coming from a TCP connection from This implies that the client had IPv6 address 2001:DB8::6 when connecting to the first proxy, then that proxy used IPv4 to connect from to the final proxy, which was running on localhost. A proxy running on localhost might be nginx splitting between static and application traffic, or a proxy performing TLS termination.


Forwarded

The HTTP Forwarded header was standardized in RFC 7239 in 2014 as a way to better express the X-Forwarded-For header and related X-Forwarded-Proto and X-Forwarded-Port headers. Like X-Forwarded-For, this is a multi-valued header, so it consists of one or more comma-separated values. Each value is, itself, a set of key-value pairs, with pairs separated by semicolons (;) and the keys and values separated by equals signs (=). If the values contain any special characters, the value must be quoted.

The general syntax might look like this:

Forwarded: for=client, for=proxy1, for=proxy2

The key-value pairs are necessary to allow expressing not only the client, but the protocol used, the original HTTP Host header, and the interface on the proxy where the request came in. For figuring out the routing of our request (and for parity with the X-Forwarded-For header), we’re mostly interested in the field named for. While you might think it’s possible to just extract this key-value pair and look at the values, the authors of the RFC added some extra complexity here.

The RFC contains provisions for “obfuscated identifiers.” It seems this is mostly intended to prevent revealing information about internal networks when using forward proxies (e.g., to public servers), but you might even see it when operating reverse proxies. According to the RFC, these should be prefixed by an underscore (_), but I can imagine cases where this would not be respected, so you’d need to be prepared for that when parsing the identifiers.

The RFC also contains provisions for unknown upstreams, identified as unknown. This is used to indicate forwarding by a proxy in some manner that prevented identifying the upstream source (maybe it was through a TCP load balancer first).

Finally, there’s also the fact that, unlike the defacto standard of X-Forwarded-For, Forwarded allows for the option of including the port number on which it was received. Because of this, IPv6 addresses are enclosed in square brackets ([]) and quoted.

The example from the X-Forwarded-For section above written using the Forwarded header might look like:

Forwarded: for="[2001:DB8::6]:1337", for=;proto=https

Additional examples taken from the RFC:

Forwarded: for="_gazonk"
Forwarded: For="[2001:db8:cafe::17]:4711"
Forwarded: for=;proto=http;by=
Forwarded: for=, for=
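A naive sketch of parsing this header could look like the following. Note it deliberately skips the hard part (commas or semicolons inside quoted strings), which a full RFC 7239 parser must handle:

```python
def parse_forwarded(header_value):
    """Naively parse a Forwarded header into one dict per hop.

    Caveat: splits on ',' and ';' without honouring quoting, so a
    quoted value containing those characters would be mangled. Keys
    are lower-cased because the RFC makes them case-insensitive.
    """
    hops = []
    for element in header_value.split(","):
        pairs = {}
        for pair in element.split(";"):
            if "=" not in pair:
                continue
            key, _, value = pair.partition("=")
            pairs[key.strip().lower()] = value.strip().strip('"')
        hops.append(pairs)
    return hops

parse_forwarded('For="[2001:db8:cafe::17]:4711", for=unknown;proto=http')
```

Be prepared for the obfuscated (`_gazonk`) and `unknown` identifiers described above to show up in the parsed `for` values.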

PROXY Protocol

At this point, you may have noticed that both of these headers are HTTP headers, and so can only be modified by L7/HTTP proxies or load balancers. If you use a pure TCP load balancer, you’ll lose the information about the source of the traffic coming in to you. This is particularly a problem when forwarding HTTPS connections where you don’t want to offload your TLS termination (perhaps traffic is going via an untrusted 3rd party) but you still want information about the client.

To that end, the developers of HAProxy developed the PROXY protocol. There are (currently) two versions of this protocol, but I’ll focus on the simpler (and more widely deployed) 1st version. The proxy should add a line at the very beginning of the TCP connection in the following format:

PROXY <protocol> <srcip> <dstip> <srcport> <dstport>\r\n

Note that, unlike the HTTP headers, this makes the PROXY protocol not backwards compatible. Sending this header to a server not expecting it will cause things to break. Consequently, this header will be used exclusively by reverse proxies.

Additionally, there’s no support for information about the hops along the way – each proxy is expected to maintain the same PROXY header along the way.

Version 2 of the PROXY protocol is a binary format with support for much more information beyond the version 1 header. I won’t go into details of the format (check the spec if you want) but the core security considerations are much the same.

Security Considerations

If you need (or want) to make use of these headers, there are some key security considerations in how you use them to use them safely. This is particularly of consideration if you use them for any sort of IP whitelisting or access control decisions.

Key to the problem is recognizing that the headers represent untrusted input to your application or system. Any of them could be forged by a client connecting, so you need to consider that.

Parsing Headers

After I spent so long telling you about the format of the headers, here’s where I tell you to disregard it all. Okay, really, you just need to be prepared to receive poorly-formatted headers. Some variation is allowed by the specifications/implementations: optional spaces, varying capitalization, etc. Some of this will be benign but still unexpected: multiple commas, multiple spaces, etc. Some of it will be erroneous: broken quoting, invalid tokens, hostnames instead of IPs, ports where they’re not expected, and so on.

None of this, however, precludes malicious input in the case of these headers. They may contain attempts at SQL Injection, Cross-Site Scripting and other malicious content, so one needs to be cautious in parsing and using the input from these headers.

Running a Proxy

As a proxy, you should consider whether you expect to be receiving these headers in your requests. You will only want that if you are expecting requests to be forwarded from another proxy, and then you should make sure the particular request came from your proxy by validating the source IP of the connection. As untrusted input, you cannot trust any headers from proxies not under your control.

If you are not expecting these headers, you should drop the headers from the request before passing it on. Blindly proxying them might cause downstream applications to trust their values when they come from your proxy, leading to false assertions about the source of the request.

Generally, you should rewrite the appropriate headers at your proxy, including adding the information on the source of the request to your proxy, before passing them on to the next stage. Most HTTP proxies have easy ways to manage this, so you don’t usually need to format the header yourself.

Running an Application

This is where it gets particularly tricky. If you’re using IP addresses for anything of significance (which you probably shouldn’t be, but it’s likely there’s some cases where people still are), you need to figure out whether you can trust these headers from incoming requests.

First off, if you’re not running the proxies: just don’t trust them. (Of course, I count a managed provider as run by you.) Also, if you’re not running the proxy, I hope we’re only talking about the PROXY protocol and you’re not exposing plaintext to untrusted 3rd parties.

If you are running proxies, you need to make sure the request actually came from one of your proxies by checking the IP of the direct TCP connection. This is the “remote address” in most web programming frameworks. If it’s not from your proxy, then you can’t trust the headers.

If it’s your proxy and you made sure not to trust incoming headers in your proxy (see above), then you can trust the full header. Otherwise, you can only trust the incoming hop to your proxy and anything before that is not trustworthy.
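Putting that together, a minimal sketch of recovering the client address could look like this; the TRUSTED_PROXIES set is a hypothetical example value standing in for your own infrastructure:

```python
TRUSTED_PROXIES = {"", ""}  # hypothetical example values

def client_ip(remote_addr, xff_header):
    """Walk the chain right-to-left, skipping our own trusted proxies.

    remote_addr is the address of the direct TCP connection;
    xff_header is the X-Forwarded-For value (possibly empty). The
    first hop that is not one of our proxies is the best guess at the
    client; anything to its left was supplied by untrusted parties.
    """
    hops = [h.strip() for h in xff_header.split(",")] if xff_header else []
    for addr in reversed(hops + [remote_addr]):
        if addr not in TRUSTED_PROXIES:
            return addr
    # Degenerate case: every hop was one of our own proxies.
    return remote_addr
```

Note that a spoofed header sent directly by an untrusted client is ignored here, because the walk stops at the direct connection's address before ever reaching the forged entries.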

Man in the Middle Attacks

All of this disregards MITM attacks of course. If an attacker can inject traffic and spoof source IP addresses into your traffic, all bets on trusting headers are off. TLS will still help with header integrity, but they can still spoof the source address, convincing you to trust the headers in the request.

Bug Bounty Tip

Try inserting a few headers to see if you get different responses. Even if you don’t get a full authorization out of it, some applications will give you debug headers or other interesting behavior. Consider some of the following:

Forwarded: for="_localhost"

25 March, 2020 07:00AM

March 24, 2020

OSMC


OSMC's March update is here with Kodi v18.6

Firstly, and most importantly, we hope that everyone and their loved ones are staying safe. We continue to work on and develop OSMC during this time and offer support, and our store also remains open with orders being fulfilled promptly and without delay.

Team Kodi recently announced the 18.6 point release of Kodi Leia. We have now prepared this for all supported OSMC devices and added some improvements and fixes.

Our next video stack with support for HDR10+ and 3D MVC for Vero 4K and Vero 4K + will be made available for testing on our forums within the next 48 hours.

We continue to work on Raspberry Pi 4 support.

Here's what's new:

Bug fixes

  • Fixed an issue where Kodi would crash if using SSH based shares
  • Fixed an issue where Raspberry Pi could run at higher temperatures and consume excess power when idling
  • Fixed an issue which caused bookmarks to be corrupted on Vero 4K / 4K +
  • Fixed an issue which could prevent Vero 4K / 4K + from booting when some Seagate hard drive models were attached

Improving the user experience

  • Added long press support and a low battery warning for the OSMC Remote Controller. You can learn more about the new long press functionality here.
  • Improved OSMC remote response time for repeat presses
  • Added support for HiDPI displays with the OSMC Windows installers
  • Added an option to hide the item count in the OSMC skin


  • Updated Raspberry Pi system firmware
  • Added support for OTA bootloader updates for Vero 4K / 4K +

Wrap up

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC setup. Of course — if you have updates scheduled automatically you should receive an update notification shortly.

If you enjoy OSMC, please follow us on Twitter, like us on Facebook and consider making a donation if you would like to support further development.

You may also wish to check out our Store, which offers a wide variety of high quality products which will help you get the best of OSMC.

24 March, 2020 09:45PM by Sam Nazarko


Purism PureOS

Purism’s contributions to Linux 5.5 and 5.6

Following up on our report for Linux 5.4, we continue to improve mainline kernel support for the Librem 5 phone. Here’s a summary of the progress we have made during the 5.5 and 5.6 development cycles.

Librem 5’s backlight LED driver

The driver that saw the most patches was the LED backlight driver, more specifically the one for the lm3692x family of chips, as used in the Librem 5 to drive the LCD panel backlight. It makes up almost half of the changes submitted, alongside bug fixes and preparations for other changes.

We extended the driver so it can configure over-voltage protection and the maximum LED current to not damage the LED strips of the panel:

We also made sure the LED strip is turned fully off at brightness level 0, which saves a bit of power but more importantly prevents the phone from glowing slightly in the dark:

Broadmobi baseband modem sound support

Broadmobi 818 support was added to the gtm601 driver, for audio calls:

Librem 5’s IMU sensor

Building on our previous work on driver support itself, we now hook up the IMU sensor to the devkit device tree hardware description. Things like accelerometer and magnetometer can now work out of the box:

Also, since the chip is oriented differently on the devkit’s mainboard than on the Birch and Chestnut batches of the phone, we added support for the “mount matrix” API:

Support for the Librem 5’s fuel gauge

The battery fuel gauge on the Librem 5 is similar to the max17042; its driver was extended to support the phone’s max17055:

eLCDIF display controller

Our effort to make the display stack work out of the box continued by adding the eLCDIF controller to the i.MX8MQ’s device tree. It can also pass on flags from the DSI controller (which acts as a DRM bridge):

Thermal throttling

We enabled thermal throttling for the GPU. This was a pure device tree change. The code was already there:

Misc fixes

We fixed the scaling of the Librem 5’s light and proximity sensors to get correct values:

Lastly, we submitted patches for the Librem 5’s charge controller driver to work as a module:

Code review

This round we contributed 5 Reviewed-by: or Tested-by: tags to patches by other authors.

For current ongoing work, check out the kernel tree. Some of this is already merged into linux-next and should make it into a future report.

The post Purism’s contributions to Linux 5.5 and 5.6 appeared first on Purism.

24 March, 2020 08:12PM by Guido Günther


Ubuntu developers

Ubuntu Blog: Kubernetes 1.18 available from Canonical

Canonical today announced full enterprise support for Kubernetes 1.18, with support covering Charmed Kubernetes, MicroK8s and kubeadm. As Canonical is committed to releasing in tandem with upstream Kubernetes, enterprises can benefit from the latest additions to enhance their day-to-day operations.

“Canonical’s drive is to enable enterprises by giving them the tools to seamlessly deploy and operate their Kubernetes clusters. This new Kubernetes release unlocks capabilities for both MicroK8s and Charmed Kubernetes, with new add-ons like Kubeflow 1.0, Multus and support for the upcoming Ubuntu 20.04 LTS release. We are excited to work with our customers and partners to deliver them an unparalleled Kubernetes experience,” commented Alex Chalkias, Product Manager at Canonical.

MicroK8s, the lightweight, single-snap packaged Kubernetes, is suited for edge and IoT use cases like Raspberry Pi clustering and ideal for DevOps teams that want to create CI/CD pipelines to test K8s-based applications. Users following the latest stable MicroK8s track will be automatically upgraded to Kubernetes 1.18. The recent Kubeflow 1.0 release can be enabled in MicroK8s by a single command, unlocking the capabilities of AI/ML at the edge.

Charmed Kubernetes, Canonical’s multi-cloud Kubernetes delivered on the widest range of clouds, benefits from preview release support for the upcoming 20.04 LTS release. Multus, a container network interface (CNI) plugin that enables the creation of multiple virtual network interfaces on Kubernetes pods, has been added to the list of supported tools. Users interested in container storage interface (CSI) add-ons for filesystem storage can now benefit from support for CephFS. CIS benchmark 1.5 is also supported for organisations that are looking to increase their security and compliance.
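To make the Multus addition concrete, here is a sketch of the kind of NetworkAttachmentDefinition it consumes — the resource name, the macvlan type and the `eth0` parent interface are assumptions for illustration, not Canonical-supplied defaults:

```shell
# Write a hypothetical NetworkAttachmentDefinition describing a second,
# macvlan-based interface that pods can request via an annotation.
cat > extra-net.yaml <<'EOF'
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "dhcp" }
    }'
EOF
echo "manifest written: extra-net.yaml"
```

A pod would then opt in to the extra interface by carrying the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`; its first interface still comes from the cluster’s default CNI.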

What’s new:

MicroK8s

  • etcd is upgraded to 3.4
  • CoreDNS addon is upgraded to v1.6.6
  • New helm3 addon is available with `microk8s.helm3`
  • Juju is upgraded to 2.7.3 and is packaged with the snap
  • Ingress RBAC rule to create configmaps
  • Certificates are set to have a lifespan of 365 days
  • `microk8s.reset` can disable add-ons
  • Allow `microk8s.kubectl` to use plugins such as krew
  • Fix in enabling add-ons via the REST API
  • On a ZFS machine, the native snapshotter will be used
  • Improved `microk8s.status` output
  • Hostpath can now list events when RBAC is enabled
  • New snap interface added, enabling other snaps to detect MicroK8s’ presence

Charmed Kubernetes

  • Preview release of Ubuntu 20.04 LTS support
  • Added support for CIS benchmark 1.5
  • Support for CephFS
  • Addition of Multus CNI – a plugin to enable multiple virtual network interfaces for pods

24 March, 2020 06:29PM

hackergotchi for Cumulus Linux

Cumulus Linux

Automate, orchestrate, survive: treating your network as a holistic entity

Organizations need to learn to think about networks as holistic entities. Networks are more than core routers or top-of-rack (ToR) switches. They’re composed of numerous connectivity options, all of which must play nice with one another. What role does automation play in making network heterogeneity viable? And does getting all the pieces from a single vendor really make management easier if that vendor has 15 different operating systems spread across their lineup of network devices?

Most network administrators are used to thinking about their networks in terms of tiers. Access is different from branch, which is different from campus, and so forth. Datacenter is something different again, and then there’s virtual networking complicating everything.

With networks being so big and sprawling that they frequently occupy multiple teams, it’s easy to focus on only one area at a time. Looking at the network holistically—both as it exists, and as it’s likely to evolve—is a much more complicated process, and increasingly important.

Networks grow, evolve and change. Some of this is organic; growth of the organization necessitates the acquisition of new equipment. Other times growth is more unmanaged; something that’s especially common with mergers and acquisitions (M&As).

Regardless of reason, change in the network introduces the possibility for a heterogeneous management environment. The greater the number of management planes, the more difficult network-wide orchestration is—this has real-world implications for IT teams beyond networking.

For example, holistic networking—or the lack thereof—impacts both security and compliance efforts.

The tipping point when automation becomes a necessity

Automation is an absolute necessity to do IT operations at scale. There comes a point in any organization’s growth where adding more personnel to the IT department simply doesn’t enable additional growth. Past that point, there’s no path forward except automation.

None of this is controversial. Every organization with more than a few hundred employees has faced this truth, and many are struggling with the result. There’s so much automation in use that there’s now a requirement to automate the automation. In other words: orchestration.

But how viable is orchestration in a heterogeneous environment? Each vendor has their own management stack. APIs are different. Systems and devices may or may not have support in third-party configuration management tools such as Puppet, Chef, Saltstack or Ansible. The necessity of orchestration coupled with the difficulties posed by heterogeneous environments has traditionally been the loudest argument made for absolute loyalty to a single vendor, especially in the networking world.

That rationale, however popular, falls apart upon closer examination. There are few vendors who can offer every piece of the networking puzzle, and they all suffer from an explosion of management options, multiple operating systems (each with different features and functionality) and all the very same problems that come with heterogeneous networks in the first place.

In addition, those unexpected events—such as M&As—do occasionally occur. They’ll bring their own diversity to the table, as will the expansion of cloud computing, edge computing, mobile and all the other places around the world that your organization’s bits must flow.

An organization’s survival is inextricable from the health of its IT, and that IT doesn’t work unless the network does. How will all that IT infrastructure be automated, orchestrated and managed? Today? Tomorrow? Ten years from now? The foundations laid today will be built upon year after year.

If there’s little or no real-world operational efficiency to be gained through futile attempts at networking monoculture, doesn’t pursuing a strategy of building an infrastructure designed from the ground up to support heterogeneity make the most sense? Vendors should be chosen based on their commitments to open networks, open protocols and open standards. Not for any ideological reasons, but for pragmatic ones — networks won’t get less complex with time, and 10 years from now you’ll be stitching together something new on top of whatever automation and orchestration it is that you choose to build today.

The holistic solution

Cumulus Linux allows you to manage the switch like a server with network automation. With Linux-based switches, NetDevOps is easier than ever before, as IT teams can leverage the automation tools with which they’re already familiar.

Cumulus Linux is built for agility, and offers a standard base from which to begin building the adaptive, open network of the future—one designed for heterogeneity, founded on a consistent Linux operating model, and enabling scale through automation that’s easier than has ever been possible before.

24 March, 2020 04:33PM by Katherine Gorham

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 Software Updates March 2020 Part 1

Backing up via Deja Dup can be done easily by installing it from the software store. This program is commonly used on desktop computers to create and restore backups. On the Librem 5, it makes backing up to an SD card simple, since the Librem 5 is a fully powered portable computer with a touch interface that fits in your pocket.

Deja dup running on the Librem 5

Apps published on Flathub can be added to the Librem 5 store with this command:

sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

After that, open Gnome Web and navigate to Flathub. Then install something like Gnome Podcasts. From now on, you can browse Flathub as an unofficial community repository. While there are a lot of apps on Flathub, they were not all made with the Librem 5 in mind. To help filter down which apps are built for the Librem 5, we will be creating our own Flatpak repository. Once that is ready to go, it will be enabled by default for all Librem 5 owners.

Phoc/wlroots now picks sane format modifiers. This allows GL applications to run without visual artifacts.

Before and after mesa bugfix

A community-driven application called Pure Maps can be installed on the Librem 5. This will offer turn-by-turn navigation to Librem 5 owners once we finish hooking up the GPS. For now, you can enter a starting location and destination to get step-by-step directions.

Pure Maps running on the Librem 5

Smartphone displays take a lot of power to run. To help optimize how the screen uses power, the automatic screen brightness can be enabled in the settings. We plan to enable this by default once it’s smoothed out and less distracting.

Power Improvements:

You can see that the power draw goes down as the kernel version rises. From version 5.1 to 5.12, we see about a 10% improvement in heat, 30% in battery draw, and 90% in load average.

The power draw goes up after 300 seconds. That is where the screen locks due to inactivity. We have an open bug to reduce the load during this event.

Load average over time

UI Changes:

A new and improved lock screen has been implemented. This one centers the unlock button and lowers the UI to be closer to your thumbs.

Quick power and notification muting have been added to the dropdown. The Rotation toggle has also been replaced by a button.

The keyboard is also improving quickly. Check out what improvements have been made along those lines.

If you change the user agent string and enable pinch-to-zoom, Firefox is much more usable. From now on, pages will load their mobile versions, allowing you to interact with them as you’d expect on a smartphone. Adjusting the scaling can also help you get the most out of your mobile experience.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we, the people, stand up for our digital rights; where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post Librem 5 Software Updates March 2020 Part 1 appeared first on Purism.

24 March, 2020 03:47PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Building a Raspberry Pi cluster with MicroK8s

The tutorial for building a Raspberry Pi cluster with MicroK8s is here. This blog is not a tutorial. This blog aims to answer: why? Why would you build a Raspberry Pi cluster with MicroK8s? Here we go a little deeper to understand the hype around Kubernetes, the uses of cluster computing and the capabilities of MicroK8s.

Why build a Raspberry Pi MicroK8s cluster?

The simple answer is to offload computation resources from your main computer to a cute little stack of Raspberry Pis. The longer answer is to give yourself, and your computer, a break to do other things and save time. You can use the cluster for resource allocation or as a separate system. For example, if you are a photographer who takes a lot of high-resolution photos, you might find that uploading, stitching, or rendering those photos can prove tedious. Instead, you could offload each photo to a Raspberry Pi. This way you have multiple things working at the same time and can get on with writing a blog about your trip. 

Similarly, if you’re getting into the YouTube business you might find it takes a while to upload your videos. You have to keep a careful eye on it so that when it’s done you can go back straight away to watching cat videos. With a Raspberry Pi MicroK8s cluster, you can offload the upload. It might take a little longer if your cluster isn’t that big, but it frees up some time. If anyone reading this wants to write a tutorial for one or both of these examples, get in touch.
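The scheduling idea behind both examples can be sketched in a few lines of shell — a naive round-robin that deals a batch of files out to the boards. Node and file names here are made up, and a real setup would hand each assignment to Kubernetes rather than just echo it:

```shell
# Deal a batch of photos out to three hypothetical Pi nodes, round-robin.
nodes="pi-1 pi-2 pi-3"
photos="photo1.raw photo2.raw photo3.raw photo4.raw photo5.raw"
assignments=""
i=0
for photo in $photos; do
    # Pick entry (i mod 3) + 1 from the space-separated node list.
    node=$(echo "$nodes" | tr ' ' '\n' | sed -n "$(( i % 3 + 1 ))p")
    echo "$photo -> $node"
    assignments="$assignments $photo:$node"
    i=$(( i + 1 ))
done
```

Round-robin is the crudest possible policy; the point of Kubernetes, covered below, is that you declare the work and let the scheduler place it based on each node’s actual free CPU and memory.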

Why use a Raspberry Pi?

The Raspberry Pi is a series of small, single-board computers that took the world by storm. They are built and developed by a UK-based charity that aims to educate and lower the bar for people getting into technology.

“Our mission is to put the power of computing and digital making into the hands of people all over the world.” – The Raspberry Pi Foundation. 

What makes them great for this purpose in particular is that they are incredibly cheap for what you get and, depending on the model, they have a wide array of different features, hardware and capabilities. You can whip up your MicroK8s cluster, ready to go, or you can build it into home automation. Or use it as a server. A display. A weather monitor. Or anything.

Why not?

Kubernetes, open-source technology, Raspberry Pi, clustering, beer: all buzzwords that people like to talk about and that have become, or continue to be, very popular. Knowing what those words mean is one thing but understanding them and their implications is another. A cluster of Raspberry Pis running MicroK8s satisfies four out of five of those words. And if you’re old enough and so inclined, you could hit the fifth too. If you have the capacity and you have an interest, even if you don’t necessarily have a purpose, why not?


Containers have become the de facto way to run applications in a production environment. Being able to manage containers to maximise efficiency, minimise downtime and scale your operations saves tremendous amounts of time and money. Kubernetes, the tool for doing so, is a project Google started and open-sourced in 2014, based on a decade and a half of running production workloads at scale. If you’re here because of the Raspberry Pi in the title and you don’t have a production environment to manage, skip ahead to the MicroK8s section.

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing container workloads and services that facilitates both configuration and automation. If you run Kubernetes, you are running a Kubernetes cluster. As you’ll find in the tutorial, a cluster contains at minimum a worker node and a master node. The master is responsible for maintaining the desired state of the cluster, and the worker nodes run the applications. This principle is the core of Kubernetes. Being able to break jobs down and run them in containers across any group of machines — physical, virtual, or in the cloud — means the work and the containers aren’t tied to specific machines; they are “abstracted” across the cluster.

Problems it solves

A big question for any system is how it will react to change. In fact, the whole field of systems theory studies systems in context: looking at components of a system in the context of the bigger picture and their relationships to each other, not in isolation. Kubernetes enables this sense of context. Dealing with containers and workloads in isolation can slow any infrastructure down and create more work for more people. In theory, this could work just fine on a small scale, but when you get to production or your systems get more complex, you’re asking for trouble.

Containerised applications help isolate the resources of a host OS, leading to increased performance. They also help segregate resources and therefore optimise their utilisation. Kubernetes is the vessel with which all the containers are coordinated to run on a single host OS, which is a big improvement in resource consumption over VMs, where a guest OS is needed in every VM instance.

Why you should care

Kubernetes hasn’t become a buzzword for no reason. It has a myriad of features and benefits that make it so. Ultimately it comes down to automated efficiency. What happens if a container running a workload goes down? If traffic to a container is too high? What happens if you have so much work to do that allocating resources is a waste of time? And what happens if you are dealing with confidential data or you need to protect your workload? Well, Kubernetes solves all of those problems and more.

With Kubernetes, you can automate the creation, removal and management of resources for all your containers. You tell the platform how much CPU and RAM to allocate to each container. It handles unresponsive or failed containers automatically, re-allocates their workloads and won’t advertise them again until they’re back up and running. And Kubernetes lets you store and manage sensitive information such as passwords, SSH keys and auth tokens, so you can deploy and update under a layer of security without having to rebuild container images.
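Both of those knobs — per-container CPU/RAM allocation and secrets injected without baking them into images — live in the workload manifest. A hypothetical Deployment, with every name and value invented for illustration, might look like:

```shell
# Write a sketch Deployment showing resource allocation plus a password
# sourced from a (pre-existing, here imaginary) Secret named storage-creds.
cat > demo-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: photo-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: photo-worker
  template:
    metadata:
      labels:
        app: photo-worker
    spec:
      containers:
      - name: worker
        image: example/photo-worker:latest
        resources:
          requests:          # guaranteed share per container
            cpu: "250m"
            memory: "256Mi"
          limits:            # hard ceiling per container
            cpu: "500m"
            memory: "512Mi"
        env:
        - name: STORAGE_PASSWORD
          valueFrom:
            secretKeyRef:    # injected at runtime, not baked into the image
              name: storage-creds
              key: password
EOF
echo "manifest written: demo-deployment.yaml"
```

On a MicroK8s node this could then be applied with something like `microk8s.kubectl apply -f demo-deployment.yaml`, assuming the referenced Secret has already been created.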

In fact, the latest Kubernetes release candidate is available for testing in the Snap Store. If you want to try the latest and greatest developments, finish reading this blog and head over.


If Kubernetes (K8s) is as good as everyone says it is, then the next thing to try is to apply the same model elsewhere. Somewhere where resources are heavily constrained and the management of computational resources is a performance-limiting factor. How about, at the edge?

What is MicroK8s?

MicroK8s is the most minimal, fastest version of K8s out there that keeps the important features of a standard K8s cluster. It is optimised for the edge, with hundreds of thousands of lines of code taken out, to be exactly what you need for managing devices. It makes single-(master)-node Kubernetes clusters easy to deploy for any purpose. There’s no need to deploy a full-blown production-grade cluster when you’re prototyping or developing; you can test everything on MicroK8s before scaling.

Problems it solves

Typically, devices and device workloads are developed to work in silos. Devices at the edge are failure-prone, and managing each and every one in isolation is cumbersome. If a master node goes down, there’s no standard way to fix a device under the downed master. With MicroK8s you can centrally manage every device in the cluster using a simple UI. It takes the complexity out of updates and roll-backs so that developers or organisations can manage their device estate with ease.

Why you should care

Edge resource management is already a big problem. The internet of things (IoT), or the internet of everything (IoE), is more heavily constrained by Moore’s law than anything else. Moore’s law loosely says that every two years the speed and capability of computers will double while the price lessens. Where most industries can deal with this by growing or adding more to a system, the edge has tight constraints on its size and number of resources.

Some of the biggest buzzwords that describe this effect are embedded industrial devices and smart homes: the idea of connecting robots or devices in a home or a factory with the ability to communicate, learn and make decisions. That ability, without large amounts of computation or significant resource management, reaches a hard limit. With MicroK8s and its central-node methodology, more resources become available and secure, and the value of containers as a scalable way of running workloads is transplanted to the edge.

Cluster Computing

The idea is simple. You connect a series of computers (nodes) over a network so that they can share resources (in a cluster) and execute workloads more quickly, more efficiently or in parallel. There are three types of cluster computers: load-balancing clusters, for resource distribution; high-performance clusters, sometimes called supercomputers, which pool resources for computationally expensive workloads; and high-availability clusters, predominantly for redundancy or failure recovery. You can use MicroK8s for any of the above.

Why you should care

A cluster computer aims to make a group of computers appear as one system. In the beginning, cluster computing was almost exclusively a scaling solution for the enterprise. But since then, with the rise of the other technologies discussed here, the benefits have grown to include extreme availability, redundancy, resilience, load balancing, automated cross-platform management, parallel processing, resource sharing and capacity on demand. The trick is being able to manage said cluster efficiently. The trick is in K8s.

Further MicroK8s, cluster and Raspberry Pi reading

On the Raspberry Pi website, you will find the tutorial, Build an OctaPi. This is a comprehensive tutorial that uses nine (eight for the cluster, one as the client) Raspberry Pis as servers for much the same purposes as already described. It does not use any form of Kubernetes but you will be able to see the result is almost as cool. 

To reiterate the first paragraph: if you want to build a Raspberry Pi cluster using MicroK8s, there is a tutorial on the Ubuntu discourse, along with plenty of others to get you started or tinkering with other interesting technologies.

MicroK8s is built by the Kubernetes team at Canonical to bring production-grade Kubernetes to the developer community. It’s worth reading more about. Or if you don’t want to build a cluster but have an interest in MicroK8s, there are lots of great tutorials online.

Finally, Kubernetes is the biggest buzzword here. And unless you’re a wizard or work in the cloud business, you likely haven’t had much exposure to it. If this is the case, I can recommend two paths. One is going over to the K8s website where all the docs are; this is probably a lot more information than is really digestible right now, but it’s a good source of information. Or you can head over to the Ubuntu tutorial on getting MicroK8s doing things on your own machine, which will walk you through a bit more of the information discussed here.

This blog was originally written for Rhys’ personal blog as an information dump for himself.

24 March, 2020 10:30AM

Ubuntu Blog: How to launch IoT devices – Part 4: When to ask for help

(This blog post is part of a 5 part series, titled “How to launch IoT devices”. It will cover the key choices and concerns when turning bright IoT ideas into a product in the market. Sign up to the webinar on how to launch IoT devices to get the full story, all in one place.)

First part: Why does IoT take so long?

Second part: Select the right hardware and foundations

Third part: IoT devices and infrastructure

The best laid plans of mice and men often go awry. By following this series so far, you hopefully have an idea and a plan on getting that idea to market (part 1). Then you selected hardware that works with your software (part 2), as well as infrastructure that supports you along the way (part 3). Do you feel in a good position to launch your IoT product and get a piece of that trillion dollar pie?

Even the most successful products go off-course during development. In addition, when roadmaps, plans and budgets start to go wrong, it is easy to lose stakeholder support. This blog will explain how using specialists to outsource and co-create parts of a product will benefit your product in the short and long term.

Managing the cost of IoT devices

Creating IoT devices is labour-intensive – specifically in engineering time – and the cost base of IoT projects can be mostly variable. This increases the burn rate of a budget, and results or outcomes will likely only come in following years. Because of this, it is easy to lose senior stakeholder support, especially from stakeholders who already see IoT as risky.

In the short term, using a specialist enables your team to deliver outcomes, stay on a project plan/roadmap, and within budget. This is because specialists allow organisations to push through bottlenecks in a roadmap. When a budget cannot sustain high headcount, specialists allow product teams to smooth headcount to meet peak demands.

Specialists can also serve as a parallel work-stream during periods of high activity, which helps break through bottlenecks. This means product teams do not fall behind schedule due to early difficulties.

Finding a helping hand to nurture IoT devices. Photo by Neil Thomas on Unsplash

Managing risk in the short and long term

The risk of delivering parts of the project is removed from the product team and moved to the specialist team. This has two main benefits. First, specialists are in the best place to manage the risk: they have addressed and solved the problem multiple times and across a variety of customers. Second, specialists can be contracted to deliver a specific outcome, and so there is increased certainty over performance.

In the long term, there is a skills and knowledge exchange between specialists and product teams. This means future iterations of the product will benefit from the initial boost of skill that a product team gets.

Despite the clear benefits, specialists must be used strategically – to solve specific, actionable and difficult problems. Indiscriminate use across product development will lead to two problems. First, prolonged use will be expensive. Second, a team’s skills may be underdeveloped if a specialist performs too large a role, as key lessons are not learnt.


Specialists need to be used strategically – they can fix costs, help break through difficult parts of a project and provide a skills boost that benefits a team now and in the future. Contact us to discuss how Canonical engineers can boost your product, with both specific skill transfer and problem resolution.

Next time, in our final part to this series, we will introduce SMART START – a package of IoT products and solutions from Canonical, that take you through all the processes this series discussed. If you can’t wait until then, sign up to the webinar on How to launch IoT devices to get the full story, all in one place.

24 March, 2020 10:18AM

Ubuntu Blog: Ceph Octopus is now available

Ceph upstream released the first stable version of ‘Octopus’ today, and you can test it easily on Ubuntu with automatic upgrades to the final GA release. This version adds significant multi-site replication capabilities, important for large-scale redundancy and disaster recovery. Ceph v15.2.0 Octopus packages are built for Ubuntu 18.04 LTS, CentOS 7 and 8, Container image (based on CentOS 8) and Debian Buster.

What’s new in Ceph Octopus?

The Ceph Octopus release focuses on five different themes, which are multi-site usage, quality, performance, usability and ecosystem.


Scheduling of snapshots, snapshot pruning, and periodic snapshot automation and sync to a remote cluster for CephFS are all new features that enable Ceph multi-site replication. A new snapshot-based mirroring mode for RBD is also part of the Octopus release. These features help automate back-ups, save storage space and make it easy to share data and protect it from potential failures.


Simple health alerts are now raised for Ceph daemon crashes and can trigger email notifications, reducing the need to deploy an external cluster-monitoring infrastructure. A new “device” telemetry channel for hard disk and SSD health metrics reporting improves the device failure prediction model. Users can opt in again if the telemetry content is expanded.


Recovery tail latency has been improved, as object sync is now available during recovery by copying only the object’s delta. BlueStore, the object store back-end, has received several improvements and performance updates, including improved accounting for “omap” (key/value) object data by pool, improved cache memory management, and a reduced allocation unit size for SSD devices.


The orchestrator API now interfaces with a new orchestrator module, cephadm, which manages Ceph daemons on hosts – needing only ssh access and a working container runtime – via explicit management commands. The Ceph dashboard is now also integrated with the orchestrator. Additionally, users can now mute health alerts, temporarily or permanently.


ceph-csi now supports RWO and RWX access modes via RBD and CephFS. Also, integration with Rook provides a turn-key ceph-csi (container storage interface) by default. This integration includes RBD mirroring and RGW multi-site, and will eventually include CephFS mirroring too.

Octopus available for testing on Ubuntu

Try Ceph Octopus now on Ubuntu to combine the benefits of a proven storage technology solution with a secure and reliable operating system. You can install the Ceph Octopus Beta from the OpenStack Ussuri Ubuntu Cloud Archive for Ubuntu 18.04 LTS or using the development version of Ubuntu 20.04 LTS (Focal Fossa).

Canonical supports all Ceph releases as part of the Ubuntu Advantage for Infrastructure enterprise support offering. Ceph Octopus charms will be released alongside Canonical’s Openstack Ussuri release on May 20th 2020. These will allow users to automate Ceph Octopus deployments and day-2 operations, using Juju, the application modeling tool.

Learn more about Canonical Ceph storage offerings >

24 March, 2020 09:17AM

Stephen Michael Kellat: Systems Failure At Main Mission

I am still alive. In a prior post I had mentioned that things had been changing rather rapidly. With a daily press conference by the Governor of Ohio there has been one new decree after another relative to the COVID-19 situation.

A “stay at home” order takes effect at 0359 hours Coordinated Universal Time on Tuesday, March 24, 2020. This is not quite a “lockdown” but pretty much has me stuck. The State of Ohio has resources posted as to economic help in this situation but they’re also dealing with many multiple systems crashes as they try to react and some of their solutions are extremely bureaucratic.

Although I wanted to get started with doing daily livestream on Twitch there have been some logistical delays. I am also having to scrape together what equipment I do have at home to set up make-shift production capacity since our proper production facility is now inaccessible for the immediate future. There is an Amazon wish list of replacement items to try to fill in gaps if anybody feels generous though I am not sure when/if those would show up in the current circumstances. That’s also why I’m having to encourage folks to either buy the Kindle version or buy the EPUB version of the novella since the print version is possibly not going to be available any time soon.

I have further testing of packages to do to see what I can make break. OBS Studio certainly does make the fan on my laptop go into high-speed action. Life here at Main Mission is getting stranger by the day. With debate ensuing about the economic carnage leading to possible economic disaster, I can only note that I at least got this up shortly before we entered lockdown.

The stay-at-home order gets reassessed on April 6th. It technically has no expiration date to it currently so it can last legally until the current governor leaves office in 2023. I do hope we make progress in getting this mess resolved sooner rather than later.

24 March, 2020 03:03AM

March 23, 2020

The Fridge: Ubuntu Weekly Newsletter Issue 623

Welcome to the Ubuntu Weekly Newsletter, Issue 623 for the week of March 15 – 21, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

23 March, 2020 10:22PM

hackergotchi for Tails


Tails 4.4.1 is out

This release is an emergency release to fix security vulnerabilities in Tor Browser and Tor.

You should upgrade as soon as possible.

Included software

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 4.4.1

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from 4.2, 4.2.2, 4.3, and 4.4 to 4.4.1.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

All the data on this USB stick will be lost.

To download only

If you don't need installation or upgrade instructions, you can download Tails 4.4.1 directly:

What's coming up?

Tails 4.5 is scheduled for April 7.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

23 March, 2020 08:00PM

hackergotchi for Whonix


Fixing the Desktop Linux Security Model

@madaidan wrote:

Fixing the Desktop Linux Security Model

Whonix is a security, privacy and anonymity focused Linux distribution. Recently, we’ve been focusing a lot on important security hardening measures and fixing architectural security issues within the desktop Linux security model. Any Linux distribution can be affected by these issues.

The Issues

There is a common assumption that Linux is a very secure operating system. This is very far from the truth for various different reasons. Security guides aren’t a solution either.

There is no strong sandboxing in the standard desktop. This means all applications have access to each other’s data and can snoop on your personal information. Most programs are written in memory-unsafe languages such as C or C++, which have been the cause of the majority of discovered security vulnerabilities, and modern exploit mitigations such as Control-Flow Integrity are not widely used.

The kernel is also very lacking in security. It is a monolithic kernel written entirely in a memory-unsafe language and has hundreds of bugs, many of them security vulnerabilities, found each month. In fact, there are so many bugs being found in the kernel that developers can’t keep up, which results in many of them staying unfixed for a long time. The kernel is also decades behind in exploit mitigations, and many kernel developers simply do not care enough.

On ordinary desktops, a compromised non-root user account which is a member of the sudo group is almost equal to a full root compromise, as there are too many ways for an attacker to retrieve the sudo password. Usually, the standard user is part of the sudo group, which makes this a massive issue and makes a sudo password almost security theater. For example, the attacker can exploit the plethora of keylogging opportunities such as X’s lack of GUI isolation, the many infoleaks in /proc, use LD_PRELOAD to hook into every process and so much more. Even if we mitigate every single way to log keystrokes, the attacker can just set up their own fake sudo program to grab the user password.
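The fake-sudo trick relies on nothing more than $PATH resolution order: a fake binary in an attacker-controlled directory that sorts first shadows the real one. A minimal illustration in Python (the temporary directory and fake script are created just for this demo):

```python
import os
import shutil
import stat
import tempfile

# Create an "attacker-controlled" directory containing a fake sudo.
d = tempfile.mkdtemp()
fake = os.path.join(d, "sudo")
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho 'Password:'  # would capture what the user types\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR)

# Prepending the directory to PATH (e.g. via a ~/.bashrc modification)
# makes the fake binary the one command lookup resolves to.
hijacked_path = d + os.pathsep + os.environ.get("PATH", "")
resolved = shutil.which("sudo", path=hijacked_path)
assert resolved == fake
```

The same shadowing works for any command the user types, which is why mitigating keyloggers alone is not enough.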

Due to this, the Whonix project has been investing a lot of time into developing proper security measures to help fix these issues.

The Kernel

The kernel is the core of the operating system and has many security issues as discussed above. The following are details about our efforts to improve kernel security.


hardened-kernel consists of hardened configurations and hardening patches for the Linux kernel. There are two kernel configs, hardened-vm-kernel and hardened-host-kernel. hardened-vm-kernel is designed specifically for virtual machines (VMs) and hardened-host-kernel is designed for hosts.

Both configs try to have as many hardening options enabled as possible and have little attack surface. hardened-vm-kernel only has support for VMs and all other hardware options are disabled to reduce attack surface and compile time.

During installation of hardened-vm-kernel, it compiles the kernel on your own machine and does not use a pre-compiled kernel. This ensures the kernel symbols in the compiled image are completely unique, which makes kernel exploits far harder. This is possible because hardened-vm-kernel has only VM config options enabled, which drastically reduces compile time.

A development goal is that during installation of hardened-host-kernel, the kernel is not compiled on your machine but uses a pre-compiled kernel. This is because the host kernel needs most hardware options enabled to support most devices which makes compilation take a very long time.

The VM kernel is more secure than the host kernel due to having less attack surface and not being pre-compiled, but if you want more security for the host, it is recommended to edit the hardened host config, enable only the hardware options you need, and compile the kernel yourself. This makes the security of the host and VM kernels comparable.

These kernels use the linux-hardened kernel patch for further hardening. The advantages of this patch include many ASLR improvements, more read-only kernel structures, writable function pointer detection, stricter sysctl configurations, more sanity checks, slab canaries and a lot more.

We are also contributing to linux-hardened and adding more hardening features. Our contributions include disabling TCP simultaneous connect, restricting module auto-loading to CAP_SYS_MODULE, Trusted Path Execution (TPE), restricting sysfs access to root, restricting perf_event_open() further to deny even root from using it and many more in the future.


security-misc (wiki) enables miscellaneous security features for better kernel self-protection, attack surface reduction, entropy collection improvements and more. It doesn’t just harden the kernel but also various other parts of the operating system. For example, it disables SUID binaries (experimental, soonish default) and locks down root user access to make root compromises far harder. It also uses stricter mount options for various filesystems and stricter file permissions.

Linux Kernel Runtime Guard (LKRG)

Linux Kernel Runtime Guard (LKRG) is a kernel module which performs runtime integrity checking of the kernel and detection of kernel exploits. It can kill entire classes of kernel exploits and while LKRG is bypassable by design, such bypasses tend to require more complicated and/or less reliable exploits.


tirdad is a kernel module that aims to prevent TCP Initial Sequence Number (ISN) based information leaks by randomizing the TCP ISNs. These issues can be potentially catastrophic for anonymity and long-running cryptographic operations.
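Randomized ISNs are typically derived in the style of RFC 6528: a keyed hash of the connection 4-tuple plus a clock component, so sequence numbers don’t form one predictable global counter that leaks how many connections a host has made. A rough sketch of that general scheme (an illustration only, not tirdad’s actual code):

```python
import hashlib
import os
import time

SECRET = os.urandom(16)  # per-boot secret key

def isn(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """32-bit initial sequence number: keyed hash of the connection
    4-tuple plus a monotonically increasing time component."""
    h = hashlib.sha256()
    h.update(f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode())
    h.update(SECRET)
    base = int.from_bytes(h.digest()[:4], "big")
    clock = int(time.monotonic() * 250_000)  # coarse time-based offset
    return (base + clock) % 2**32
```

Without the secret key, an observer cannot relate the ISNs of different connections to each other, which is the property the kernel-side fix provides.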

User space

User space is the code that runs outside of the kernel such as your usual applications.


apparmor-profile-everything is an AppArmor policy to confine all user space processes, including even the init. This allows us to implement strict mandatory access control restrictions on all processes and have fine-grained controls over what they can access.

This is implemented by an initramfs hook which loads an AppArmor profile for systemd, the init.

apparmor-profile-everything gives us many advantages by limiting what an attacker can do if they compromise parts of the system. The benefits are not just for user space though. We can also protect the kernel to a great degree with this by blocking access to dangerous capabilities that allow kernel modification such as CAP_SYS_RAWIO, having fine-grained restrictions on kernel interfaces known for information leaks such as /proc or /sys and so much more. apparmor-profile-everything even allows us to deny access to the CAP_NET_ADMIN capability which prevents even the root user from leaking the IP address on the Whonix Gateway (it would now require a kernel compromise).

With apparmor-profile-everything, the only reasonable way to break out of the restrictions is by attacking the kernel which we make much harder as documented above. The root user cannot disable the protections at runtime as we deny access to the required capabilities and files.


sandbox-app-launcher is an app launcher that starts all user applications in a restrictive sandbox. It creates a separate user for each application ensuring they cannot access each other’s data, runs the app inside a bubblewrap sandbox and confines the app with a strict AppArmor profile.

Bubblewrap allows us to make use of kernel sandboxing technologies called namespaces and seccomp. Namespaces allow us to isolate certain system resources. All apps are run in mount, PID, cgroup and UTS namespaces. Fine-grained filesystem restrictions are implemented via mount namespaces and AppArmor. Seccomp blocks certain syscalls which can greatly reduce kernel attack surface among other things. All apps by default use a seccomp blacklist to block dangerous and unused syscalls. Seccomp isn’t just used for bluntly blocking syscalls either. It’s also used to block unused socket families by inspection of the socket() syscall, dangerous ioctls such as TIOCSTI (which can be used in sandbox escapes), TIOCSETD (this can increase kernel attack surface by loading vulnerable line disciplines) and SIOCGIFHWADDR (this can retrieve the user’s MAC address which is a privacy risk) by inspection of the ioctl() syscall and even strict W^X protections by inspection of the mmap(), mprotect() and shmat() syscalls. AppArmor is used to apply W^X to the filesystem and prevent an attacker from executing arbitrary code. Apparmor also gives fine-grained controls over IPC signals, dbus, UNIX sockets, ptrace and more.

It doesn’t just stop there. sandbox-app-launcher implements an Android-like permissions system which allows you to revoke certain permissions such as network access for any application. During installation of new programs, you are asked which permissions you wish to grant the application.


hardened_malloc is a hardened memory allocator created by security researcher, Daniel Micay. It gives substantial protection from memory corruption vulnerabilities. It is heavily based on the OpenBSD malloc design but with numerous improvements. Daniel Micay is a respected security researcher who has put a lot of work into security and is the creator of GrapheneOS (formerly CopperheadOS), linux-hardened and more.

Whonix installs hardened_malloc by default but it is not used much yet. In the future, we may preload it globally and use it for every application.


VirusForget deactivates malware after a reboot from a non-root compromise by restoring sensitive files. Without this, it’s possible for malware to easily create a persistent, system-wide rootkit by, for example, modifying LD_PRELOAD in ~/.bashrc to hook into all user applications.
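The core restore-on-reboot idea can be sketched as a hash-compare-and-restore pass over trusted pristine copies (file paths and function names below are invented for illustration; this is not VirusForget’s actual code):

```python
import hashlib
import pathlib
import shutil

def sha256(path: pathlib.Path) -> str:
    """Content hash of a file, used to detect tampering."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_if_modified(target: pathlib.Path, pristine: pathlib.Path) -> bool:
    """Overwrite `target` with the trusted pristine copy if its
    contents changed. Returns True when a restore happened."""
    if sha256(target) == sha256(pristine):
        return False
    shutil.copyfile(pristine, target)
    return True
```

Running such a pass over files like ~/.bashrc at boot would undo an injected LD_PRELOAD line before any user application executes it.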

Verified Boot

Verified boot ensures the integrity of the boot chain by verifying the bootloader, kernel and initrd with a cryptographic signature. It can be extended further and verify the entire base OS, ensuring all executed code has not been tampered with but this extension is unlikely to be implemented due to the layout of traditional Linux distributions.

User Freedom

All of our security features can be reverted by the user if they prefer freedom over security by choosing the necessary boot modes. This is not a security risk and attackers cannot abuse this as it can only be done with local access.


There is still a lot more work to be done and we need your help. Contributions would be greatly appreciated. The implementation of sandbox-app-launcher, packaging of hardened-kernel and verified boot are some of our main issues. Qubes support for hardened-kernel, apparmor-profile-everything, LKRG, tirdad and security-misc’s boot parameters are missing. Also see the list of open tasks.

Edit by Patrick:

Posts: 13

Participants: 2

Read full topic

23 March, 2020 04:52PM by @madaidan

hackergotchi for Maemo developers

Maemo developers

Are we dead yet?

I am quite frustrated with corona graphs in the news, since most reporters seem to have skipped math classes back then. For instance, just plotting the number of confirmed infections at the respective dates does not tell you anything due to the different time points of outbreak. So let’s see whether I can do better:

With the site above, I tried to improve on a few things:

  • The charts are live: they update themselves each time you load the site.
  • The curves are normalized by the time-point of outbreak so you can compare the course in different countries.
  • You can select the countries that you want to compare.
  • Different metrics are computed that allow comparing the corona countermeasures and impact across countries with different population size.

23 March, 2020 11:40AM by Pavel Rojtberg

hackergotchi for ArcheOS


ATOR Project Manager: presentation online

Hello everybody,
despite the difficulties of these days (due to COVID-19), we try to go on with normal life, and trying to keep ATOR up to date is one of the few tasks we can accomplish from home.
This short post is to report the news that Stefano Campus (one of the organizers of the Italian FOSS4G conference 2020) just notified me: all the presentations have been released freely on the internet, within the platform Zenodo. The material is published with open licenses and has a specific DOI.
Here you can see all the presentations of the Italian FOSS4G 2020 (which was held in Turin) and here is a direct link to our slides, about the prototype of a Virtual Vocal Assistant for archaeology we are working on, thanks to our friend Andres Reyes.

Have a nice day ("It can't rain all the time")!

23 March, 2020 08:44AM by Luca Bezzi

hackergotchi for VyOS


VyOS Project March 2020 Update

As usual, we’ve been busy working on the 1.2.x LTS release and on the future 1.3.0 version. One important update is that the VyOS 1.2.5-epa2 (early production access) image is now available to subscribers, and everyone can build it from the Crux branch. It includes the latest stable release of the FreeRangeRouting protocol stack and multiple bug fixes. We are running that image on our own routers, and if no more serious issues are discovered, we’ll make the final 1.2.5 release and keep working on 1.2.6.

23 March, 2020 04:45AM by Daniil Baturin

March 22, 2020

hackergotchi for Freedombone


Relaying and Hashtag Federation

I just saw the talk about hashtag federation in the fediverse, and since I haven’t written anything on this topic, here are my current thoughts.

I think relaying of posts, in the style of an email open relay, is probably a bad idea. It's probably a bad idea in the fediverse for the same reasons that it's usually a bad idea for email. The most obvious issue is that it easily enables spam. For example, suppose there was a hashtag for a currently urgent event. A spammer could then just flood that hashtag with ads, or a political adversary could post random garbage with the hashtag attached in order to flood out the signal with noise and make it less likely that people will pay attention to that topic.

The other issue is post integrity. Usually this is ensured by an HTTP signature, but if a post is relayed then how do we know that the post stored on the relay is the same as the original? An evil relay could alter public posts to deliberately create flame wars and instance blocking.

So I think relaying of posts and hashtags could create more problems than it solves. In the scenario mentioned in the talk you may still get to know what’s happening in a protest, because people you follow will be boosting posts with the hashtag. Boosting becomes a decentralized way of distributing hashtags between instances, without breaking the integrity checks via signatures, and it directly follows the chain of trust from one person to another. In the relay model you need to somehow trust that the relay is not evil, and it becomes too easy for bad actors to try to influence what people are thinking about a topic.

22 March, 2020 10:45AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Security 101: Virtual Private Networks (VPNs)

I’m trying something new – a “Security 101” series. I hope to make these topics readable for those with no security background. I’m going to pick topics that are either related to my other posts (such as foundational knowledge) or just things that I think are relevant or misunderstood.

Today, I want to cover Virtual Private Networks, commonly known as VPNs. First I want to talk about what they are and how they work, then about commercial VPN providers, and finally about common misconceptions.

VPN Basics

At the most basic level, a VPN is intended to provide a service that is equivalent to having a private network connection, such as a leased fiber, between two endpoints. The goal is to provide confidentiality and integrity for the traffic travelling between those endpoints, which is usually accomplished by cryptography (encryption).

The traffic tunneled by VPNs can operate at either Layer 2 (sometimes referred to as “bridging”) or Layer 3 (sometimes referred to as “routing”) of the OSI model. Layer 2 VPNs provide a more seamless experience between the two endpoints (e.g., device autodiscovery) but are less common and not supported on all platforms. Most VPN protocols operate at the application layer, but IPsec is an extension to IPv4, so it operates at Layer 3.

The most common VPN implementations you’re likely to run into are IPsec, OpenVPN, or Wireguard. I’ll cover these in my examples, as they’re the bulk of what individuals might be using for personal VPNs as well as the most common option for enterprise VPN. Other relatively common implementations are Cisco AnyConnect (and the related OpenConnect), L2TP, OpenSSH’s VPN implementation, and other ad-hoc (often TLS-based) protocols.

A Word on Routing

In order to understand how VPNs work, it’s useful to understand how routing works. Now, this isn’t an in-depth dive – there are entire books devoted to the topic – but it should cover the basics. I will only consider the endpoint case with typical routing use cases, and use IPv4 in all my examples, but the same core elements hold for IPv6.

IP addresses are the sole way the source and destination host for a packet are identified. Hostnames are not involved at all; that’s the job of DNS. Additionally, individual sub-networks (subnets) are composed of an IP prefix and the “subnet mask”, which specifies how many leading bits of the IP refer to the network versus the individual host. For example, indicates that the host is host number 10 in the subnet 192.168.1 (since the first 3 octets are a total of 24 bits long).

% ipcalc
Address:       11000000.10101000.00000001. 00001010
Netmask:   = 24   11111111.11111111.11111111. 00000000
Network:    11000000.10101000.00000001. 00000000

When your computer wants to send a packet to another computer, it has to figure out how to do so. If the two machines are on the same sub network, this is easy – it can be sent directly on the appropriate interface. So if the host with the IP on its wireless network interface wants to send a packet to another host in, it will just send it directly on that interface.

If, however, it wants to send a packet to a host on a different network, it will need to send it via a router (a device that routes packets from one network to another). Most often, this will be via the “default route”, sometimes represented as This is the route used when the packet doesn’t match any other route. In between the extremes of the same network and the default route can be any number of other routes. When routing an outbound packet, the kernel picks the most specific route.

VPN Routing

Most typically, a VPN will be configured to route all traffic (i.e., the default) via the VPN server. This is often done by either a higher-priority routing metric or a more specific route. The more specific route may be done via two routes, one each for the bottom and top half of the IPv4 space ( and

Of course, you need to make sure you can still reach the VPN server – routing traffic to the VPN server via the VPN won’t work. (No VPN-ception here!) So most VPN software will add a route specifically for your VPN server that goes via the default route outside the VPN (i.e., your local router).

For example, when connected via Wireguard, I have the following routing tables:

% ip route
default dev wg0 table 51820 scope link
default via dev wlp3s0 proto dhcp metric 600
dev wg0 proto kernel scope link src
dev wlp3s0 proto kernel scope link src metric 600

The address and subnet on wg0 are those of my VPN, and the src address on wlp3s0 is my local IP address on my local network. The routing table provides for a default via wg0, my Wireguard interface. There’s a routing rule that prevents Wireguard traffic from itself going over that route, so it falls to the next route, which uses my home router (running pfSense) to get to the VPN server.

The VPN only provides its confidentiality and integrity for packets that travel via its route (and so go within the tunnel). The routing table is responsible for selecting whether a packet will go via the VPN tunnel or via the normal (e.g., non-encrypted) network interface.

Just for fun, I dropped my Wireguard VPN connection and switched to an OpenVPN connection to the same server. Here’s what the routing table looks like then (tun0 is the VPN interface):

% ip route
default via dev tun0 proto static metric 50
default via dev wlp3s0 proto dhcp metric 600
dev tun0 proto kernel scope link src metric 50
via dev tun0 proto static metric 50
via dev wlp3s0 proto static metric 600
dev wlp3s0 proto kernel scope link src metric 600
dev wlp3s0 proto static scope link metric 600

This is a little bit more complicated, but you’ll still note the two default routes. In this case, instead of using a routing rule, OpenVPN sets the metric of the VPN route to a lower value. You can think of a metric as being a cost to a route: if multiple routes are equally specific, then the lowest metric (cost) is the one selected by the kernel for routing the packet.
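The tie-breaking order – most specific prefix first, then lowest metric – can be written down as a toy sketch (the route entries are invented for illustration):

```python
# Each route: (prefix length, metric/cost, outgoing interface).
routes = [
    (0, 600, "wlp3s0"),  # default route via the physical interface
    (0, 50, "tun0"),     # default route via the VPN, lower metric
]

def pick(routes):
    """Most specific prefix wins; among equally specific routes,
    the lowest metric (cost) wins."""
    return min(routes, key=lambda r: (-r[0], r[1]))[2]
```

With the two equally specific default routes above, pick() selects tun0, the VPN interface with metric 50; a more specific route would beat either regardless of metric.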

Otherwise, the routing table is very similar, but you’ll also notice that the route specifically for the VPN server is routed via my local gateway. This is how OpenVPN ensures that its packets (those encrypted and signed by the VPN client) are not themselves routed over the VPN by the kernel.

Note that users of Docker or virtual machines are likely to see a number of additional routes going over the virtual interfaces to containers/VMs.

Using VPNs for “Privacy” vs “Security”

There are many reasons for using a VPN, but for many people they boil down to being described as “Privacy” or “Security”. The single most important thing to remember is that the VPN offers no protection to data in transit between the VPN server and the remote server. When data reaches the remote server, it looks exactly the same as if it had been sent directly.

Some VPNs are just used to access private resources on the remote network (e.g., corporate VPNs), but a lot of VPN usage these days is routing all traffic, including internet traffic, over the VPN connection. I’ll mostly consider those scenarios below.

When talking about what a VPN gets you, you also need to consider your “threat model”. Specifically, who is your adversary and what do you want to prevent them from being able to do? Some common examples of concerns people have and where a VPN can actually benefit you include:

  • (Privacy) Prevent their ISP from being able to market their browsing data
  • (Security) Prevent man-in-the-middle attacks on public/shared WiFi
  • (Privacy) Prevent tracking by “changing” your IP address

Some scenarios that people want to achieve, but a VPN is ineffective for, include:

  • (Privacy) Preventing “anyone” from being able to see what sites you’re visiting
  • (Privacy) Prevent network-wide adversaries (e.g., governments) from tracking your browsing activity
  • (Privacy) Prevent all tracking of your browsing

Commercial VPNs

Commercial VPN providers have the advantage of mixing all of your traffic with that of their other customers: typically, the connections of a couple of dozen customers or more come out from the same IP address. They also come with no administration overhead, and often have servers in a variety of locations, which can be useful if you’d like to access geo-restricted content. (Please comply with the appropriate ToS, however.)

On the flip side, using a commercial VPN server has just moved the endpoint of your plaintext traffic to another point, so if privacy is your main concern, you’d better trust your VPN provider more than you trust your ISP.

If you’re after anonymity online, it’s important to consider who you’re seeking anonymity from. If you’re only concerned about advertisers, website operators, etc., then a commercial VPN helps provide a pseudonymous browsing profile compared to coming directly from your ISP-provided connection.

Rolling Your Own

Rolling your own gives you the ultimate in control of your VPN server, but does require some technical know-how. I really like the approach of using Trail of Bits’ Algo on DigitalOcean for a fast custom VPN server. When rolling your own, you’re not competing with others for bandwidth, and you can choose a hosting provider in the location you want to get nearly any egress point you want.

Alternatively, you can set up either OpenVPN or Wireguard yourself. While Wireguard is considered cleaner and uses more modern cryptography, OpenVPN takes care of a few things (like IP address assignment) that Wireguard does not. Both are well-documented at this point and have clients available for a variety of platforms.

Note that a private VPN generally does not have the advantage of mixing your traffic with that of others – you’re essentially moving your traffic from one place to another, but it’s still your traffic.

VPN Misconceptions

When people are new to the use of a VPN, there seems to be a lot of misconceptions about how they’re supposed to work and their properties.

VPNs Change Your IP Address

VPNs do not change the public IP address of your computer. While they do usually assign a new private IP for the tunnel interface, this IP is one that will never appear on the internet, so is not of concern to most users. What it does do is route your traffic via the tunnel so it emerges onto the public internet from another IP address (belonging to your VPN server).

VPN “Leaks”

Generally speaking, when someone refers to a VPN leak, they’re referring to the ability of a remote server to identify the public IP to which the endpoint is directly attached. For example, a server seeing the ISP-assigned IP address of your computer as the source of incoming packets can be seen as a “leak”.

These are not, generally, the fault of the VPN itself. They are usually caused by the routing rules your computer is using to determine how to send packets to their destination. You can test the routing rules with a command like:

% ip route get dev wg0 table 51820 src uid 1000

You can see that, in order to reach (Google’s DNS server), I’m routing packets via the wg0 interface – so out via the VPN. On the other hand, if I check something on my local network, you can see it will go directly:

% ip route get dev wlp3s0 src uid 1000

If you don’t see the VPN interface when you run ip route get <destination>, you’ll end up with traffic not going via the VPN, and so going directly to the destination server. Using a service that reports the source IP it sees, I’ll examine the two scenarios:

% host has address
% ip route get dev wgnu table 51820 src uid 1000
% curl -4
... shutdown VPN ...
% ip route get via dev wlp3s0 src uid 1000
% curl -4

Note that my real IP is exposed to the service when the route is not destined to go via the VPN. If you see a route via your local router to an IP, then that traffic is not going over a VPN client running on your local host.

Note that routing DNS outside the VPN (e.g., to your local DNS server) provides a trivial IP address leak. By merely requesting a DNS lookup to a unique hostname for your connection, the server can force an “IP leak” via DNS. There are other things that can potentially be seen as an “IP leak,” like WebRTC.

VPN Killswitches

A VPN “killswitch” is a common option in 3rd-party clients. It endeavors to block any traffic not going through the VPN, or to block all traffic when the VPN connection is not active. This is not a core property of VPNs, but may be a property of a particular VPN client. (For example, this is not built in to the official OpenVPN or Wireguard clients, nor the IPsec implementations for either Windows or Linux.)

That being said, you could implement your own protection. For example, you could block all traffic on your physical interface except that going between your computer and the VPN server. Using iptables, with your VPN server reachable on UDP port 51820, ACCEPT rules like these (the server address is a placeholder), combined with a default DROP policy on the OUTPUT chain, will block all other traffic from going out on any interface except interfaces beginning with wg:

iptables -P OUTPUT DROP
iptables -A OUTPUT -p udp --dport 51820 -d <vpn-server-ip> -j ACCEPT
iptables -A OUTPUT -o wg+ -j ACCEPT

VPNs Protect Against Nation-State Adversaries

There’s a lot of discussion on VPN and privacy forums about selecting “no logging” VPNs or VPN providers outside the “Five Eyes” (and the expanded selections of allied nations). To me, this indicates that these individuals are concerned about Nation-State level adversaries (i.e., NSA, GCHQ, etc.). First of all, consider whether you need that level of protection – maybe you’re doing something you shouldn’t be! However, I can understand the desire for privacy and the uneasy feeling of thinking someone is reading your conversations.

No single VPN will protect you against a nation-state adversary or most well-resourced adversaries. Almost all VPN providers receive the encrypted traffic and route the plaintext traffic back out via the same interface. In such a scenario, any adversary that can see all of the traffic there can correlate the traffic coming into and out of the VPN provider.

If you need effective protection against such an adversary, you’re best to look at something like Tor.

VPN Routers

One approach to avoiding local VPN configuration issues is to use a separate router that puts all of the clients connected to it through the VPN. This has several advantages, including easier implementation of a killswitch, support for clients that may not support VPN applications (e.g., smart devices, e-Readers, etc.). If configured correctly, it can ensure no leaks (e.g., by only routing from its “LAN” side to the “VPN” side, and never from “LAN” to “WAN”).

I do this when travelling with a gl.inet AR750-S “Slate”. The stock firmware is based on OpenWRT, so you can choose to run a fully custom OpenWRT build (like I do) or the default firmware, which supports both Wireguard and OpenVPN. (Note that, with a low-power MIPS CPU, throughput will not match what your computer’s CPU can achieve; however, it will still best the WiFi at the hotel or airport.)

VPNs are not a Panacea

Many people look for a VPN as an instant solution for privacy, security, or anonymity. Unfortunately, it’s not that simple. Understanding how VPNs work, how IP addresses work, how routing works, and what your threat model is will help you make a more informed decision. Just asking “is this secure” or “will I be anonymous” is not enough without considering the lengths your adversary is willing to go to.

Got a request for a Security 101 topic? Hit me up on Twitter.

22 March, 2020 07:00AM

March 21, 2020

hackergotchi for Whonix


Qubes-Whonix 15 TemplateVMs (4.0.1-202003070901) - Point Release!

@Patrick wrote:



Contains all enhancements that were recently released


In-place release upgrade is possible using the Whonix testers repository.

Posts: 1

Participants: 1

Read full topic

21 March, 2020 02:37AM by @Patrick

March 20, 2020

hackergotchi for Purism PureOS

Purism PureOS

Librem Hardware and the Intel CSME Vulnerability

Whenever a security vulnerability comes out, one of the first questions that comes to many people’s minds is: am I affected? The last couple of years in particular have seen a lot of hardware-based vulnerabilities in Intel processors, and in those cases it’s generally a matter of looking at the affected list of hardware and comparing it against your own hardware.

More recently a vulnerability (CVE-2019-0090) was announced in the Intel CSME that can allow an attacker with local access to potentially extract secret Intel hardware signing keys from a system. There are a number of different analyses out there on this vulnerability from the very dry CVE report itself to “sky is falling” reports that contain a lot more hype. If you want more technical details on the vulnerability itself, I’ve found this report to have a good balance of measured technical information on impact without the hype.

But is Librem Hardware Affected?

We’ve gotten a lot of questions recently about whether Librem hardware is affected by this vulnerability, given that the CVE includes a wide range of hardware (including chips in our own systems). After looking into the issue we feel confident in answering that Librem Intel-based computers (our laptops, servers, and the new Librem Mini) are not affected by CVE-2019-0090, due to how we use (and don’t use) the ME. Beyond that, PureBoot users will have extra protection including the ability to detect someone attempting to exploit this vulnerability.

The reason our hardware isn’t vulnerable to this ME vulnerability is similar to why we haven’t been vulnerable to past ME exploits like a recent AMT vulnerability. For starters, we disable and neutralize the ME to remove all but the most essential modules, which for past exploits (such as AMT vulnerabilities) has meant there was nothing to exploit. For CVE-2019-0090, the attack is against a core and fundamental module we do include; however, because we do not use Intel hardware signing keys for root of trust at all, it targets features we don’t use.

Extra Protection with PureBoot

Since this attack exploits features we don’t use, customers who use our default coreboot firmware don’t have anything to worry about, and customers who use our PureBoot firmware have an extra level of protection including detecting the exploit. This is because the contents of the ME is part of the PureBoot firmware image and is among the things that PureBoot tests for tampering. Someone who could modify the ME with an exploit would trigger a PureBoot alert the next time the user turns on the computer.


It’s been encouraging to see how many of our customers are informed on and concerned with the latest security issues out there. I hope this brief explanation helps you understand why our unique approach to security also often offers us special immunity to common security issues.

The post Librem Hardware and the Intel CSME Vulnerability appeared first on Purism.

20 March, 2020 09:06AM by Kyle Rankin

March 19, 2020

hackergotchi for Freedombone


Notes during Pandemic

I thought for a long time about whether I should write anything about the pandemic of 2020 here on this blog. It's such a serious situation that in future anything I might write now could appear to be hopelessly naive or disrespectful. Many mistakes have been made, and I think that's primarily because nobody has experience of this type of event within living memory.

Something which the pandemic has already shown is who is really important in the economy. It's not the highly paid people, like bankers or tenured professors. The people who deliver the most value to the economy, and who keep it going, are the supermarket and warehouse workers, the delivery drivers, care workers, farmers, cleaners, bakers, nurses and refuse collectors. The people who are typically on minimum wage or zero-hours contracts. There's a paradox here: the people whom society values least are actually the ones most essential to its continued functioning.

The crisis will eventually pass, and when it does I hope that the people who got us through it are appropriately recognized. I also hope that grief and anger can be effectively transformed into lasting change. We must not return to the old world which existed before the crisis. In the new economy nobody must be denied medical care or food or housing, and the well-being of everyone, rather than private gain for a few, must be the main priority.

19 March, 2020 10:32PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical Technical Support for Ubuntu and open source during the COVID-19 pandemic

All technical support services are currently at 100% SLA. We have updated our operational plan to accommodate expected sick leave among colleagues and their families as COVID-19 moves through our communities.

We are committed to 24/7 technical support, configuration advice and online access to your accounts and support tickets.

24/7 Technical Support and Managed Services Preparedness

Our support teams have the most comprehensive redundancy in the industry, and are designed exactly for situations like this. Our teams are disaster-ready, dedicated to doing one job: responding to your support needs 24/7.

Here’s how we prepared to keep you covered during this crisis: 

  • All engineers work from home and are distributed globally. This is not a new position for Canonical; our Support Teams have been remote workers since the company was founded. We are currently based in over 40 countries and are not susceptible to impacts to any one country.
  • Canonical Technical Support has a geographic staffing model, which allows for redundancy in skills outside any one country or region. Each region is prepared to back up other regions as necessary.
  • All of our support systems (phone & online) are redundant, hosted in the 3 major geographical areas: Americas, EMEA & APAC.

Engagement Project Management Office

Canonical’s Engagement Project Management team is in a unique position in the industry to provide continuous service to our customers. The EPMO team has been remote-based since its inception and is prepared for contingencies and crises:

  • The EPMO team has the tools, procedures and experience to continue to drive and deliver projects from start to completion without disruption in a remote working environment. 
  • We are distributed across 13 countries in the Americas, EMEA and APAC.
  • Our distribution and skill set will allow for maximum contingency and backup for any need that may arise around the globe.

We will be providing updates on our COVID-19 preparedness and information as things evolve on this blog post.

Should you have any questions or concerns, feel free to contact our Technical Support Team via one of the following methods:

United Kingdom
Local: +44 203 656 5270
Toll free: +44 800 0588703
United States
Local: +1 737 2040281
Toll free: +1 888 986 1311
Germany
Local: +49 61512746800
Toll free: +49 800 1838220
Spain
Local: +34 932201120
Toll free: +34 900 833871
France
Local: +33 184889310
Toll free: +33 800913911
Taiwan
Local: +886 255924768
Toll free: +886 801127797
Mexico
Local: +52 5541636670
Toll free: +52 1800 0623718
China
Local: +86 1057897356
Toll free: +86 400 842 3255
South Korea
Local: +82 318108750
Toll free: +82 798142033848
Japan
Local: +81-3-4577-7725
Toll free: 0066-33-813640

We are all together in this time of crisis, and we take our job of supporting Ubuntu, all of our products and customers very seriously. Thank you for entrusting us. 

Pete Graner

Vice President, Global Support Services

19 March, 2020 08:41PM

Ubuntu Blog: Update: Canonical managed services and Ubuntu support during COVID-19 outbreak


Canonical’s fully managed OpenStack, Kubernetes, Kafka, Elastic, Postgres and other open source stacks, are operating at full SLA.

Ubuntu and broader open source support services are unaffected and our teams have adjusted schedules to allow for colleagues to be out sick while maintaining full coverage in the months to come. 

Security update coverage is similarly resilient.

We are now 100% remote, and can sustain that posture indefinitely.


With remote colleagues by default, and a policy of flexible office work, Canonical was well placed for the adjustments needed globally to slow the spread of COVID-19. We have given our teams space and time to ensure those vulnerable close to them are shielded as much as possible, and to enable them to make any needed childcare arrangements.

We have moved the teams who previously did work in offices – finance, design, inside sales and device enablement – to remote work, and assigned mentors to those groups for the transition.


We have planned for up to 15% of our colleagues to be unable to work at any given time, either personally ill or taking care of someone in the immediate family who is ill. All service delivery teams have the needed capacity and have adjusted schedules accordingly.


Additional updates will be posted and communicated on social media (Twitter, LinkedIn, Facebook). Please follow those for regular news.


This is a testing time for everyone. We feel privileged to have a well-established pattern of remote work and our team have been glad to help others by sharing our experience of distributed collaboration and operations. Please lean on us, we stand ready to help.

Mark Shuttleworth, CEO, Canonical

19 March, 2020 06:55PM

James Hunt: Procenv 0.46 - now with more platform goodness

I have just released procenv version 0.46. Although this is a very minor release for the existing platforms (essentially 1 bug fix), this release now introduces support for a new platform...


Yup - OS X now joins the ranks of supported platforms.

Although adding support for Darwin was made significantly easier as a result of the recent internal restructure of the procenv code, it did present a problem: I don't own any Apple hardware. I could have borrowed a Macbook, but instead I decided to see this as a challenge:

  • Could I port procenv to Darwin without actually having a local Apple system?

Well, you've just read the answer, but how did I do this?

Stage 1: Docker

Whilst surfing around I came across this interesting docker image:

It provides a Darwin toolchain that I could run under Linux. It didn't take very long to follow my own instructions on porting procenv to a new platform. But although I ended up with a binary, I couldn't actually run it, partly because Darwin uses a different binary file format to Linux: rather than ELF, it uses the Mach-O format.

Stage 2: Travis

The final piece of the puzzle for me was solved by Travis. I'd read the very good documentation on their site, but had initially assumed that you could only build Objective-C based projects on OSX with Travis. But a quick test proved my assumption to be incorrect: it didn't take much more than adding "osx" to the os list and "clang" to the compiler list in procenv's .travis.yml to have procenv building and running (it runs itself as part of its build) on OSX under Travis!

Essentially, the following YAML snippet from procenv's .travis.yml did most of the work:

language: c

compiler:
  - gcc
  - clang

os:
  - linux
  - osx

All that remained was to install the build-time dependencies to the same file with this additional snippet:

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install expat check perl; fi

(Note that it seems Travis is rather picky about before_install - all code must be on a single line, hence the rather awkward-to-read "if; then ....; fi" tests).
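The single-line restriction just means chaining the commands with `if; then; fi`. To see exactly what the guard does on each worker, the same logic can be factored into a small, runnable shell sketch (`osx_setup_commands` is a hypothetical helper name; the brew package list mirrors the snippet above):

```shell
#!/bin/sh
# Emit the setup commands only when running on the osx worker.
# On Travis, $TRAVIS_OS_NAME would be passed as the argument.
osx_setup_commands() {
    if [ "$1" = "osx" ]; then
        echo "brew update"
        echo "brew install expat check perl"
    fi
}

osx_setup_commands osx     # prints the two brew commands
osx_setup_commands linux   # prints nothing
```

The guard-per-line style is clumsy to read but keeps each YAML list item a valid single shell command.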


Although I've never personally run procenv under OSX, I have a good degree of confidence that it does actually work.

That said, it would be useful if someone could independently verify this claim on a real system! Feel free to raise bugs, send code (or even Apple hardware :-) my way!

19 March, 2020 05:29PM by James Hunt

Ubuntu Blog: Snapcraft tricks: Improve release flexibility with pull and build overrides

Sometimes, software projects are simple – one architecture, one version, one release. But often, they can be complex, targeting multiple platforms, and with different versions at that. If you are packaging your apps as snaps, you might wonder about the optimal way to accommodate a multi-dimensional release matrix.

One yaml to rule them all

We encountered this during our November 2018 Snapcraft sprint, while working with the Kata containers team. They had a somewhat unique build process, where they would use multiple snapcraft.yaml files, each targeting a different version-architecture pair.

The way around this problem is to use a single snapcraft.yaml, with override-pull and override-build scriptlets used to resolve the pairing logic. Normally, snapcraft uses a single pull and build declaration for any specified part. However, developers have the flexibility to override the defaults, and create their own pull and build steps (as shell scripts).
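The shape of such an override scriptlet is just a shell `case` on the machine architecture. A minimal, runnable sketch of the idea (the architecture names and version mapping below are illustrative placeholders, not Kata's actual ones):

```shell
#!/bin/sh
# Pick a release branch per CPU architecture, the way an override-pull
# scriptlet might. The mappings here are illustrative placeholders.
select_go_branch() {
    case "$1" in
        x86_64)  echo "go1.11.2" ;;
        aarch64) echo "go1.11"   ;;
        *)       echo "go1.10.5" ;;
    esac
}

# In a real scriptlet you would then run:
#   git checkout "$(select_go_branch "$(arch)")"
select_go_branch "$(uname -m)"
```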

Specifically, the Kata containers project required a different version of Golang for different platform architectures. This is the section of the snapcraft.yaml that satisfies this requirement:

      git clone .
      case "$(arch)" in
        ...) git checkout go1.11.2 ;;
        ...) git checkout go1.10.1 ;;
        ...) git checkout go1.10.5 ;;
        ...) git checkout go1.11 ;;
      esac

We can see how this applies to the build step, too. The Kata containers project features an unusual case where the kernel is bundled into the snap – however, this is expected for hypervisor technology. Similar to the pull step, the kernel configuration differs per architecture, and another case statement satisfies this requirement in much the same manner, except the override is for the build rather than the pull step. With the Golang part, the project needed different branches; here, the kernel source is identical, but the config files are platform-specific.

  source-type: tar
  plugin: kernel
  override-build: |
    case "$(arch)" in
      ...) config=... ;;
    esac
    cp ${SNAPCRAFT_STAGE}/kernel/configs/${config} .config
    make -s oldconfig EXTRAVERSION=".container" > /dev/null
    make -j $(nproc) EXTRAVERSION=".container"
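The same select-by-architecture idea drives the build step: pick a platform-specific config, then run the usual make sequence. A dry-run shell sketch that only echoes the steps it would perform (the config file names and the `kernel_build_plan` helper are illustrative, not Kata's actual ones):

```shell
#!/bin/sh
# Choose a kernel .config per architecture and echo the build steps,
# the way an override-build scriptlet might run them for real.
kernel_build_plan() {
    case "$1" in
        x86_64) config="x86_64_defconfig" ;;
        *)      config="generic_defconfig" ;;
    esac
    # \$ keeps ${SNAPCRAFT_STAGE} literal; ${config} expands here.
    echo "cp \${SNAPCRAFT_STAGE}/kernel/configs/${config} .config"
    echo "make -s oldconfig EXTRAVERSION=.container"
    echo "make -j \$(nproc) EXTRAVERSION=.container"
}

kernel_build_plan x86_64
```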

Generic use case

This functionality can be extended to any project requirement where a developer would require either two different sources for different versions of the software, or different configurations for an identical source. Overall, it can simplify the release process, as there is no need to maintain multiple snapcraft.yaml files.

override-pull: |
  case "$variable" in
    pattern) do-something ;;
    *) something-else ;;
  esac
override-build: |
  case "$variable" in
    pattern) do-something ;;
    *) something-else ;;
  esac

Lastly, the override-pull and override-build clauses can also include the standard snapcraft pull and build scriptlets. For instance, you may not require multiple sources, but you might need to alter specific files or folders once they are downloaded, or edit certain components before the build step. A very rudimentary example would look like:

override-pull: |
      snapcraftctl pull
      ln -sf ../file ../../file


The pull and build override scriptlets offer snap developers a lot of freedom in how they construct their projects. There is often no need to maintain a complex release process, and it is possible to make the necessary adjustments through a single snapcraft.yaml file. The only practical limitation is the software itself, and your ability to tinker with shell scripts.

We hope you enjoyed this article, and if you have any feedback or suggestions, please join our forum for a discussion.

Photo by Macau Photo Agency on Unsplash.

19 March, 2020 03:29PM

hackergotchi for Whonix


Whonix VirtualBox - Point Release!

@Patrick wrote:

This is a point release.

Download Whonix for VirtualBox:

Alternatively, in-place release upgrade is possible.

Please Donate!

Please Contribute!

This release would not have been possible without the numerous supporters of Whonix!

Notable Changes

Full difference of all changes

About Whonix

Whonix is used by Edward Snowden, journalists such as Micah Lee, the Freedom of the Press Foundation, and Qubes OS. It has a 7-year history of keeping its users safe from real-world attacks. [1]

The split architecture of Whonix relies on leveraging virtualization technology as a sandbox for vulnerable user applications on endpoints. This is a widely known weakness exploited by entities that want to circumvent cryptography and system integrity. Our Linux distribution comes with a wide selection of data-protection tools and hardened applications for document/image publishing and communications. We are the first to deploy tirdad, which addresses the long-known problem of CPU activity affecting TCP traffic properties in visible ways on the network, and vanguards, an enhancement for Tor produced by the developers of Tor, which protects against guard discovery and related traffic-analysis attacks. Live Mode was recently added. We deliver first-ever user-behavior-masking privacy protections such as Kloak, which prevents websites from recognizing who the typist is by altering keystroke timing signatures that are unique to everyone.

In the future we plan to deploy a hardened Linux kernel with a minimal number of modules for OS operation, which will greatly decrease attack surface; an AppArmor profile for the whole system; and Linux Kernel Runtime Guard (LKRG), which performs "runtime integrity checking of the Linux kernel and detection of security vulnerability exploits against the kernel."


Posts: 1

Participants: 1

Read full topic

19 March, 2020 12:31PM by @Patrick

hackergotchi for Tails


JavaScript sometimes enabled in the Safest security level of Tor Browser

The Tor Project is aware of a technical issue that allows JavaScript execution in the Safest security level of Tor Browser in some situations.

Until this is fixed, do the following to disable JavaScript completely instead of relying on the security level:

  1. Open the address about:config in Tor Browser.

  2. Click the I accept the risk! button.

  3. At the top of the page, search for javascript.enabled.

  4. Double-click on the javascript.enabled line in the results to set its value to false.

This issue will be fixed in the next release of Tails.

19 March, 2020 10:00AM

March 18, 2020

hackergotchi for Cumulus Linux

Cumulus Linux

Kernel of Truth season 3 episode 3: Linux networking with eBPF

Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Castbox and Stitcher!

Click here for our previous episode.

This podcast is all about Linux and to talk about it, we have two of the top Linux kernel experts. Kernel of Truth host Roopa Prabhu is one and chats with our special guest David Ahern about eBPF. If you haven’t heard of eBPF, it’s the hottest Linux kernel technology bringing programmability and acceleration to many Linux subsystems. In this podcast we focus on eBPF’s impact on networking and the million possibilities it brings to the table.

Guest Bios

Roopa Prabhu: Roopa Prabhu is Chief Linux Architect at Cumulus Networks. At Cumulus, she and her team work on all things kernel networking and Linux system infrastructure. Her primary focus areas in the Linux kernel are the Linux bridge, Netlink, VXLAN, and lightweight tunnels. She is currently focused on building the Linux kernel dataplane for EVPN. She loves working at Cumulus and with the Linux kernel networking and Debian communities. Her past experience includes Linux clusters, Ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at @__roopa.

David Ahern: David is a member of the kernel group within the Systems team at DigitalOcean. His focus right now is networking on hypervisors, and he is working to improve XDP capabilities for cloud hosting environments. Prior to DigitalOcean, he worked on the Linux networking stack for Cumulus Networks, most notably adding support for VRF and improving route scale via nexthop objects.

Episode links

Join our community Slack channel here. We’re also on LinkedIn, Twitter, Facebook and Instagram!

LWN introduction to eBPF:

XDP Documentation:


bpftrace: high-level tracing language for eBPF:

Bpftrace in buster:

Learn eBPF tracing:

Cilium Project:

Using eBPF in Kubernetes:

Project Calico and eBPF:

Netdev 0x14 Talks:

Cloudflare’s exporter (hooks ebpf to Prometheus):

Mellanox ebpf_exporter for dropped flows:

libbpf on github:

Brendan Gregg’s web site (contains reference to his latest BPF book and related material):

18 March, 2020 11:54PM by Katie Weaver

hackergotchi for SparkyLinux



There is a new tool available for Sparkers: CudaText

What is CudaText?

CudaText is a free, open-source, cross-platform (runs on Microsoft Windows, Linux, macOS or FreeBSD) code editor written in Lazarus. It evolved from a previous editor named SynWrite, which is no longer developed. It is extensible by Python add-ons (plugins, linters, code-tree parsers, external tools). The syntax parser is feature-rich, based on the EControl engine (though not as fast as some competitors).

– Syntax highlighting for a lot of languages (230+ lexers).
– Code tree: structure of functions/classes/etc, if lexer allows it.
– Code folding.
– Multi-carets and multi-selections.
– Find/Replace with regular expressions.
– Configs in JSON format. Including lexer-specific configs.
– Tabbed UI.
– Split view to primary/secondary. Split window to 2/3/4/6 groups of tabs.
– Command palette, with fuzzy matching.
– Minimap. Micromap.
– Show unprinted whitespace.
– Support for many encodings.
– Customizable hotkeys.
– Binary/hex viewer for files of unlimited size (can show 10 GB logs).
– Correctly saves binary files.

Features for HTML/CSS coding:
– Smart auto-completion for HTML, CSS.
– HTML tags completion with Tab-key (Snippets plugin).
– HTML color codes (#rgb, #rrggbb) underline.
– Show pictures inside editor area (jpeg/png/gif/bmp/ico).
– Show tooltip when mouse moves over picture tag, entity, color value.

Installation on Debian “Buster”/”Bullseye” / Sparky 5/6 64 bit:

sudo apt update
sudo apt install cudatext

or via APTus-> Office.


The project page :
The project developers: Alexey Torgashin, Andrey Kvichanskiy
License: MPL 2.0


18 March, 2020 10:51PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: Working from Home: Lessons Learned Over 20 Years & a Shopping List

I've been a full-time, work-from-home employee for the vast majority of the last 20 years, and 100% since 2008.

In this post, I'm going to share a few of the benefits and best practices that I've discovered over the years, and I'll share with you a shopping list of hardware and products that I have come to love or depend on, over the years.

I worked in a variety of different roles -- software engineer, engineering manager, product manager, and executive (CTO, VP Product, Chief Product Officer) -- and with several different companies, big and small (IBM, Google, Canonical, Gazzang, and Apex).  In fact, I was one of IBM's early work-from-home interns, as a college student in 2000, when my summer internship manager allowed me to continue working when I went back to campus, and I used the AT&T Global Network dial-up VPN client to "upload" my code to IBM's servers.

If there's anything positive to be gained out of our recent life changes, I hope that working from home will become much more widely accepted and broadly practiced around the world, in jobs and industries where it's possible.  Moreover, I hope that other jobs and industries will get even more creative and flexible with remote work arrangements, while maintaining work-life-balance, corporate security, and employee productivity.

In many cases, we would all have a healthier workplace if everyone generally stayed home when feeling even just a bit unwell.  Over these next few weeks, I hope that many other people discover the joy, freedom, and productivity working from home provides.  Here are a few things that I've learned over the years, and some of the tools that I've acquired...

Benefits, Costs, and Mitigations

Benefits


  • Commute -- If you're like me, you hate sitting in traffic.  Or waiting on a train.  Erase your commute entirely when you work from home.  I love having an extra hour in the morning, to set out my day, and extra time in the evenings with my family.
  • Family -- Speaking of family, I'm adding this as a benefit all on its own.  I love being able to put my kids on the bus in the morning, and be home when they get home, and have quality time in the evenings with my spouse and daughters and dogs.  When I have worked in an office, I've often found that I've left for work before anyone else was awake, and I often got home after everyone was in bed.
  • Location -- Work-from-home, in most cases, usually means work-from-anywhere.  While I spend the vast majority of my time actually working from my home office, I've found that I can be nearly as effective working from a hotel, coffee shop, airplane, my in-laws' house, etc.  It takes some of the same skills and disciplines, but once you break free of the corporate desk, you'll find you can get great work done anywhere.
  • Productivity -- Your mileage may vary, but I find I'm personally more productive in the comfort of my own home office, which has evolved to meet my needs.  Yes, I love my colleagues and my teams, and yes, I spend plenty of time traveling, on the road meeting them.

Costs and Mitigations

  • Work-life-balance -- This one is important, but it's not hard to fix.  Some people find it hard to separate work and home life, when working from home.  Indeed, you could find yourself "always on", and burn out.  Definitely don't do that.  See the best practices below for some suggestions on mitigating this one.
  • Space and Equipment -- There's quite literally a dollar cost, in some cases, to having the space and equipment necessary to work from home.  To mitigate this, you should look into any benefits your employer offers on computer equipment, and potentially speak to an accountant about tax deductions for dedicated space and hardware.  My office is a pretty modest 10'x12' (120 sqft), but it helps that I have big windows and a nice view.
  • Relationships -- It can seem a little lonely at home, sometimes, where you might miss out on some of the water cooler chatter and social lunches and happy hours.  You do have to work a little harder to forge and maintain some of those remote relationships.  I insist on seeing my team in person a couple of times a year (once a quarter at least, in most cases), and when I do, I try to ensure that we also hang out socially (breakfast, coffee, lunch, dinner, etc.) beyond just work.  It's amazing how far that will carry, into your next few dozen phone calls and teleconferences.
  • Kids -- (UPDATED: 2020-03-10) I'm adding this paragraph post publication, based on questions/comments I've received about how to make this work with kids.  I have two daughters (6 and 7 years old now), who are 18 months apart, so there was a while in there where I had two-under-two.  I'm not going to lie -- it was hard.  I'm blessed with a few advantages -- my wife is a stay-at-home-mom, and I have a dedicated room in my house which is my office.  It has a door that locks.  I actually replaced the cheap, contractor-grade hollow door with a solid wood door, which greatly reduces noise.  When there is a lot of background noise, I switch from speakers-and-computer-mic to my noise cancelling headset (more details below).  Sometimes, I even move all the way to the master bedroom (behind yet another set of doors).  I make ample use of the mute button (audio and/or video) while in conference meetings.  I also switch from the computer to the phone, and go outside sometimes.  In a couple of the extreme cases, where I really need silence and focus (e.g. job interviews), I'll sit in my car (in my own garage or at a nearby park), and tether my computer through my phone.  I've worked with colleagues who lived in small spaces and turned a corner of their own master bedroom into an office, with a hideaway desk built from a folding bracket and a butcher block.  My kids are now a little older, and sometimes they're just curious about what I'm doing.  If I'm not in a meeting, I try to make 5 minutes for them, and show them what I'm working on.  If I am in a meeting, and it's a 1:1 or time with a friendly colleague, I'll very briefly introduce them, let them say hi, and then move them along.  Part of the changes happening around the work-from-home shift is that we're all becoming more understanding of stuff like this.

Best Practices

  • Dedicated space -- I find it essential to have just a bit of dedicated space for an office, that you and the family respect as your working space.  My office is about 8' x 12', with lots of natural light (two huge windows, but also shades that I can draw).  It hangs off of the master bedroom, and it has a door that locks.  Not that I have to lock it often, but sometimes, for important meetings, I do, and my family knows that I need a little extra quiet when my office door is locked.
  • Set your hours -- It's really easy to get swept away into unreasonably long working days when working from home, especially when you enjoy your job, or when things are extra busy, or when you have a boss or colleagues who are depending on you, or a 1000 other reasons.  It's essential to set your working hours, and do your best to get into a consistent rhythm.  I usually put the kids on the bus around 7am, and then either go for a run or play the piano for a bit, and then start my day pretty consistently at 7:30am, and generally try to wrap up by 5:30pm most days.  Of course there are always exceptions, but those are the expectations I usually set for myself and the people around me.
  • Get up and move around -- I do try to get up and move around a few times per day.  If I'm on a call that doesn't require a screen, I'll try to take at least one or two of those from my phone and go move around a bit.  Take a walk down the street or in the garden or even just around the house.  I also try to unplug my laptop and work for at least an hour a day somewhere else around the house (as long as no one else is around) -- perhaps the kitchen table or back porch or couch for a bit.  In the time that I spent at Google, I really came to appreciate all of the lovely bonus spaces where anyone can curl up and work from a laptop for a few hours.  I've tried to reproduce that around my house.
  • Greenery -- I think I worked from home for probably 4 years before I added the first house plant in my office.  It's so subtle, but wow, what a difference.  In my older age, I'm now a bit of a gardener (indoors and outside), and I try to keep at least a dozen or so bonsai trees, succulents, air plants, and other bits of greenery growing in my office.  If you need a little more Feng Shui in your life, check out this book.

Shopping List

  • Technology / Equipment
    • Computers
      • Macbook Pro 13 -- I use a 13" Apple Macbook Pro, assigned by my employer for work.  I never use it for personal purposes like Gmail, etc. at all, which I keep on a separate machine.
      • Thinkpad X250 -- I have an older Thinkpad running Ubuntu on my desk, almost always streaming CNBC on YouTube TV in full screen (muted).  Sometimes I'll flip it over to CNN or a similar news channel.
      • Dell T5600 -- I keep a pretty beefy desktop / server workstation running Ubuntu, with a separate keyboard and monitor, for personal mail and browsing.
    • Keyboard / Mouse
      • Thinkpad USB Keyboard -- I love the Thinkpad keyboard, and this the USB version is a must have, for me.
      • Apple Wireless Keyboard and Trackpad and Tray -- I use the wireless Bluetooth keyboard and mouse pad for my work computer.  I find the tray essential, to keep the keyboard and mouse closely associated.
    • Monitors
      • Samsung 32" 4K UHD -- I use two monitors, one in portrait, one in landscape.  I really like Samsung, and these are the exact models I use: Gaming Monitor (LU32J590UQNXZA) – 60Hz Refresh, Widescreen Computer Monitor, 3840 x 2160p Resolution, 4ms Response, FreeSync, HDMI, Wall Mount.
      • Monitor Desk Mount -- And for those monitors, I use this desk mount, which is full motion, rotates, and attaches to my standing desk.
    • USB Hub
      • I use this dongle to connect my Macbook to the external 4K monitor, wired gigabit ethernet, and power supply.  This simple, single plug certainly helps me quickly and easily grab my laptop and move around the house easily during the day.
    • Laptop Stand
      • Nulaxy Ergonomic Riser -- I find this laptop stand helps get the camera angle on the top of my Macbook in a much better place, and also frees up some space on my desk.  I sometimes take both the laptop and the stand outside with me, if I need to relocate somewhere and take a couple of conference calls.
    • Network
    • Storage
      • Synology -- I generally keep copies of our family photo archive in Google Photos, as well as a backup here at home.  I'm a fan of the Synology DS218, and the Western Digital Caviar Green 3TB hard drives.  Really nice, simple interface, and yet feature-rich.
    • Printer / Scanner
      • HP Officejet -- While I avoid printing as much as possible, sometimes it's just inevitable.  But, also, working from home, you'll find that you often need to scan documents.  You'll definitely need something with an automatic document feeder that can scan directly to PDF.  I like the HP Officejet Pro 9015, but if you're looking for a less expensive option, the HP Officejet 5255 is a fine printer/scanner too.
    • Speakers
      • Google Home Max -- I can't stress this enough: I find it extremely important to have a high-quality, full-range speaker that faithfully reproduces highs and lows.  I really need something much better than laptop speakers or cheap PC speakers.  Personally, I use a Google Home Max, with the Google Assistant microphone muted, and connected over Bluetooth.  I actually like it positioned behind me, believe it or not.  You could just as easily use an Amazon Echo or any other high quality Bluetooth speaker.
      • Bang and Olufsen Beoplay A9 -- This speaker is an absolute dream!  I used it in my office for a while, but eventually moved it to the family room where it's used much more for music and entertainment.  Besides sounding absolutely incredible, it's basically a work of art, beautiful to look at, in any room.
    • Headphones
      • Apple AirPods -- I use AirPods as my traveling headphones, mostly on planes.  I like that they're compact.  The short battery life leaves a lot to be desired, so I actually travel with two sets, and switch back and forth, recharging them in the case when I switch.
      • Bang and Olufsen Beoplay H9 -- Overwhelmingly, I use the Bluetooth speaker for my daily slate of teleconferences, meetings, and phone calls.  However, occasionally I need noise cancelling headphones.  The Beoplay H9i are my favorite -- outstanding comfort, excellent noise cancelling, and unbeatable audio quality.
      • Bose QuietComfort 35 ii -- These Bose headphones were my standards for years, until I gave them to my wife, and she fell in love with them.  They're hers now.  Having used both, I prefer the B&O, while she prefers the Bose.  
      • Wired headset with mic -- If you prefer a wired headset with a microphone, this gaming headset is a fantastic, inexpensive option.  Note that there's no noise cancellation, but they're quite comfortable and the audio quality is fine.
    • Webcam
      • Truth be told, at this point, I just use the web cam built into my Macbook.  The quality is much higher than that of my Thinkpad.  I like where it's mounted (top of the laptop screen).  While I connect the laptop to one of the external 4K monitors, I always use the 13" laptop screen as my dedicated screen for Zoom meetings.  I like that the built-in one just works.
      • Logitech -- All that said, I have used Logitech C920 webcams for years, and they're great, if you really want or need an external camera connected over USB.
    • Microphone
      • Like the webcam, these days I'm just using the built-in mic on the Macbook.  I've tested a couple of different mics with recordings, and while the external ones do sound a little better, the difference is pretty subtle, and not worth the extra pain to connect them.
      • Blue Snowball -- Again, all that said, I do have, and occasionally use, a Blue Snowball condenser mic.  While subtle, it is definitely an upgrade over the laptop built-in microphone.
    • Phone
      • For many years working from home, I did have a wired home phone system.  I used Ooma and a Polycom Voicestation.  But about two years ago, I got rid of it all and deliberately moved to using Google Hangouts and Zoom for pretty much everything, and just using my cell phone (Pixel 3) for the rest.
  • Furniture / Appliances
    • Standing desk
      • Uplift (72"x30") -- While I don't always stand, I have become a believer in the standing desk!  I change my position a couple of times per day, going from standing to sitting, and vice versa.  I'm extremely happy with my Uplift Desk, which is based here in Austin, Texas.
      • Apex -- I don't have direct experience with this desk, but this was the one I was looking at, and seems quite similar to the Uplift desk that I ended up getting.
    • Desk mat
      • Aothia Cork and Leather -- I really love desk mats.  They're so nice to write on.  These add a splash of color, and protect the desk from the inevitable coffee spill.  I have a couple of these, and they're great!
    • Coffee machine
      • Nespresso -- Yes, I have a coffee machine in my office.  It's essential on those days when you're back-to-back packed with meetings.  While I love making a nice pot of coffee down in the kitchen, sometimes I just need to push a button and have a good-enough shot of espresso.  And that's what I get with this machine and these pods (I recently switched from the more expensive authentic Nespresso pods, and can't really tell the difference).
    • Coffee Mug
      • Ember -- I received an Ember coffee mug as a gift, and I've really come to appreciate this thing.  I don't think I would have bought it for myself on my own, but as a gift, it's great.  Sleek looking and rechargeable, it'll keep your coffee hot down to your last sip.
    • Water cooler
      • Primo -- And yes, I have a water cooler in my office.  This has really helped me drink more water during the day.  It's nice to have both chilled water, as well as hot water for tea, on demand.
    • White board
    • Chair
    • Light / Fan
      • Haiku L -- My office is extremely well lit, with two huge windows.  Overhead, I used to have a single, nasty canned light, which I replaced with this Haiku L ceiling fan and light, and it's just brilliant, with a dimmer, and voice controls.
    • Air purifier
      • HEPA Filter -- Some years ago, I added an air purifier to my office, mainly to handle the pet dander (two big dogs and a cat) that used to roam my office.  It's subtle, but generally something I'm glad to have pulling dust and germs out of the air.

Now I'm curious...  What did I miss?  What does your home office look like?  What are your favorite gadgets and appliances?

Disclosure: As an Amazon Associate I earn from qualifying purchases.


18 March, 2020 08:38PM by Dustin Kirkland

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 Keyboard Improvements

Smartphones haven’t really needed a full computer keyboard, until now. We are turning the Linux desktop ecosystem into a convergent one. In order to access desktop applications, our keyboard has to feel good on the small screen, as well as expose common desktop keystrokes.

Terminal use

While you can comfortably use the GUI in PureOS, the terminal remains fully empowered, giving you all your favorite GNU commands; anything missing is likely an install command away. Actually using a terminal on a smartphone poses a few challenges. Displaying on a smaller screen was solved easily by adding a touch-friendly zoom function.

King’s Cross menu

Keys like Tab, arrows and Esc are required to operate a terminal comfortably. While far from finished, a super simple initial terminal layout has been added to Squeekboard in PureOS.

Early terminal layout


Keyboard shortcuts like copy and paste (Ctrl+C, Ctrl+V) work everywhere on the desktop, so why not on your phone? Since the Librem 5 will be able to scale up to use on a full-size monitor, it will be useful to access the same desktop-like functionality while mobile, preserving the same workflow while docked and on the go. While it's not done yet, you can follow along here.

Mockup for a more comprehensive terminal layout

If you have a strong opinion of how the keyboard should look, head on over to and make your opinion known.


The keyboard layouts are .yaml files. It’s simple to test out your own design, or even contribute your changes to the main repo.

git clone

mkdir -p ~/.local/share/squeekboard/keyboards/

cp ./squeekboard/data/keyboards/us.yaml ~/.local/share/squeekboard/keyboards/terminal.yaml

edit ~/.local/share/squeekboard/keyboards/terminal.yaml

Now select the terminal layout and you should see your changes.

King’s Cross keyboard selection dialog
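For a sense of what these layout files contain, here is a heavily trimmed, hypothetical sketch modeled on the structure of the shipped us.yaml (the views/buttons fields shown are assumptions; check the files in squeekboard/data/keyboards/ for the authoritative schema):

```yaml
# Hypothetical, trimmed-down sketch -- not a complete working layout.
views:
    base:
        - "q w e r t y u i o p"
        - "a s d f g h j k l"
        - "z x c v b n m BackSpace"
        - "Esc Tab space Return"
buttons:
    Esc:
        keysym: "Escape"
    Tab:
        keysym: "Tab"
```

Each row in a view is a space-separated list of button names; a name that needs a non-literal keycode gets an entry under buttons mapping it to a keysym.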

The Librem 5 has an improved emoji keyboard layout, giving you access to six pages of various icons.

King’s Cross emoji layout


Other keyboard improvements have found their way into PureOS, to name a few:

Soon the keyboard will size better at other screen scales. This will help with accessing apps that are not yet designed for the small screen, enabling the large collection of Debian software.

Early scaling done by Dorota from our team.

Rapid progress is being made thanks to community involvement and the supporters of the Librem 5.

The post Librem 5 Keyboard Improvements appeared first on Purism.

18 March, 2020 07:48PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Kubernetes 1.18 release candidate available for testing

The latest release of Kubernetes is now available for download and experimentation, with the MicroK8s Kubernetes 1.18 release candidate.

The easiest, fastest way to get the latest Kubernetes is to install MicroK8s on your machine. Just run:

sudo snap install microk8s --channel=1.18/candidate --classic

Or from select 1.18/candidate

MicroK8s is installable across Ubuntu and other Linux distributions, as well as on Windows and macOS.

For any questions or support requests on Kubernetes and MicroK8s feel free to contact us.

18 March, 2020 06:05PM

hackergotchi for ZEVENET


Delivering the best continuity for non-connection oriented services

Did you know that ZEVENET implements different techniques to provide higher performance and high scale for non-connection-oriented and real-time services like UDP (User Datagram Protocol), SIP (Session Initiation Protocol), VoIP (Voice over IP), VPN (Virtual Private Network), SYSLOG (System Logging Protocol), DNS (Domain Name System) and DHCP (Dynamic Host Configuration Protocol), among others?


18 March, 2020 04:25PM by Zevenet

hackergotchi for Purism PureOS

Purism PureOS

Announcing the Purism Librem Mini

Our small form-factor mini-PC that puts freedom, privacy and security first. We’re really excited about the Librem Mini: it’s a device our community has wanted, and one we’ve wanted to offer, for some time.

The Librem Mini is accessible, small, light and powerful, featuring a new 8th-gen quad-core i7 processor, up to 64 GB of fast DDR4 memory and 4K 60fps video playback. It’s a desktop for your home or office, a media center for your entertainment, or an expandable home server for your files and applications.

Like our other products the Librem Mini will feature state of the art privacy and security with PureOS, Pureboot and Librem Key support. Find out more about the privacy and security features on the Librem Mini product page.

The Librem Mini is available to order now from $699 for the base configuration with 8GB of memory and a 250GB SSD. Shipping starts one month after reaching the pre-order goal.

Get your Librem Mini

Hardware Specifications

Processor: Intel Core i7-8565U (Whiskey Lake), active (fan) cooling
Graphics: Intel UHD 620
Memory: DDR4 2400MHz 1.2V, 2 SO-DIMM slots, max 64GB supported
Storage: 1x SATA III 6Gb/s SSD/HDD (7mm), 1x M.2 SSD (SATA III/NVMe x4)
Video: 1x HDMI 2.0 4K @ 60Hz, 1x DisplayPort 1.2 4K @ 60Hz
USB ports: 4x USB 3.0, 2x USB 2.0, 1x Type-C 3.1
Audio: 3.5mm audio jack (mic-in & headphone-out combo)
Networking: 1x RJ45 Gigabit Ethernet LAN; optional WiFi 802.11n (2.4/5.0 GHz) via Atheros ATH9k module
Bluetooth: optional Bluetooth 4.0, included in the WiFi module
Power: 1 power button, DC-IN jack
Dimensions: width 12.8 cm (5.0 in), height 3.8 cm (1.5 in), depth 12.8 cm (5.0 in)
Weight: 1 kg (2.2 lbs)

The post Announcing the Purism Librem Mini appeared first on Purism.

18 March, 2020 08:00AM by Purism

hackergotchi for Qubes


Qubes Architecture Next Steps: The GUI Domain

It has been some time since the last design post about Qubes. Some big changes have happened, and it took us a little while to find the people best suited to write these articles. But, as you can see, the victims, er, volunteers have been found. The team has been hard at work on the changes that are coming in 4.1, and we want to tell you more about them.

One of the Big Things coming soon, in Qubes 4.1, is the first public version of the GUI domain: the next step in decoupling the graphical hardware, the display and management, and the host system. Very briefly, the GUI domain is a qube separate from dom0 that handles all the display-related tasks and some system management.

Why make a GUI domain at all?

One of the biggest security concerns at the moment for Qubes is how much power is in dom0. Once a person has access to it, they can do anything: and while we separate it quite effectively from what is running inside application qubes, dom0 is still a big, bloated and complex domain that performs many disparate functions. It handles managing other domains, display and graphical interfaces, multiple devices (including audio devices), memory and disk management, and so on.

We mitigate many of the GUI-related risks (like the powers wielded by the window manager, or the fact that huge, complex libraries such as Qt/Gtk are always an increased attack surface) through compartmentalization: Applications in VMs can’t talk to GUI toolkits in dom0 other than through a very limited Qubes-GUI protocol, and GUI toolkits in application VMs can’t talk directly to dom0’s X server. Moreover, dom0 is responsible for drawing the colored window borders that represent trust levels, so compromised VMs can’t spoof them.

Nonetheless, having a GUI in dom0 at all is, at best, a source of many dangerous temptations. It’s far too easy to use it to access untrusted (and thus potentially dangerous) data, for example by mounting a disk from a qube into dom0. Even browsing relaxing landscapes as desktop wallpapers can expose dom0 to the numerous vulnerabilities that intermittently appear in image-processing libraries.

Furthermore, while in theory dom0 is isolated from the outside world, some graphical devices (e.g. displays connected via HDMI or DVI) offer two-way communication, which threatens this isolation and makes it harder to maintain. If a malicious device (rather than the user’s trusted monitor) were to be connected to one of these ports, it could inject data that could be processed inside of dom0. As long as graphical devices are in dom0, they also cannot be safely proxied to other domains. This is because the various solutions to multiplexing access to the GPU at the GPU/driver level (which would expose the “full” GPU to a VM) are orders of magnitude more complex than running display drivers in just one place. We consider this added complexity too risky to put it in dom0. Errors in the drivers could expose dom0 to an attack, and attacks on dom0 are the biggest threat to the Qubes security model.

The current model, in which the GUI and administrative domains are both within dom0, is also problematic from a management point of view. The way existing user-based privilege control works in most modern systems is one of the reasons why we need Qubes at all: It provides far too little separation, and root exploits seem to be inescapable in a system as monolithic as Linux. Separating the GUI domain from dom0 allows us to manage its access to the underlying system.

This has obvious uses in an organizational context, allowing for (possibly remotely) managed Qubes installations, but even in a personal computer context it is often extremely useful to have multiple user accounts with truly separate permissions and privileges. Perhaps you would like to create a guest account for any friend who needs to borrow your computer for a moment, and allow that account to create Disposable VMs, but not to create normal qubes and not to access other users’ qubes. It becomes possible when the GUI domain is decoupled from dom0. All kinds of kiosk modes, providing safer environments for less-technical users who prefer to be sure they cannot break something accidentally, multi-user environments — they all become possible.

What needs to be ready?

There were two big issues in the previous Qubes architecture that needed to be handled for an effective approach to a GUI domain: how the GUI protocol relied on dom0-level privileges and how managing anything in the system required dom0-level access to the hypervisor.

GUI Protocol

Detailed documentation of the current GUI protocol is available here. In brief, it consists of a GUI agent and a GUI daemon. The GUI agent runs in a qube and connects to the GUI daemon in dom0, passing a list of memory addresses of window buffers. As the GUI daemon is running in dom0, with privileged access to, well, everything, it can just map any page of any qube’s memory. You can see why this might be a bit worrying: Access to memory is power, thus dom0 is all-powerful. It would be far worse if we tried to duplicate this architecture and make our GUI domain a qube with the same memory-related privileges. It would just result in two dom0s. Rather than being reduced, the attack surface would be increased.

The upcoming 4.1 release changes this protocol to a more flexible form. It will no longer use direct memory addresses, but an abstract mechanism in which the qube has to explicitly allow access to a particular memory page. In our current implementation — under Xen — we use the grant tables mechanism, which provides a separate memory allocation API and allows working on grants and not directly on memory pages. Other implementations will also be possible: whether for another hypervisor (e.g. KVM) or for a completely different architecture not based on shared memory (e.g. directly sending frames to another machine).

Managing the system

The second problem — system management — is actually partially solved already in Qubes 4.0. Administrative actions such as creating, changing or starting qubes can be handled via qrexec calls and controlled via qrexec policy. You can read more here about the Admin API, one of the big changes in Qubes 4.0 that made all this possible.

Currently, in Qubes 4.0, dom0 handles all these administrative actions. However, in order to avoid unpleasant surprises and to prepare the architecture for the GUI domain, we already perform them exclusively via the Admin API. At the design level, dom0 is no longer a special case: It makes qrexec calls like any other qube.

There’s an interesting, subtle detail here: We just accepted dom0 being able to run anything in any way inside other qubes. But if we want to implement a more contained and less-privileged GUI domain, it would defeat part of its purpose to just permit it to run any sort of qvm-run do-what-I-want in any of the managed qubes. Qubes 4.0 introduces a special qubes.StartApp qrexec service that runs only applications defined inside the target qube (currently defined via .desktop files in Linux and .lnk files in Windows, placed in predetermined directories). This mechanism allows a qube to explicitly define “what the GUI domain can execute inside me,” and not just hand over all the power to the managing domain. This also makes it possible to define allowed applications using the qrexec policy!
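For illustration, a rule for this service in the Qubes 4.0 policy format might look like the following sketch (the qube names, the +firefox argument and the file path are hypothetical examples, not taken from this post):

```text
# /etc/qubes-rpc/policy/qubes.StartApp+firefox  (hypothetical example)
# Allow the GUI domain to launch only the "firefox" application entry
# defined inside the qube "personal"; deny everything else.
sys-gui  personal  allow
$anyvm   $anyvm    deny
```

Because the service argument names the application, the policy can allow or deny individual applications per qube rather than handing over blanket execution rights.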

Other issues

Actually implementing a GUI domain (more details below) revealed a lot of minor problems that require some handling. Unsurprisingly, it turns out a modern operating system encourages a very close relationship between whatever part of it deals with graphical display and all the rest of the hardware.

Power management has numerous vital graphical tools that need some kind of access to the underlying hardware. From a battery-level widget to laptop power-management settings, these innocuous GUI tools turn out to want surprisingly broad access to the system itself. Even suspend and shutdown need special handling. In Qubes 4.0, we could just turn off dom0 and know the rest of the system would follow, but it is no longer so simple with a non-privileged GUI domain in the picture.

Keyboard and user input need to be carefully proxied to the GUI domain to enable us to actually use the system. The existing InputProxy system needs to be expanded to ferry information from the USB domain (in the case of USB keyboards and mice) and from dom0 (in some other cases, like PS/2 keyboards) to the GUI domain.

The current state of those minor (minor in comparison to broad, architecture-level changes, but by no means unimportant) issues is tracked here.

How can the GUI domain actually work?

GPU passthrough: the perfect-world desktop solution

GUIVM with PCI passthrough

In the perfect world, we could simply connect the graphics card to the VM as a PCI device and enjoy a new, more comfortable level of separation. Unfortunately, the world of computer hardware is very far from a perfect one. This solution works only very rarely. For most graphics cards, it just fails, although some success has been observed on some AMD cards. Even if, in theory, the architecture supports GPU passthrough, many implementations rely on various hardware quirks and peculiarities absent when there is no direct access to the underlying system. For example, the video BIOS (the code that the GPU provides to the system to initialize itself) in many cases assumes that it is running with full privileges and tries to access various registers and memory areas not available to (or virtualized in) VMs.

And all that is without even approaching issues with multiple graphics cards, multiple outputs or suspending the host; or the fact that some hardware manufacturers (like NVIDIA) attempt to block GPU passthrough for some of their products.

At the moment, a group of very brave university students are working on the basic GPU passthrough case as their bachelor’s degree project. We wish them a lot of luck in this difficult endeavor!

Virtual server: the perfect remote solution


Instead of wrestling with the hardware problems, the GUI domain could connect to a virtual graphical server such as FreeRDP or VNC. This server could be accessible from anywhere on the network (in practice, it should be secured with at least a VPN, as bugs allowing unauthorized users access could be very dangerous), allowing for a Qubes Server hosting many separate sets of qubes used by different users, still maintaining comfortable separation between the qubes and the users. Qrexec policy allows the administrator to comfortably manage this solution: Every GUI domain can have its own set of privileges, managed qubes, Disposable VM permissions etc.

Surprisingly, a virtual server solution does actually work with the current state of Qubes as of the 4.1 developer preview build, and it allows us to bypass the dreaded GPU passthrough complications. The only not-so-small problem is that it does not actually handle our main use case: Qubes running locally on a single machine. This is because it uses the network to expose the GUI, and the place where the local display is handled (dom0) doesn’t have access to the network.

The compromise solution

GUIVM with Xephyr

While GPU passthrough is a work-in-progress and a server-based solution is impractical, there is a compromise solution: Dom0 can keep the X Server and graphics drivers but use them to run only a single, simple application — a full-screen proxy for the GUI domain’s graphical server (an approach similar to the one used by OpenXT). We could even use VNC for this, but luckily, there is another solution based on protocols that have already been tested and implemented. Through the GUI protocol’s shared memory and a Xephyr server on the dom0 side, we can achieve something of a GUI protocol nesting.

Like many compromises, it is far from completely satisfying. The biggest problem is that it still keeps clutter (in the form of drivers and X Server) in dom0 — much less clutter given that huge libraries and desktop environments no longer need to live there, but still clutter. Many of the GPU passthrough problems are still here: Power management will require some finesse, and multi-monitor setups are still untested. (They may require us to extend some of Xephyr’s functionality.)

However, we can be pretty sure there will never be a GPU passthrough solution that works on every system. It is not just about the complexity of the problem and the multitude of GPU products available. As mentioned above, some manufacturers intentionally obstruct GPU passthrough in their graphics cards, so it is likely that some hardware configurations will never have full support. This is why the compromise solution will be available as a fallback even once more robust GPU passthrough is developed.

Surprise dependency: audio

As far as system architecture goes, audio systems are a completely separate set of processes, communication channels and tools — but this is only the theory. In practice, audio is very tightly connected to GUI in most modern systems. On Qubes 4.0, pulseaudio tools start together with GUI tools both in dom0 and in application qubes.

While audio drivers and tools are not nearly as bloated and sprawling as GUI tools, keeping them in dom0 is still suboptimal, and with the move toward a GUI domain, it will become increasingly impossible. Our first step was to see how we could move audio away from dom0: Connect it together with the GPU to the GUI domain and see what breaks. Surprisingly, few things did, and while some hard-coded “connect to dom0 for all of your audio needs” configurations needed to be updated, those changes are already done in Qubes 4.1.

This is not the final solution we would like, though; it would be best to truly decouple audio and GUI, creating a dedicated and separate audio domain.

Audio Domain

The audio domain will be a separate virtual machine that accesses and proxies audio card access. This way, we can not only remove audio from dom0 (making it smaller and less exposed) but also from the GUI domain (which, by virtue of still being quite privileged, should also have as few additional capabilities as possible).

All the complex audio subsystems, from pulseaudio (which controls volume for each domain) to audio mixers and microphones, would reside in the audio domain. It will have its own set of particular privileges. For example, due to the current audio hardware architecture, the audio domain will have access to the complete audio input and output, but isolating them in a separate domain will significantly reduce the attack surface. Keeping audio in the same domain as the keyboard or screen could, in theory, lead to eavesdropping attacks. In a separate audio domain, all those potentially vulnerable devices are isolated. Even Bluetooth audio devices (like headphones) could finally be used securely, without exposing the whole system to attack.

What will actually be in Qubes 4.1

Most of the code to handle the compromise solution is either already merged into the Qubes master branch or currently awaiting final merging, and will be available in Qubes 4.1. However, it will not be the default.

The GUI domain will be an experimental feature. We will provide a salt formula to easily configure it for anyone who wants to try it out and play around with it. Our main goal is to test everything we can test without GPU passthrough in order to reach a state in which the aforementioned more minor problems are handled. Then we’ll be ready for a GPU passthrough solution once it is developed (which is being worked on separately).

The GUI domain is currently ready for Linux-based qubes and for fullscreen HVMs, not for the Windows GUI agent. At the moment, nobody on our team is the sort of Windows wizard who could do that, so Qubes 4.1 will not have GUI domain support for the Windows GUI agent. (Coincidentally, this is the same reason that the GUI agent is not compatible with Windows 10 at the moment. If you, dear reader, would like to work with us on Windows 10 GUI agent and GUI domain support, please let us know!)

Currently, many parts of the Qubes architecture assume a singular target GUI domain (or audio domain) for every qube. There may be multiple GUI domains in the system, but each qube can only use one of them. We do not plan to change this in the foreseeable future.

Plans for the future

Introducing the GUI domain opens up a lot of interesting new possibilities. First and foremost, even in the middle-of-the-road, painful-compromise solution, dom0 will still be much, much smaller (no desktop managers or huge graphical libraries), thus it can be much more easily ported to another distribution.

A smaller dom0 could also be placed completely in RAM, making the whole disk controller and storage subsystem independent from it and possibly isolated in its own storage domain, as described in the Qubes Architecture Specification only 10 years ago. Now we’re finally moving closer to this goal!

Finally, decoupling support for VNC and other remote desktop capabilities opens the door to various server-based solutions in which Qubes can run on a remote server, and we can delegate some or all of our domains to other machines (potentially with faster hardware and more resources). This is another step toward Qubes Air.

18 March, 2020 12:00AM

March 17, 2020

hackergotchi for Cumulus Linux

Cumulus Linux

A new era for Cumulus in the Cloud

When we launched Cumulus in the Cloud (CitC) over two years ago, we saw it as a way for our customer base to test out Cumulus Linux in a safe sandboxed environment. Looking back, September 2017 feels like an eternity ago.

Since then, CitC has become a place where we’ve been able to roll out new functionality and solutions to customers and the Cumulus-curious alike — and we’ve done some really interesting things (some of our favs include integrating it with an OpenStack demo and a Mesos demo). It’s pretty much become a Cumulus technology playground.

As our CitC offering has evolved, we’ve also taken stock of the requirements from our customers and realized the direction we want to take CitC. So where is it heading? We’re excited to share that, with the launch of our production-ready automation solution last week, CitC will have a new user experience and user interface.

Out with the old:

In with the new:

This redesigned UI comes with some really great enhancements:

  • Customized external connectivity to the oob-mgmt-server to run user-customized applications

  • Default lifetime increased to 12 hours

  • NetQ native integration within the demo

  • Ability to store and wake up simulations


  • Improvements to provisioning and load times: simulations should load in less than 5 minutes!

We have some exciting future plans for CitC that will help us to better demonstrate how to leverage all the power of open networking. Stay tuned in the coming weeks as we release more posts around how the new and improved CITC is going to enable better demos and playground environments!

If you want a deeper dive into some of the details of why and how we ended up with such a unique production-ready automation solution, Justin Betz gives a great overview in his blog here.

As we make this cutover to the new redesign of CitC, we are taking any feedback via the #citc channel on our public Slack. Head over to to try our new CitC experience.

17 March, 2020 06:06PM by Rama Darbha

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: SBI Group unlocks infrastructure automation with secure, on-premises OpenStack cloud

SBI Group unlocks infrastructure automation with secure, on-premises OpenStack cloud

SBI BITS provides IT services and infrastructure to the SBI Group — Japan’s market leading financial services company group — which is made up of over 250 companies, and 6,000 employees.

To improve time to market and meet demanding client requirements, SBI BITS was looking for alternative solutions beyond bare metal servers and decided on OpenStack. After evaluating their existing suppliers, SBI BITS turned to Canonical for the external support and expertise required to move into production with an economical, flexible solution.

There was no risk of lock-in with Canonical’s OpenStack. It’s close to the upstream version, so we have the freedom to fully support internally, if we want to.

Georgi Georgiev, CIO, SBI BITS

Implemented in just a few weeks, the OpenStack cloud is cost effective from both CAPEX and OPEX perspectives, and it gives SBI BITS the confidence to deploy across the entire SBI Group.

In this case study, learn how SBI BITS:

  • Met regulatory compliance with Canonical’s OpenStack across two sites, utilising Ceph storage, deployed using MAAS and Juju.
  • Benefits from ongoing support through Ubuntu Advantage for Infrastructure, empowering the internal team to focus on the core business.
  • Plans to implement Kubernetes, with Canonical support, as a managed service across the SBI Group.

Download the case study by filling out the form below:

17 March, 2020 12:38PM

Harald Sitter: No SMB1 to Share Devices

As it recently came up I thought I should perhaps post this more publicly…

As many of you will know, SMB1, the super ancient protocol for Windows shares, shouldn’t be used anymore. It’s been deprecated since like Windows Vista and was eventually also disabled by default in both Windows 10 and Samba. As a result you are no longer able to find servers that support neither DNS-SD (aka Bonjour, aka Avahi) nor WS-Discovery. But there’s an additional problem! Many devices (e.g. NAS) produced since the release of Vista could support newer versions of SMB but, for not entirely obvious reasons, do not support WS-Discovery-based … discovery. So, you could totally find and use a device without having to resort to SMB1 if you know its IP address. But who wants to remember IP addresses?

Instead you can just have another device on the network play discovery proxy! One of the many ARM boards out there, like a Raspberry Pi, would do the trick.

To publish a device over DNS-SD (for Linux & OS X) you’ll first want to map its IP address to a local hostname and then publish a service on that hostname.

avahi-publish -a blackbox.local <ip-of-device>
avahi-publish -s -H blackbox.local SMB_ON_BLACKBOX _smb._tcp 445
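
Note that avahi-publish keeps the record alive only while it is running, so for a permanent proxy you may want to wrap each command in a service. A minimal sketch as a systemd unit — the unit name, hostname and IP address below are made-up examples, not anything the post prescribes:

```ini
# /etc/systemd/system/publish-blackbox.service (hypothetical name)
[Unit]
Description=Publish blackbox.local over DNS-SD
After=avahi-daemon.service
Requires=avahi-daemon.service

[Service]
# Replace 192.168.1.10 with the device's real address.
ExecStart=/usr/bin/avahi-publish -a blackbox.local 192.168.1.10
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

A second unit running the -s service-publishing command works the same way; enable both with systemctl enable --now.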

If you also want to publish for Windows 10 systems, you’ll additionally want to run wsdd -v -n BLACKBOX

Do note that BLACKBOX in this case can be a NetBIOS, LLMNR, or DNS-SD name (Windows 10 does support name resolution via DNS-SD these days). An unfortunate caveat of wsdd is that if you want to publish multiple devices you’ll need to set up a bridge and put the different wsdd instances on different interfaces.
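
The two avahi-publish commands above generalize to a whole list of devices. As a sketch (the hostnames, IPs and service names here are invented), a small helper can build the invocation for each device and print it, so you can inspect everything before running any of it:

```shell
# Dry run: print the avahi-publish commands for each device in a list.
# Input format per line: hostname ip port
publish_cmds() {
    while read -r host ip port; do
        echo "avahi-publish -a $host.local $ip"
        echo "avahi-publish -s -H $host.local SMB_ON_$host _smb._tcp $port"
    done
}

cmds=$(publish_cmds <<EOF
blackbox 192.168.1.10 445
whitebox 192.168.1.11 445
EOF
)
echo "$cmds"
```

Piping the printed lines to sh would actually run them; remember that each avahi-publish instance must then stay running, e.g. backgrounded or under systemd.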

17 March, 2020 10:13AM

March 16, 2020

The Fridge: Ubuntu Weekly Newsletter Issue 622

Welcome to the Ubuntu Weekly Newsletter, Issue 622 for the week of March 8 – 14, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

16 March, 2020 08:51PM