July 07, 2020

SparkyLinux

Sparky 5.12

Sparky 5.12 “Nibiru”, a quarterly point release of the stable line, is out. This release is based on Debian stable 10 “Buster”.

Changes between 5.11 and 5.12:
• system upgraded from Debian stable repos as of July 5, 2020
• Linux kernel 4.19.118
• Firefox 68.10.0esr
• Thunderbird 68.9.0
• VLC 3.0.11
• LibreOffice 6.1.5
• Otter Browser replaced by Epiphany Browser (MinimalGUI)
• added Openbox Noir to the desktop list to be installed as a choice (via MinimalGUI & MinimalCLI and APTus)
• ‘debi-tool’ replaced by ‘gdebi’
• disabled package list updating during Sparky installation via Calamares; even if you install Sparky with an active Internet connection, the Debian or Sparky servers can be temporarily down, which could stop the installation

System reinstallation is not required; if you have Sparky 5.x installed, make a full system upgrade:

sparky-upgrade

or:

sudo apt update
sudo apt full-upgrade

or via the System Upgrade tool.

Sparky 5 is available in the following flavours:
– amd64 & i686: LXQt, Xfce, MinimalGUI (Openbox) & MinimalCLI (text mode)
– armhf: Openbox & CLI (text mode)

New iso/img images of the stable line can be downloaded from the download/stable page.

07 July, 2020 10:22AM by pavroo

Purism PureOS

Librem 14 Launch FAQ

There has been a lot of excitement ever since we announced the Librem 14 last week. There have also been quite a few questions. In this post we’ll go through some of the most Frequently Asked Questions for those of you still deciding whether to pre-order and take advantage of our $300 off sale:

Q: When will the Librem 14 ship?
A: Early Q4 2020

Q: How long will the sale continue? Are there coupon codes?
A: We haven’t set an official date yet, but will make an announcement on social media and on this site at least a few days before the sale ends. The discount is automatically applied at the shop while the sale is active, no coupon codes are necessary.

Q: How many RAM slots are there?
A: Two. There is a small chance that we will have to drop back to one during final mechanical design testing, but we are confident from our early MD testing that it will work, so we are offering two RAM slots, supporting up to 64GB of RAM.

Q: What about international keyboard layouts?
A: At the moment we will only be providing the Librem 14 with the current keyboard layout. We might consider other keyboard layouts at some point in the future if there is sufficient demand to justify keeping a large stock of that layout.

Q: What is the screen brightness? How far can you open the screen lid?
A: The screen brightness is 300 cd/m2 and you can open the screen lid almost 180°.

Q: What are the video out options? What about Thunderbolt?
A: The Librem 14 will be able to drive up to two 4k displays using the HDMI2 port and the USB-C port. The USB-C port will have power delivery and DisplayPort support but will not be a Thunderbolt port.

Q: What is replaceable?
A: Like with previous Librem laptops, the RAM, disk, WiFi module and battery are replaceable. The WiFi module is the same one we’ve used in past laptops.

Q: Will there be other CPU options (such as cheaper, less powerful i5 CPUs) for the Librem 14?
A: All Librem 14s will use the i7 10710U CPU.

Q: Does each M.2 socket have its own x4 PCIe-3.0 connection?
A: Yes!

Q: Will Coreboot, PureBoot and the Librem Key work on the Librem 14 like on the Librem 13 and 15?
A: Yes.

Q: What about my very specific question about other specifications?
A: We are working to squeeze as much power and as many features as we can into the Librem 14. We will provide more detailed specifications on anything we haven’t yet put on the Librem 14 product page as final specifications are confirmed.

The post Librem 14 Launch FAQ appeared first on Purism.

07 July, 2020 09:45AM by Purism

Qubes

XSAs 317, 319, 327, and 328 do not affect the security of Qubes OS

The Xen Project has published Xen Security Advisories 317, 319, 327, and 328 (XSA-317, XSA-319, XSA-327, and XSA-328, respectively). These XSAs do not affect the security of Qubes OS, and no user action is necessary.

These XSAs have been added to the XSA Tracker:

https://www.qubes-os.org/security/xsa/#317
https://www.qubes-os.org/security/xsa/#319
https://www.qubes-os.org/security/xsa/#327
https://www.qubes-os.org/security/xsa/#328

07 July, 2020 12:00AM

QSB #058: Insufficient cache write-back under VT-d (XSA-321)

We have just published Qubes Security Bulletin (QSB) #058: Insufficient cache write-back under VT-d (XSA-321). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #058 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-058-2020.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View XSA-321 in the XSA Tracker:

https://www.qubes-os.org/security/xsa/#321



             ---===[ Qubes Security Bulletin #58 ]===---

                             2020-07-07


          Insufficient cache write-back under VT-d (XSA-321)


Summary
========

On 2020-07-07, the Xen Security Team published Xen Security Advisory
321 (CVE-2020-15565 / XSA-321) [1] with the following description:

| When page tables are shared between IOMMU and CPU, changes to them
| require flushing of both TLBs.  Furthermore, IOMMUs may be non-coherent,
| and hence prior to flushing IOMMU TLBs CPU caches also need writing
| back to memory after changes were made.  Such writing back of cached
| data was missing in particular when splitting large page mappings into
| smaller granularity ones.
| 
| A malicious guest may be able to retain read/write DMA access to
| frames returned to Xen's free pool, and later reused for another
| purpose.  Host crashes (leading to a Denial of Service) and privilege
| escalation cannot be ruled out.

A malicious HVM qube with a PCI device (such as sys-net or sys-usb in
Qubes' default configuration) can potentially compromise the whole
system.

Only Intel systems are affected. AMD systems are not affected.


Patching
=========

The specific packages that resolve the problems discussed in this
bulletin are as follows:

  For Qubes 4.0:
  - Xen packages, version 4.8.5-19

The packages are to be installed in dom0 via the Qube Manager or via
the qubes-dom0-update command as follows:

  For updates from the stable repository (not immediately available):
  $ sudo qubes-dom0-update

  For updates from the security-testing repository:
  $ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.


Credits
========

See the original Xen Security Advisory.


References
===========

[1] https://xenbits.xen.org/xsa/advisory-321.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

07 July, 2020 12:00AM

July 06, 2020

Ubuntu

Ubuntu Weekly Newsletter Issue 638

Welcome to the Ubuntu Weekly Newsletter, Issue 638 for the week of June 28 – July 4, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

06 July, 2020 11:55PM by guiverc

Purism PureOS

Getting Started with the Librem Mini

With the Librem Mini shipping, we put together this short quickstart guide so you can get to know your hardware before it arrives. Dive into how the Librem Mini protects your digital freedom as well as look at the technical specs here.

In the box, you should expect to see the Mini itself as well as a power adapter, both of which are covered by a one-year warranty. Enjoy the peace of mind that comes from expert support staff ready to ensure your Mini runs well.

PureBoot

For those that need a tamper-evident way to power-on their Mini, the PureBoot bundle secures your freedom and boot process. In addition to the Mini and power adapter, you’ll receive a Librem Key and a Librem Vault.

If you’re still thinking about buying a Librem Mini, take a look at what you can do with the hardware and order your Librem Mini now.

The post Getting Started with the Librem Mini appeared first on Purism.

06 July, 2020 11:36PM by David Hamner

Ubuntu developers

Bryan Quigley: Wrong About Signal

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed it to be quick and easy for users to get set up. Using existing phone numbers makes that easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: Where profile data lives - Your Devices]

After PIN

If you lost your phone you can use your PIN to recover some parts of your profile and other information. I am unsure if Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components to this setup and it's clear that Signal wanted to make this as secure as possible. They wanted to make this a distributed setup so they don't even need to be the only one hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: Where profile data lives - Your Devices and Signal servers]

PIN Reuse

By mandating user-chosen PINs, my guess is that the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start with, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get Mandate to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure, but their system to unregister when you uninstall has been down for at least a week, likely longer. Without that, when you uninstall Signal it means:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive

Conclusion

In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. There is a possibility that the push for mandatory cloud-based PINs, despite all of the pushback, is that Signal knows of active attacks that these PINs would protect against. It would likely be related to using phone numbers.

I'm trying out the Riot Matrix client. I'm not actively encouraging others to join me, but just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments

06 July, 2020 08:18PM

Ubuntu Blog: Canonical Developer Advocate Named Microsoft MVP

We would like to congratulate Hayden Barnes, a Developer Advocate at Canonical for Ubuntu on WSL, who was awarded MVP (Most Valuable Professional) by Microsoft for 2020-2021 for his contributions to the Windows Subsystem for Linux (WSL) community.

Hayden Barnes, pictured above, was a curious innovator from an early age.

The Microsoft MVP Award is awarded annually by Microsoft to technology experts who passionately share their knowledge with the community. This award is a wonderful recognition for how he has empowered WSL users and been a true innovator.

“For more than two decades, the Microsoft MVP Award is our way of saying ‘Thanks!’ to outstanding community leaders. The contributions MVPs make to the community, ranging from speaking engagements, to social media posts, to writing books, to helping others in online communities, have incredible impact.”

– Microsoft MVP Program

Hayden is the founder of WSLConf, the first community conference dedicated to WSL, held March 10-11, 2020. Hayden joined Canonical in 2019 and leads Ubuntu on WSL development and advocacy. He regularly blogs, speaks at conferences, assists WSL users on social media and GitHub, appears on podcasts, and finds new and interesting uses for WSL. Hayden is also writing a book on advanced WSL to be published by Apress in 2021.

The WSL team at Canonical has since grown to include myself, Sohini Roy, Product Manager on Ubuntu on WSL, and Patrick Wu, Lead Engineer on Ubuntu on WSL. In collaboration with the Canonical Public Cloud and Field Engineering teams the WSL team provides the most popular Linux distribution for WSL for individual developers and enterprise clients.

06 July, 2020 08:11PM

Clonezilla live

Stable Clonezilla live 2.6.7-28 Released

This release of Clonezilla live (2.6.7-28) includes major enhancements and bug fixes.
ENHANCEMENTS and CHANGES from 2.6.6-15

  • The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2020/Jun/30).
  • Linux kernel was updated to 5.7.6-1.
  • ocs-iso, ocs-live-dev: sync syslinux-related files when copying syslinux exec files.
  • When creating a recovery iso/zip file in the Clonezilla live environment, the syslinux files already available there are used first so that version mismatches can be avoided. Ref: https://sourceforge.net/p/clonezilla/support-requests/127/
  • Move grub-header.cfg from bootx64.efi to grub.cfg so that it's more flexible.
  • To avoid conflict with the patched grub in CentOS/Fedora, for the GRUB EFI NB MAC/IP config style, the netboot file is now named like grub.cfg-drbl-00:50:56:01:01:01 and grub.cfg-drbl-192.168.177.2, not grub.cfg-01-* anymore.
  • Added xen-tools.
  • Partclone was updated to 0.3.14. The XFS code was updated to 4.20.0.
  • The exfat-fuse package was removed since the kernel now has a module for it.
  • A better mechanism to deal with linuxefi/initrdefi or linux/initrd in the grub config was added.

BUG FIXES

Steven

06 July, 2020 02:58PM by Steven Shiau

July 05, 2020

SolydXK

SolydXK 10 point release 10.4

Highlights

  • Based on Debian Buster 10.4 release with the latest kernel version 4.19.
  • The usr directories have been merged: the /{bin,sbin,lib}/ directories are now symbolic links to /usr/{bin,sbin,lib}/.
    More info on the subject: http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge
  • Many bugs were resolved and we changed the SolydXK Firefox settings even further to improve user privacy. This is done in the firefox-solydxk-adjustments package which can be purged if you don't need it.

Enthusiast's Editions

Work is still being done on the Enthusiast's Editions. They will be released at a later date.

Downloads

You can download the ISOs from our community site: https://solydxk.com/downloads

Do not forget to verify your download before you use it.

Enjoy this new point release!

05 July, 2020 01:00PM

Ubuntu developers

Podcast Ubuntu Portugal: Ep 97 – Pente

An episode balanced by the natural imbalance that the two hosts on duty have got you used to. On the way to the hundredth episode, enjoy yet another PUP adventure.

You already know: listen, subscribe and share!

  • https://libretrend.com/specs/librebox
  • https://www.humblebundle.com/books/technology-essentials-for-business-manning-publications-books?partner=PUP
  • https://www.humblebundle.com/books/circuits-electronics-morgan-claypool-books?partner=PUP

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, and it is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing other kinds of use; contact us for validation and authorization.

05 July, 2020 10:31AM

David Tomaschik: Security 101: Encryption, Hashing, and Encoding

Encryption, Hashing, and Encoding are commonly confused by those new to the information security field. I see them confused even by experienced software engineers, by developers, and by new hackers. It’s really important to understand the differences – not just for semantics, but because the actual uses of them are vastly different.

I do not claim to be the first to try to clarify this distinction, but there’s still a lack of clarity, and I wanted to include some exercises for you to give a try. I’m a very hands-on person myself, so I’m hoping the hands-on examples are useful.

Encoding

Encoding is a way of transforming some data from one representation to another in a manner that can be reversed. This encoding can be used to make data pass through interfaces that restrict byte values (e.g., character sets), or allow data to be printed, or other transformations that allow data to be consumed by another system. Some of the most commonly known encodings include hexadecimal, Base 64, and URL Encoding.

Reversing encoding results in the exact input given (i.e., is lossless), and can be done deterministically and requires no information other than the data itself. Lossless compression can be considered encoding in any format that results in an output that is smaller than the input.

While encoding may make it so that the data is not trivially recognizable by a human, it offers no security properties whatsoever. It does not protect data against unauthorized access, it does not make it difficult to be modified, and it does not hide its meaning.

Base 64 encoding is commonly used to make arbitrary binary data pass through systems only intended to accept ASCII characters. Specifically, it uses 64 characters (hence the name Base 64) to represent data, by encoding each 6 bits of raw data as a single output character. Consequently, the output is approximately 133% of the size of the input. The default character set (as defined in RFC 4648) includes the upper and lower case letters of the English alphabet, the digits 0-9, and + and /. The spec also defines a “URL safe” encoding where the extra characters are - and _.
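
As a quick sanity check of that roughly 133% figure, you can encode a block of random bytes and count the output characters (a minimal sketch; 300 bytes is used because it is a multiple of 3, so no padding is added):

$ head -c 300 /dev/urandom > sample.bin
$ base64 sample.bin | tr -d '\n' | wc -c
400

Three hundred input bytes become four hundred output characters, i.e., a 4:3 expansion.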

An example of base 64 encoding, including non-printable characters, using the base64 command line tool (-d is given to decode):

$ echo -e 'Hello\n\tWorld\n\t\t!!!' | base64
SGVsbG8KCVdvcmxkCgkJISEhCg==
$ echo 'SGVsbG8KCVdvcmxkCgkJISEhCg==' | base64 -d
Hello
        World
                !!!

Notice that the tabs and newlines become encoded (along with the other characters) in a format that uses only printable characters and could easily be included in an email, webpage, or almost any other protocol that supports text. It is for this reason that base 64 is commonly used for things like HTTP Headers (such as the Authorization header), tokens in URLs, and more.

Also note that nothing other than the encoded data is needed to decode it. There’s no key, no password, no secret involved, and it’s completely reversible. This demonstrates the lack of any security property offered by encoding.

Encryption

Encryption involves the application of a code or cipher to input plaintext to render it into “ciphertext”. Decryption is the reversal of that process, converting “ciphertext” into “plaintext”. All secure ciphers involve the use of a “key” that is required to encrypt or decrypt. Very early ciphers (such as the Caesar cipher or Vigenère cipher) are not at all secure against modern techniques. (Actually, they can usually even be brute forced by hand.)

Modern ciphers are designed in accordance with “Kerckhoffs’s principle”, which refers to the idea that a properly designed cipher assumes your opponent has the cipher algorithm (but not the key):

It should not require secrecy, and it should not be a problem if it falls into enemy hands;

Encryption is intended to provide confidentiality (and sometimes integrity) for data at rest or in transit. By encrypting data, you render it unusable to anyone who does not possess the key. (Note that if your key is weak, someone can perform a dictionary or brute force attack to retrieve your key.) It is a two way process, so it’s only suitable when you want to provide confidentiality but still be able to retrieve the plaintext.

I’ll do a future Security 101 post on the correct applications of cryptography, so I won’t currently go into anything beyond saying that if you roll your own crypto, you will do it wrong. Even cryptosystems designed by professional cryptographers undergo peer review and multiple revisions to arrive at something secure. Do not roll your own crypto.

Using the OpenSSL command line tool to encrypt data using the AES-256 cipher with the password foobarbaz:

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz | hexdump -C
00000000  53 61 6c 74 65 64 5f 5f  08 65 ef 7e 17 31 5d 31  |Salted__.e.~.1]1|
00000010  55 3c d3 b7 8b a5 47 79  1d 72 16 ab fe 5a 0e 62  |U<....Gy.r...Z.b|
00000020

I performed a hexdump of the data because openssl would output the raw bytes, and many of those bytes are non-printable sequences that would make no sense (or corrupt my terminal). Note that if you run the exact same command twice, the output is different!

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz | hexdump -C
00000000  53 61 6c 74 65 64 5f 5f  d4 36 43 bf de 1c 9c 1e  |Salted__.6C.....|
00000010  e4 d4 72 24 97 d8 da 95  02 f5 3e 3f 60 a4 0a aa  |..r$......>?`...|
00000020

This is because the function that converts a password to an encryption key incorporates a random salt and the encryption itself incorporates a random “initialization vector.” Consequently, you can’t compare two encrypted outputs to confirm that the underlying plaintext is the same – which also means an attacker can’t do that either!

The OpenSSL command line tool can also base 64 encode the output. Note that this is not part of the security of your output, this is just for the reasons discussed above – that the encoded output can be handled more easily through tools expecting printable output. Let’s use that to round-trip some encrypted data:

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz -base64
U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=
$ echo 'U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=' | openssl enc -d -aes-256-cbc -pass pass:foobarbaz -base64
Hello world

What if we get the password wrong? Say, instead of foobarbaz I provide bazfoobar:

$ echo 'U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=' | openssl enc -d -aes-256-cbc -pass pass:bazfoobar -base64
bad decrypt
140459245114624:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:../crypto/evp/evp_enc.c:583:

While the error may be a little cryptic, it’s clear that this is not able to decrypt with the wrong password, as we expect.

Hashing

Hashing is a one way process that converts some amount of input to a fixed-size output. Cryptographic hashes are those that do so in a manner that is computationally infeasible to invert (i.e., to get the input back from the output). Consequently, cryptographic hashes are sometimes referred to as “one way functions” or “trapdoor functions”. Non-cryptographic hashes can be used as basic checksums or for hash tables in memory.

Examples of cryptographic hashes include MD5 (broken), SHA-1 (broken), SHA-256/384/512, and the SHA-3 family of functions. Do not use anything based on MD5 or SHA-1 for any new applications.

There are three main security properties of a cryptographic hash:

  1. Collision resistance is the inability to find two different inputs that give the same output. If a hash is not collision resistant, you can produce two documents that would both have the same hash value (used in digital signatures). The Shattered Attack was the first Proof of Concept for a collision attack on SHA-1. Both inputs can be freely chosen by the attacker.
  2. Preimage resistance is the inability to “invert” or “reverse” the hash by finding the input to the hash function that produced that hash value. For example, if I tell you I have a SHA-256 hash of 68b1282b91de2c054c36629cb8dd447f12f096d3e3c587978dc2248444633483, it should be computationally infeasible to find the input (“The quick brown fox jumped over the lazy dog.”).
  3. 2nd preimage resistance is the inability to find a 2nd preimage: that is, a 2nd input that gives the same output. In contrast to the collision attack, the attacker only gets to choose one of the inputs here – the other is fixed. (Imagine someone gives you a copy of a file, and you want to modify it but have the same hash as the file they gave you.)

Hashing is commonly used in digital signatures (as a way of condensing the data being signed, since many public key crypto algorithms are limited in the amount of data they can handle). Hashes are also used for storing passwords to authenticate users.

Note that, although preimage resistance may be present in the hashing function, this is defined for an arbitrary input. When hashing input from a user, the input space may be sufficiently small that an attacker can try inputs in the same function and check if the result is the same. A brute force attack occurs when all inputs in a certain range are tried. For example, if you know that the hash is of a 9 digit national identifier number (i.e., a Social Security Number), you can try all possible 9 digit numbers in the hash to find the input that matches the hash value you have. Alternatively, a dictionary attack can be tried where the attacker tries a dictionary of common inputs to the hash function and, again, compares the outputs to the hashes they have.
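
As a toy illustration of such a brute force attack, here is a minimal sketch using a 4-digit PIN rather than a 9-digit number so that it finishes in a couple of seconds (the target value and loop are purely illustrative):

$ target=$(echo -n '1234' | sha256sum | cut -d' ' -f1)
$ for pin in $(seq -w 0 9999); do
    if [ "$(echo -n "$pin" | sha256sum | cut -d' ' -f1)" = "$target" ]; then
      echo "recovered input: $pin"
      break
    fi
  done
recovered input: 1234

Note that the hash was never "reversed": the attacker simply tried every candidate input and compared outputs, which is why a small input space offers little protection no matter how strong the hash function is.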

You’ll often see hashes encoded in hexadecimal, though base 64 is not too uncommon, especially with longer hash values. The output of the hash function itself is merely a set of bytes, so the encoding is just for convenience. Consider the command line tools for common hashes:

$ echo -n 'foo bar baz' | md5sum
ab07acbb1e496801937adfa772424bf7  -
$ echo -n 'foo bar baz' | sha1sum
c7567e8b39e2428e38bf9c9226ac68de4c67dc39  -
$ echo -n 'foo bar baz' | sha256sum
dbd318c1c462aee872f41109a4dfd3048871a03dedd0fe0e757ced57dad6f2d7  -

Even a tiny change in the input results in a completely different output:

$ echo -n 'foo bar baz' | sha256sum
dbd318c1c462aee872f41109a4dfd3048871a03dedd0fe0e757ced57dad6f2d7  -
$ echo -n 'boo bar baz' | sha256sum
bd62b6e542410525d2c0d250c4f69b64e42e57e356e5260b4892afef8eacdfd3  -

Salted & Strengthened Hashing

There are special properties that are desirable when using hashes to store passwords for user authentication.

  1. It should not be possible to tell if two users have the same password.
  2. It should not be possible for an attacker to precompute a large dictionary of hashes of common passwords to lookup password hashes from a leak/breach. (Attackers would build lookup tables or more sophisticated structures called “rainbow tables”, enabling them to quickly crack hashes.)
  3. An attacker should have to attack the hashes for each user separately instead of being able to attack all at once.
  4. It should be relatively slow to perform brute force and dictionary attacks against the hashes.

“Salting” is a process used to accomplish the first three goals. A random value, called the “salt”, is added to each password when it is being hashed. This way, two cases where the password is the same result in different hashes. This makes precomputing all hash/password combinations prohibitively expensive, and two users with the same password (or a user who uses the same password on two sites) end up with different hashes. Obviously, it’s necessary to include the same salt value when validating the hash.

Sometimes you will see a password hash like $1$4zucQGVU$tx2SvCtH7SYaiH.4ASzNt.. The $ characters separate the hash into 3 fields. The first, 1, indicates the hash type in use. The next, 4zucQGVU is the salt for this hash, and finally, tx2SvCtH7SYaiH.4ASzNt. is the hash itself. Storing it like this allows the salt to be easily retrieved to compute a matching hash when the password is input.
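
You can experiment with this crypt-style format yourself using the openssl passwd tool. This is a minimal sketch: -1 produces the MD5-crypt $1$ format shown above, and -6 produces SHA-512 crypt on OpenSSL 1.1.1 or newer. The hash outputs are omitted here since they depend on the inputs, but running the commands shows that the same password with different salts gives completely different strings, while repeating a salt reproduces the same hash, which is exactly how a stored hash is re-checked at login:

$ openssl passwd -1 -salt saltone 'correct horse battery staple'
$ openssl passwd -1 -salt salttwo 'correct horse battery staple'   # same password, different salt: different hash
$ openssl passwd -1 -salt saltone 'correct horse battery staple'   # same salt again: same hash as the first command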

The fourth property can be achieved by making the hashing function itself slow, using large amounts of memory, or by repeatedly hashing the password (or some combination thereof). This is necessary because the base hashing functions are fast, even for cryptographically secure hashes. For example, the password cracking program hashcat can compute 2.8 billion plain SHA-256 hashes per second on a consumer graphics card. On the other hand, the intentionally hard function scrypt only hashes at 435 thousand per second. This is more than 6000 times slower. Both are a tiny delay to a single user logging in, but the latter is a massive slowdown to someone hoping to crack a dump of password hashes from a database.

Common Use Cases

To store passwords for user authentication, you almost always want a memory- and cpu-hard algorithm. This makes it difficult to try large quantities of passwords, whether in brute force or a dictionary attack. The current state of the art is the Argon2 function that was the winner of the Password Hashing Competition. (Which, while styled after a NIST process, was not run by NIST but by a group of independent cryptographers.) If, for some reason, you cannot use that, you should consider scrypt, bcrypt, or at least pbkdf2 with a very high iteration count (e.g., 100000+). By now, however, almost all platforms have support for Argon2 available as an open-source library, so you should generally use it.

To protect data from being inspected, you want to encrypt it. Use a high-level crypto library like NaCl or libsodium. (I’ll be expanding on this in a future post.) You will need strong keys (ideally, randomly generated) and will need to keep those keys secret to avoid the underlying data from being exposed. One interesting application of encryption is the ability to virtually delete a collection of data by destroying the key – this is often done for offline/cold backups, for example.

To create an opaque identifier for some data you want to hash it. For example, it’s fairly common to handle uploaded files by hashing the file and storing it under a filename derived from the hash of the file contents. This provides a predictable filename format and length, and prevents two files from ending up with the same filename on the server. (Unless they have the exact same contents, but then the duplication does not matter.) This can also be used for sharding: because the values are uniformly distributed with a good hashing function, you can do things like using the first byte of the hash to identify a storage repository that is distributed.
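
A minimal shell sketch of that pattern (the file name, directory layout and two-character shard scheme here are just placeholders for illustration):

$ h=$(sha256sum upload.dat | cut -d' ' -f1)
$ mkdir -p "store/${h:0:2}"             # the first hash byte (two hex characters) picks the shard
$ cp upload.dat "store/${h:0:2}/${h}"   # the file is stored under its own content hash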

To allow binary data to be treated like plain text, you can use encoding. You should not use encoding for any security purpose. (And yes, I feel this point deserves repeating numerous times.)

Misconceptions

There are big misconceptions that I see repeated, most often by people outside the hacking/security industry space. A lot of these seem to be over the proper use of these technologies and confusion over when to select one.

Encoding is Not Encryption

For whatever reason, I see lots of references to “base64 encryption.” (In fact, there are currently 20,000 Google results for that!) As I discussed under encoding, base64 (and other encodings) do not do encryption – they offer no confidentiality to the underlying data, and do not protect you in any way. Even though the meaning of the data may not be immediately apparent, it can still be recovered with little effort and with no key or password required. As one article puts it, Base64 encryption is a lie.

If you think you need some kind of security, some kind of encryption, something to be kept secret, do not look to encodings for this! Use proper encryption with a well-developed algorithm and mode of operation, and preferably use a library or tool that completely abstracts this away from you.

Encryption is Not Hashing

There are somewhere upwards of half a million webpages talking about password encryption. Unfortunately, the only time passwords should be encrypted is when they will need to be retrieved in plaintext, such as in a password manager. When using passwords for authentication, you should store them as a strongly salted hash to avoid attackers being able to retrieve them in the case of database access.

Encryption is a two-way process, hashing is one way. To validate that the server and the user have the same password, we merely apply the same transformation (the same hash with the same salt) to the input password and compare the outputs. We do not decrypt the stored password and compare the plaintext.

Conclusion

I hope this has been somewhat useful in dispelling some of the confusion between encryption, hashing, and encoding. Please let me know if you have feedback.

05 July, 2020 07:00AM

July 03, 2020

Purism PureOS

Librem Mini Shipping with Active Cooling

There’s nothing like making a public announcement to ensure that a situation will change. That’s certainly been true in the case of our Librem Mini. Just over a week ago we announced the Librem Mini was ready to ship and highlighted one issue we intended to solve with a future software update:

If you ordered a Librem Mini, you will receive an email confirming your order status and shipping information. As with any newly brought to market product, the Librem Mini running PureOS will have software updates to apply as we continue to refine the firmware. One forthcoming software update that we want to bring to your attention concerns the fan speed control, as currently the CPU is passively cooled and may throttle down under heavy load. Full active cooling will be coming in a firmware update so we highly recommend following our published announcements. If you are uncomfortable with applying a firmware update using our coreboot firmware update tool, you also have the option for Purism to hold the order until we release that software update. If you desire that, let us know when we contact you to confirm shipping information, otherwise you will be enjoying your Librem Mini soon!

Well it turns out that while we were contacting all of the Mini customers to determine whether they wanted their Mini immediately, or whether they wanted to wait for a firmware update, we resolved the fan speed control issue! As we ship out all of the Librem Mini orders, they will all have fully-updated firmware and active cooling.


Thank you everyone for your patience and if you were waiting for active cooling to place your own Librem Mini order, order now!

The post Librem Mini Shipping with Active Cooling appeared first on Purism.

03 July, 2020 05:44PM by Purism

SparkyLinux

NsCDE

There is a new desktop available for Sparkers: NsCDE

What is NsCDE?

Not so Common Desktop Environment (NsCDE) is a retro but powerful (kind of) UNIX desktop environment which resembles the CDE look (and partially feel) but with a more powerful and flexible framework beneath the surface, more suited to 21st-century Unix-like and Linux systems and user requirements than the original CDE.
NsCDE can be considered a heavyweight FVWM theme on steroids, but combined with a couple of other free software components, custom FVWM applications and a lot of configuration, NsCDE can be considered a lightweight hybrid desktop environment.

Installation

sudo apt update
sudo apt install nscde-desktop

or via APTus-> Desktop tab -> NsCDE icon

It can be installed on all Sparky editions, meaning Sparky 4/5/6 amd64/i386/armhf.

It is release candidate 21; it builds and works OK on amd64, but the workspace switcher doesn’t load on i386 and armhf so far.

NsCDE

Copyright: “Hegel3DReloaded”
License: GNU GPL 3
GitHub: github.com/NsCDE/NsCDE

 

03 July, 2020 03:36PM by pavroo

Ubuntu developers

Ubuntu Blog: Feeling at home in a LXD container

At Home in a LXD container.

In this post, we will see how we can containerize our home in LXD simply managing our personal configuration files – a.k.a. dotfiles. Yeah dotfiles, named after their common ~/.my_config form, you know, all of those small configuration files lying across our $HOME. In other words, how one can change the house while keeping the furniture and decoration in place.

Because there is no place like $HOME

Since we are spending so much time on our machine, be it for work or for fun (maybe both?), we love to tweak our environment to our taste and needs. Change the UX, create some aliases, use a dark theme and whatnot. Most, if not all, of these are saved in some configuration files somewhere. And since we spent so much time making a home for ourselves, wouldn't it be great if we could quickly set it up again on a different computer? This is precisely what we are going to see here.

In a previous post, we saw how to set up a LXD container for developing with ROS (now also available with an accompanying video). Here we will see how we can quickly set up our personal development environment in the said containers and enable a tidy and seamless workflow.

Picking a dotfiles manager

A dotfiles manager is essentially a piece of software that takes care of your configuration files for you. What does that mean? Well, I would personally consider the bare minimum features to include versioning, preferably through git, and of course the installation of the files to their correct location, as they are typically expected to be found at a given path. But we may have different expectations of a dotfiles manager based on our needs and habits. Looking on the web for such a manager, you may encounter many of them – find a whole list of them here. Most of them work off the same principles, being a small set of utils to help manage our dotfiles.

In this post we settled on using chezmoi. It is vastly popular, open source and available as a snap, making it usable virtually anywhere. Some other nice features include being git-based, being CLI-based, and supporting multiple dotfiles repos. It is chezmoi that will set up our home in LXD, as we will see.

You may want to take a look at the aforementioned list of managers and pick one that best answers your needs and expectations. Note that many of them are interchangeable, so you can get started with chezmoi for now and later move to another one as you see fit.

Alright so how do we get started?

Creating a dotfiles repository

Before creating our dotfiles backup, we need to install the manager. To install chezmoi, nothing easier, simply use the command:

$ snap install chezmoi --classic

And we are done. 

We can now create our git-based dotfiles repository and start filling it up. To create the repository managed by chezmoi, simply enter:

$ chezmoi init

This command creates an empty repository in ~/.local/share/chezmoi.

To backup a dotfile and populate our repository, we make use of the add command:

$ chezmoi add .bashrc

The command copies the file in the repository managed by chezmoi at ~/.local/share/chezmoi/dot_bashrc.

Now all we have to do is to commit our change and save our repository online. The command,

$ chezmoi cd

places us directly in the git repository where we can use the usual git commands,

$ git add dot_bashrc
$ git commit -m 'add .bashrc'

Finally we can save our dotfiles online, e.g. on GitHub,

$ git remote add origin git@github.com:user/dotfiles.git
$ git push -u origin master

We should now repeat this operation for each and every configuration file we would like to save. Note that if you have secrets in your dotfiles, say some credentials, SSH keys and such, chezmoi offers many options for your secrets to remain as such.

With our dotfiles safely backed up online, we will now see how we can quickly set up our environment on a new machine.

Quickly setting up a new machine

Whether you bought a new computer or nuked your old hardware with a fresh new distro, you will now witness the true power of chezmoi.

To install our cosy environment on a fresh distro, all we have to do is,

1. Install chezmoi 

$ snap install chezmoi --classic

2. Import our dotfiles

$ chezmoi init https://github.com/username/dotfiles.git

3. Let chezmoi work its magic

$ chezmoi apply

Voila! Home sweet home.

Of course this post is only a quick overview of a given dotfiles manager. I won’t detail here all of its options and features and let you discover them for yourself in its documentation page.

At this point you may be wondering if this is really worth it. You probably install a fresh distro every 2 years or so and completely change hardware even less frequently. So why bother? Well, fellow developer, aren’t you using containers? If not, you definitely should consider it and check the aforementioned post where we detail a development workflow for ROS in LXD.

A disposable tiny home in LXD

If you are like me, trying your best to keep a tidy laptop while messing around with plenty of different software toys, then you may have had one of these days during which you spawn several containers. Containers in which we don’t have our sweet bash aliases; on our very own machine! But thanks to chezmoi we can now start up a fresh container and have it mimic $HOME in a matter of seconds. Let me demonstrate it for you with a LXD container,

$ lxc launch ubuntu:20.04 tmp-20-04
$ lxc profile add chezmoi tmp-20-04
$ lxc ubuntu tmp-20-04

Ahhh, what a cozy tiny disposable home in LXD!

That seemed too easy to you? Alright, I confess, I used some of my own LXD profiles and aliases here. But isn't that what this whole post is about? In case you want to use the same kind of shortcuts, allow me to point you once more to our LXD blog post. Anyway, note that the above 3 lines really boil down to:

$ lxc launch ubuntu:20.04 tmp-20-04
$ lxc exec tmp-20-04 -- sudo --login --user ubuntu
...
$ snap install chezmoi --classic
$ chezmoi init https://github.com/username/dotfiles.git
$ chezmoi apply
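
If you would like to build a similar shortcut yourself, here is one possible sketch of such a profile. This is purely illustrative and not the author's actual setup: the profile name, the dotfiles URL and the cloud-init contents are assumptions. It uses cloud-init in the Ubuntu cloud image to bootstrap chezmoi on the container's first boot:

$ lxc profile create chezmoi
$ lxc profile edit chezmoi <<'EOF'
config:
  user.user-data: |
    #cloud-config
    packages:
      - git
    runcmd:
      - snap install chezmoi --classic
      - sudo -u ubuntu -H sh -c '/snap/bin/chezmoi init https://github.com/username/dotfiles.git && /snap/bin/chezmoi apply'
description: Bootstrap chezmoi-managed dotfiles on first boot
EOF
$ lxc launch ubuntu:20.04 tmp-20-04 -p default -p chezmoi

Attaching the profile at launch time matters here, since cloud-init only runs on the container's first boot.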

With this example, I hope that I managed to offer you a glimpse at the power of chezmoi (and more generally of dotfiles managers), especially when coupled to a containerized workflow.

Before closing, let me give you one last tip. Because we made our containerized workflow rather seamless, it can be easy to lose track of which shell is in a container and which is not. To differentiate them, add the following to your .bashrc:

# /dev/lxd/sock only exists inside LXD containers, so its presence tells us we are inside one
function prompt_lxc_header()
{
  if [ -e /dev/lxd/sock ]; then
    echo "[LXC] ";
  fi
}
# Prepend the marker to the existing prompt
PS1='$(prompt_lxc_header)'$PS1

When used in a container, a shell prompt in the said container will now look something like:

[LXC] ubuntu@tmp-20-04:~$

No more confusion.

Photo by Albert13377 on Wikimedia Commons.

03 July, 2020 03:04PM

VyOS

VyOS News Digest June

June was slightly hotter than usual this year, but heatwaves were not the only thing that caught our attention. We gathered fresh releases, long-awaited decisions, and unexpected resolutions from leading companies, along with strategically important VyOS solutions. Read on for the monthly news digest about development and technology in the global IT industry.

03 July, 2020 03:00PM by Yuriy Andamasov (yuriy@sentrium.io)

Grml developers

Michael Prokop: Grml 2020.06 – Codename Ausgehfuahangl

We did it again™, at the end of June we released Grml 2020.06, codename Ausgehfuahangl. This Grml release (a Linux live system for system administrators) is based on Debian/testing (AKA bullseye) and provides current software packages as of June, incorporates up to date hardware support and fixes known issues from previous Grml releases.

I am especially fond of our cloud-init and qemu-guest-agent integration, which makes usage and automation in virtual environments like Proxmox VE much more comfortable.

Once the Qemu Guest Agent setting is enabled in the VM options (also see the Proxmox wiki), you’ll see IP address information in the VM summary:

Screenshot of qemu guest agent integration

Using a cloud-init drive allows using an SSH key for login as user "grml", and you can control network settings as well:

Screenshot of cloud-init integration

It was fun to focus and work on this new Grml release together with Darsha, and we hope you enjoy the new Grml release as much as we do!

03 July, 2020 02:32PM

Pardus

Pardus 19.3 Released

Pardus 19.3, which continues to be developed by TÜBİTAK ULAKBİM, has been released. Pardus 19.3 is the third point release of the Pardus 19 family.

You can download the newest Pardus right now, and you can check the release notes for detailed information about this release.

03 July, 2020 01:48PM by wwwtestadmin

Ubuntu developers

Ubuntu Blog: A snap confined shell based on Mir: Mircade

Mircade: An example snap confined user shell

A snap confined shell running on the desktop

There are various scenarios and reasons for packaging a Snap confined shell and a selection of applications together in a confined environment. You might have applications that work well together for a particular task; or, you may want to offer a number of alternative applications and have them available on a wide range of target platforms. Mircade illustrates this approach.

Contents of the Mircade snap

The user shell

A user shell is a program that allows the user to interact with the computer. It could be as simple as a command-line shell or as complex as a full desktop environment.

For Mircade I use a modified example Mir shell (egmde) I’ve presented in other writings. This “mircade” version of egmde allows the user to select one of a number of programs and run it all within the Snap confined environment.

  egmde:
    source: https://github.com/AlanGriffiths/egmde.git
    source-branch: mircade
    plugin: cmake-with-ppa
    ppa: mir-team/release
    build-packages:
      - pkg-config
      - libmiral-dev
      - libboost-filesystem-dev
      - libfreetype6-dev
      - libwayland-dev
      - libxkbcommon-dev
      - g++
    stage-packages:
      - try: [libmiral4]
      - else: [libmiral3]
      - mir-graphics-drivers-desktop
      - fonts-freefont-ttf
    stage:
      - -usr/share/wayland-sessions/egmde.desktop
      - -bin/egmde-launch

The applications

A successful “bundled” snap is really down to choosing a compelling set of applications.

I’ve taken a bunch of games from the Ubuntu archive and bundled them into the snap. That choice is only an illustration; there’s no need to choose games, or programs from the archive.

  neverball:
    plugin: nil
    stage-packages:
      - neverball

In this example, most of the applications use SDL2 and all use Wayland.

  sdl2:
    plugin: nil
    stage-packages:
      - libsdl2-2.0-0
      - libsdl2-image-2.0-0
      - libsdl2-mixer-2.0-0
      - libsdl2-net-2.0-0

I’ve not covered other toolkits in the Mircade example. In spite of this, applications based on GTK, Qt and X11 can also be packaged.

The target platforms

Running on Ubuntu Core

There are a lot of advantages to running Ubuntu Core on IoT devices, and Mircade shows how a bundle of applications can be delivered for this. When installed on Ubuntu Core, Mircade connects to a Wayland server (such as mir-kiosk).

Running on Classic Linux

On Ubuntu Classic there are four ways that Mircade can run; the first three are:

  1. Connecting to an X11 compositor as a window on a traditional desktop
  2. Connecting to a Wayland compositor as a fullscreen window on a traditional desktop
  3. Running directly on the hardware as a graphical login session

For each of these the corresponding interface needs to be connected:

  • Connecting to an X11 compositor: snap connect mircade:x11
  • Connecting to a Wayland compositor: snap connect mircade:wayland
  • Running directly on the hardware: snap connect mircade:login-session-control

The fourth option, typically on an Ubuntu Server installation, is to run in the same way as on Ubuntu Core, using a mir-kiosk daemon to access the hardware.

Security: Snap confinement

What does ‘confinement’ mean?

Historically, there’s been a high cost of entry to application development and distribution, meaning that developers and packagers have had to establish a reputation and trust. Some people may have suspicions of what applications like Microsoft Office or Google Chrome do. Regardless, there’s no realistic fear that Microsoft or Google will steal money from you or hold information on your computer for ransom.

Remember, running an application on a computer gives it, and by extension the developers of that application, access to your computer. Without precautions, it gets access to everything you can access. It is not just running an application that requires caution: before running an application you have to install it. Traditional packaging solutions, such as .deb and .rpm, give the packagers root access to your system.

Over the years the barrier to entry in application development has become low. Writing an app and getting it into an “app store” has never been easier: application development and packaging is no longer the preserve of a few well-known organizations. The basis for trust that used to exist has been eroded.

Computers are being trusted with more and more sensitive information. We carry pocket computers with us everywhere and trust them to hold personal information including access to bank accounts, credit cards and medical details.

Taking precautions to mitigate the risk posed by untrusted code is where app confinement comes into play. Confining the app at the operating system level restricts its access to your computer to only those things that are needed for it to work.

How does ‘app confinement’ work?

Software developers know that something that sounds simple in the user domain can involve some serious work in the solution domain. App confinement is no exception: we need to consider what the operating system needs to do to confine an app; how that can be controlled; how the user can review and configure the confinement. Mircade provides an example of applications being selected and run with restricted access to the system.

Kernel and userspace

The code running on a computer can be divided into ‘kernel’ and ‘userspace’. The kernel is that part of the operating system that mediates all interaction with hardware and between processes. The userspace is everything that runs within a normal app.

If we write a “hello world” application, the code we write runs in userspace. And so does the output function from the library we use (maybe operator<<() , or printf() or …) but at some point it writes to the console and at that point the kernel takes over and, eventually (there may well be further userspace and kernel code executed), some pixels are lit on the screen.

Without the kernel code can’t access your files, it can’t access the internet, it can’t access your keyboard, mouse, touchpad, interact with other processes, etc.

That makes the interface between userspace and kernel a useful place to restrict the activities of a program.
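
A quick way to watch that boundary in action (assuming strace is available on your system) is to trace the system calls made by a trivial program:

$ strace -e trace=write /bin/echo 'hello world'
write(1, "hello world\n", 12)           = 12
+++ exited with 0 +++

Everything before the write() call happens in userspace; the moment the program wants the text to actually reach your terminal, it has to ask the kernel.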

Snaps and AppArmor

AppArmor is an implementation detail of ‘snap confinement’, which is a component of Canonical’s ‘Snap’ packaging format. Snaps make use of lists of AppArmor rules called ‘interfaces’, each of which covers identifiable capabilities. These interfaces are reviewed by the Snap developers and can be enabled (or disabled) by the end user.

These are the permissions used by Mircade:

$ snap connections mircade
Interface              Plug                           Slot             Notes
audio-playback         mircade:audio-playback         :audio-playback  -
hardware-observe       mircade:hardware-observe       -                manual
login-session-control  mircade:login-session-control  -                manual
network-bind           mircade:network-bind           :network-bind    -
opengl                 mircade:opengl                 :opengl          -
wayland                mircade:wayland                :wayland         manual
x11                    mircade:x11                    :x11             -

Conclusion

The Mircade snap confined shell demonstrates how it is possible to take some applications, a user shell and Snap technology and use them to deliver a portable, secure package to multiple Linux platforms including Ubuntu Core, Ubuntu Desktop and many other distros.

Targeting multiple platforms is important to the developers of snaps, and confinement is important to users, who can ensure that a snap has only limited access to their computer and to what they are doing with it.

Do you have, or know of, a set of applications that would benefit from this approach?

Get it from the Snap Store


References

Mircade on GitHub: https://github.com/MirServer/mircade
Egmde on GitHub: https://github.com/AlanGriffiths/egmde/
The Mir display server: https://mir-server.io/

03 July, 2020 01:10PM

David Tomaschik: Security 101: Beginning with Kali Linux

I’ve found that a lot of people who are new to security, particularly those with an interest in penetration testing or red teaming, install Kali Linux as one of their first forays into the “hacking” world. In general, there’s absolutely nothing wrong with that. Unfortunately, I also see many who end up stuck on this journey: either stuck in the setup/installation phase, or just not knowing what to do once they get into Kali.

This isn’t going to be a tutorial about how to use the tools within Kali (though I hope to get to some of them eventually), but it will be a tour of the operating system’s basic options and functionality, and hopefully will help those new to the distribution get more oriented.

What is Kali Linux, and Why Do I Need It?

Kali Linux is a Linux distribution that is derived from Debian Linux, but is specialized for security testing work in several ways, the most obvious of which is the pre-installation of a variety of security testing software. As described by the Kali team themselves:

Kali Linux is a Debian-based Linux distribution aimed at advanced Penetration Testing and Security Auditing. Kali contains several hundred tools which are geared towards various information security tasks, such as Penetration Testing, Security research, Computer Forensics and Reverse Engineering. Kali Linux is developed, funded and maintained by Offensive Security, a leading information security training company.

Kali includes over 600 penetration testing tools in its repositories, but some of the more commonly used ones include:

  • Metasploit
  • Burp Suite
  • Aircrack
  • nmap
  • Wireshark
  • John the Ripper (JtR)
  • sqlmap

Menu of Tools

So, why do you need Kali Linux for these tools? Well, the short answer is that you don’t. The longer answer is that Kali provides these tools and a wealth of others already configured and (more or less) ready to go out of the box, so it’s a very nice situation to be in, but not an absolute must. Kali isn’t the only Linux distribution to do so, either. There are several others, including the very smooth and visually appealing Parrot Security, the Arch Linux-based BlackArch, and the Gentoo-based Pentoo. I can only recommend BlackArch or Pentoo if you’re already familiar with their base distributions – neither one is particularly friendly to “noobs”, in my experience.

You can, alternatively, manually install the tools yourself, or use tools that will manage your tools for you, such as the PenTesters Framework (PTF) or katoolin. Some people have switched to using Docker for a lot of their tools (I’ll have a post about that coming soon as well) or other methods of managing their toolset.

Install or Live? Virtual Machine or Bare Metal?

There are two general approaches to running Kali Linux: either as a “Live” distribution (formerly known as a “LiveCD”, but I don’t suppose CDs see much use anymore) or installed. When running in a “Live” configuration, the system completely resets itself on each boot. (There is an option for “persistence” on a flash drive, but that’s a bit of an edge case.) In the Live case, nothing about your previous session carries over from one boot to the next – your history, your settings, any additional applications you install – nothing. An install, on the other hand, is like any other operating system – all your changes will persist on disk and carry over from session to session.

Live Environment

In general, I suggest the Live approach when you’re first getting started because you make no permanent changes and can get used to things. If you don’t actively choose to do so, it won’t modify the drive of your computer, and you can always go back to your normal OS anytime you want. It’s a great way to find out whether Kali is useful for you and how you like it. You can also take some of the other distributions I listed above for a spin as well. Obviously, the Live approach may be useful if you’re regularly using a bunch of different computers, but I’m not sure how prevalent that is.

Installed

If you want to use it for actual work, I like a full installation rather than the Live media approach. This is because I will significantly customize the installation to my particular needs. Among other things, I will set up configuration files for various programs (sometimes called “dotfiles”), install additional applications, and other customization. (For example, I use zsh instead of bash as my shell.)

Both the Live and Installed approaches can be run both in a piece of virtualization software or on “bare metal” (directly on your computer). In the case of virtualization, Kali runs within your base operating system, giving you access to both the Kali tools and your host operating system’s tools and files. In the case of bare metal, you either boot from a Live USB or install to a partition on your hard drive/SSD to run the Kali installation.

Virtual Machine

In a virtual machine, you install a piece of Virtualization Software, such as Virtualbox, KVM or VMWare, then run Kali Linux within that software. Your Kali installation will essentially seem like a window within your regular operating system (whether Windows, Linux, or MacOS), called your “host” operating system. This allows you to have access to all your host OS applications and data, as well as access to Kali. You can boot the live image, or create a virtual disk for an installation. Additionally, most VM software supports the idea of “snapshots”, where you can roll back changes to your installation to a checkpoint. This allows you to undo changes that may break your system or be otherwise undesirable. A virtual machine also leaves your host operating system unaffected, so you’re unlikely to break your entire OS installation.

There are some downsides to virtual machines, however. Most notably, you need to split the hardware resources of your computer between your host operating system and the “guest” (Kali). CPU isn’t usually a problem if you’re not doing anything too intensive, but RAM can be. I would expect to want at least 4GB of RAM for each of the host and the guest, so I would not try virtualization without at least 8GB of RAM. You’ll also need to be more aware of your networking setup, since incoming packets will go through both your host OS and your guest, so you may need to set up some kind of bridging or port forwarding to accept incoming connections. Additionally, doing wireless or hardware attacks can be harder because you’ll need to forward the raw device into your virtual machine.

Bare Metal

For a bare metal setup, it’s easy enough for a Live environment. Just plug in your prepared flash drive and tell your computer’s UEFI/BIOS to boot from it. For an installation, it’s a bit more complex. If you’re not going to dedicate a computer to only Kali, you’ll probably want to “dual boot”, which will require either resizing your existing installation or reinstalling. You’ll also need to install a bootloader that allows you to select between your installations. If you go for a bare metal installation, be absolutely certain to backup anything you care about first! I’ve known more than one person to make a mistake in repartitioning a disk and destroying data – including myself!

Basic Kali Usage

First and foremost, Kali Linux is a Linux distribution. If this is your first foray into Linux, you will have somewhat of a learning curve. Sorry, it’s to be expected when moving into something new. If you’ve been a Windows user up until now, you will need to learn a few new things. Linux distributions are a collection of the kernel (Linux) and a set of applications. Most of them are based on the GNU Collection of utilities, the POSIX specification, and some Linux Foundation standards.

For example, rather than having “drive letters”, all of the files on the system are in a hierarchy delimited by /. So you’ll see paths like /etc/passwd, which is a file called passwd in a directory called etc in the root of the filesystem.

Since Kali Linux is based on Debian Linux, it shares the software package management tools used by Debian. This means tools like apt and apt-get for installing software pre-packaged for Kali. To search for software, you can use apt search <keywords> and then, to install a package, you can use apt install <package names>. If you want a GUI for package management, you can open a console and run apt install synaptic for the Synaptic package management GUI (which is a front end to apt itself, so it uses the same underlying data).

Synaptic Package Manager

Some distributions are different, so if you have experience with, say, Fedora, not everything will translate directly – for example, the use of apt instead of yum or dnf for package management. Static network configuration is also quite different, although both distributions support systemd-based configurations.

Hacking Tools

Obviously, if you’re using Kali, you want to get into the hacking tools included. It’s a good idea to give them a try in a controlled space, like a home lab or the levels on PentesterLab or the boxes on Hack the Box.

Rather than trying to learn all of the tools at once, I suggest checking out a single area at a time. If you’re interested in web security, learn Burp Suite or mitmproxy and some of the other web tools like sqlmap. If network pentesting is more your area, check out Metasploit. For network forensics, learning Wireshark can be useful.

Customizing Kali

Additional Software

There’s some software that doesn’t come in the default Kali image that you might want. It ships with Firefox as the default browser, but you might prefer Chrome, Chromium, or the Brave Browser. On older versions of Kali, simple command line tools like git and tmux were not present, but they now are. vim is available by default, so I’m set, as is nano, but if you’re more of a Sublime Text, Atom, VSCode, or emacs user, you probably want to install those. Some people like additional utilities like the terminator terminal emulator.

If you have a license for any commercial security tools, now’s the time to install that. I have a Burp Professional license, so I install that first thing, but Burp Suite Community Edition is included if you’re just getting started.

As an aside, if you’re doing web application assessments, penetration testing, or bug bounty hunting professionally, I can’t recommend paying for a Burp Pro license enough. It’s reasonably priced and the ability to save/restore projects is alone worth the license, not to mention all the scanners and other features that come with Professional.

There are also many metapackages that install additional security tooling in bulk. These all begin with kali-tools-, such as kali-tools-sdr for Software Defined Radio tools, kali-tools-forensics for Forensics tooling, or kali-tools-exploitation for exploitation. If you want to see all of the tools included in one of these metapackages, you can use apt-cache depends to get a list of the direct dependencies:

apt-cache depends kali-tools-sdr
kali-tools-sdr
  Depends: chirp
  Depends: gnuradio
  Depends: gqrx-sdr
  Depends: gr-air-modes
  Depends: gr-iqbal
  Depends: gr-osmosdr
  Depends: hackrf
  Depends: inspectrum
  Depends: kalibrate-rtl
  Depends: multimon-ng
  Depends: rtlsdr-scanner
  Depends: uhd-host
  Depends: uhd-images

Personal Setup

Make it your own! There’s some things that you can do to get it set up for you and make your experience more comfortable.

  1. Select and customize your shell. Kali ships with bash by default, but zsh is also popular and mostly a drop-in replacement for bash. Some people like fish. Many like to customize their prompt to display more information.
  2. Set a nice wallpaper. Yes, this is trivial, but it makes me feel better about the OS being “mine”. I’m a big fan of the work being done by David Hughes in his 100 Days of Hacker Art.
  3. Set up tools that require personalization, such as setting your name/email for git, adding any custom metasploit resource files, configuring your editor of choice, etc.

Using Kali Professionally

There are some who claim that Kali is not (or should not be) used by professionals in the industry. I think this is a bit of a perception that “real hackers” have to do things the “hard way.” In reality, there are plenty of professional penetration testers who use Kali Linux on a regular basis. There are, however, some things you can do to make it more useful in this context.

I start by creating a virtual machine installation of Kali and customizing it to my needs by placing my dotfiles on the system, installing additional software (Chrome, my Burp Suite Pro license, VPN configurations, etc.), then taking a virtual machine snapshot. Then, for each engagement I’m working on, I clone that snapshot and run a VM dedicated to that engagement. This gives me a clean start for each engagement and prevents data related to one project from creeping into another project’s VM. Each time I update the base image, I take a new snapshot – this way, if a software update introduces a bug or breaks a feature, I can trivially go back to an older (known working) version so I’m not interrupted in the middle of a project.

Remember that, as a penetration tester, you will have access to lots of sensitive data about your clients/employer & their engagements. You should enable full-disk encryption with a strong passphrase to protect their data, credentials, access, etc. Ideally, this would be on your host OS (or Kali itself, if a bare metal installation). If you can’t do that for some reason, at least enable FDE on the Kali install in your VM. Change the default password, and if you must enable SSH, then make sure you set it to allow only SSH keys. Protecting your client’s/employer’s data is a significant responsibility for penetration testers.

Kali Warnings

Anonymity

Kali Linux is an operating system designed for professional penetration testers, not for accessing the internet anonymously. By default, it does not do anything to hide the source of your traffic, such as routing through a VPN, Proxy, or Tor. If your primary concern is anonymity, such as in countries with filtered internet, operating as a whistleblower, journalists in a society with a less-than-free press, etc., you should look for other tools to meet those needs. Tools designed for those use cases will be more effective at protecting you.

If you are performing an authorized penetration test or bug bounty, you may want to use a VPN to reroute your traffic to simulate a particular adversary, but anonymity is not your primary concern, since it is an authorized test. If you’re doing other shady things, you really need to learn about proper OPSEC and don’t just pick up Kali Linux and expect it to solve your problems.

Running as Root

It used to be the default to run Kali as root, and if you use an older version, it will still be configured that way. They’ve changed that recently, but some users may still want the “old” way. There are a few downsides to running as root, and the most notable reason to avoid it is to protect you from yourself. It’s much easier to make a mistake like rm -rf / tmp/x (notice the space) and blow things up when running as root. Additionally, some types of sandboxes will not run properly as root (because it’s too easy to escape them), so things like Chrome will not work properly as root.

Other Resources

  [1] KALI LINUX™ is a trademark of Offensive Security.

03 July, 2020 07:00AM

July 02, 2020

hackergotchi for Purism PureOS

Purism PureOS

Librem 14 Thoughts From a Librem 13 Early Adopter

I’ve been involved with Purism in one way or another since almost the beginning. Originally I found out about Purism and Todd back in 2014 before the end of the original Librem 15 crowdfunding campaign when I reviewed the Librem 15 prototype for Linux Journal Magazine. While the Librem 15 was far too big for my tastes, I was really impressed with Todd and his mission and started helping out a bit behind the scenes with advice (and later on, with early PureOS install tools). When the original Librem 13 campaign was announced, I immediately asked to review it for Linux Journal as it was right up my alley in terms of form factor. My Linux Journal review summed up my feelings pretty well:

I want one. Maybe I’ve just spent too long on older hardware but it’s nice to be able to use a laptop with modern specs without having to compromise on my Open Source and privacy ideals. The Librem 15 was definitely too big for me but while the Librem 13 is bigger than most of my personal laptops, it’s about the same size as a modern Thinkpad X series (but thinner and lighter). I’m more than willing to add an inch or so to the width in exchange for such a nice, large, high-res screen. Even though my X200 is technically smaller, it’s definitely heavier and just feels clunkier.

I ended up backing the crowdfunding campaign. The Librem 13v1 I got in 2015 was actually also one of the first prototypes for our anti-interdiction service, with hand-written custom text over stickers covering the plastic around the laptop and with pictures of that and the motherboard sent to me out of band. One interesting thing about the Librem 13v1 was what an improvement it already was over the Librem 15v1 I had reviewed only about six months before. It had the darker anodized finish that we now associate with Librem laptops and in my opinion had even better build quality than the Librem 15v1. It was also different from more recent Librem 13 revisions: it had hardware kill switches on the display hinge instead of the side, and it had a pop-down RJ45 jack with a Gigabit network card.

Early Librem 13 kill switch prototype

Five years later, that Librem 13v1 is still serving as my personal laptop and is still running strong although I did invest in a RAM upgrade a year or so back to better handle recent RAM-hungry QubesOS upgrades.

Librem 13 Generations

Now that I work at Purism, I’ve used just about every generation of Librem 13 laptop either as a lab device or as my own work laptop. Each Librem 13 generation added improvements and refinements such as upgrading the CPU, moving the hardware kill switch to the side of the laptop, integrating a TPM chip by default for PureBoot and replacing the RJ45 jack with a USB-C port.

Of course most of the changes to the Librem 13 were incremental. The overall appearance of the laptop has been the same throughout the generations like you might expect–why reinvent the wheel with each revision? Yet sometimes it does pay to revisit a design and start fresh. Planning for the Librem 14 allowed us the opportunity to start from scratch and design a “dream laptop” based on our own wishlist combined with the wishlists you have given us over the years. This dream laptop is precisely what we built with the Librem 14.

Introducing the Librem 14

There are many things about the Librem 14 that remind me of the first generation of the Librem 13. By popular demand we have brought back the gigabit Ethernet card with an integrated RJ45 jack. Even though I use wireless networking as well, whenever I need to backup my laptop I always plug it directly into my local gigabit network. And as someone who recently got gigabit Internet access, I have even more reasons to connect to a physical cable.

We added the RJ45 port while retaining the existing HDMI, USB-A and USB-C ports. The USB-C port now supports video out as well as power delivery, so I can either charge the laptop with the same standard barrel connector I use for the rest of my Librem laptops, or use a USB-C charger.


Kill Switches Are Back On Top

After we moved the hardware kill switches to the side of the laptop, we heard from a number of you that you preferred the kill switches on the laptop hinge. For some this was because it was easier to see the state of the kill switches without having to bend your head over to the side of the laptop. Others commented that sometimes they’d accidentally flip a kill switch when inserting the laptop into a backpack or sleeve.

Regardless of the reason, we hear you and we’ve moved kill switches for the Librem 14 back on top so you can easily see the state of your webcam/microphone and WiFi devices at a glance and know that they will retain their state when you put the laptop away.

14″ Screen in a 13″ Footprint

Laptop footprint is very important to me. I’ve owned ultraportable laptops like the Toshiba Libretto 50CT and the Fujitsu P2110, and a 13″ laptop is right at the upper end of what I personally consider “portable” and a “laptop” (although of course tastes and lap sizes vary). As we worked on the design for the successor to the current generation of the Librem 13, one of the things that came up was screen size. I personally would not have been in favor of increasing the Librem 13 footprint to accommodate a 14″ screen, but current advances in laptop design meant we were able to squeeze a larger screen in the same footprint by reducing the size of the bezel. A win for everyone.

Seriously? Six Cores in a Laptop?

I admit that personally, the new i7-10710U CPU is what I’m most excited about with the Librem 14. While some desktop use cases may not necessarily take advantage of parallelization, I use QubesOS (a high security OS that makes heavy use of virtual machines to isolate applications from each other) as my primary OS both personally and professionally. While Qubes still runs fine on my five-year-old personal Librem 13v1, and also runs well on the Librem 13v4 I use for work, using Qubes means you might end up running four to six (or more) web browsers at the same time, each isolated into their own virtual machine. Modern, bloated web applications spread across multiple browsers with virtualization overhead can take a toll as they share time on a 2-core CPU so I’m looking forward to seeing how Qubes performs when each browser can have a core of its own.

Stay Tuned

We are all very excited about the Librem 14 and have so much more we want to share with you about it. Over the coming weeks we will be publishing more information about specific features in the laptop (along with some surprise features we haven’t announced yet!) so watch our site for more information. Do take advantage of the early-bird pricing for the Librem 14 while it lasts and pre-order now!

The post Librem 14 Thoughts From a Librem 13 Early Adopter appeared first on Purism.

02 July, 2020 07:54PM by Kyle Rankin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: A blast from the past – Shutter

The wheel of software turns, and apps come and go. But the end of development does not always mean the end of usefulness. Sometimes, programs stubbornly remain around, offering a complete experience that can withstand the test of time.

Several weeks ago, we talked about how you can preserve old applications with snaps. Today, we would like to expand on this concept and talk about Shutter, a feature-rich screenshot application that was rather popular several years ago. Its development has stalled in recent years, and it has become more difficult to install and run it on newer versions of various Linux distributions. But Shutter has gained a new life as a snap.

Old but not obsolete

If you were one of the long-time users of Shutter, you can still enjoy most of its tools and features. If you’ve not used it before, then a short tour is in order. Install the snap and launch the application.

Shutter allows you to take screenshots of selections, your entire desktop – including different workspaces, and individual application windows. Screenshots are displayed in tabs, so you can conveniently scroll through your session gallery. Sessions are also preserved across application restarts.

You can also edit images. Shutter comes with an integrated drawing tool, offering a limited if still quite powerful image editing set. This means you do not necessarily have to export your screenshots to a program like GIMP, you can make some basic changes and decorations inside Shutter. You can add layers, type in text, draw various shapes including arrows, or fill parts of the image with different colors.

On top of that, the detailed preferences section allows you to tweak your workflow, including image export to online sharing services. This makes Shutter a handy tool for collaboration, in addition to local work with screenshots. Furthermore, you can configure multiple profiles, allowing you to use different settings for specific usage scenarios.

Not everything is perfect …

However, certain features do not currently work, including some outstanding problems that were present in Shutter even in the earlier days of its popularity and have never been resolved or patched. Shutter can take screenshots of specific application menus, balloon tips and other overlay elements on the desktop, but this may not necessarily work as intended. Plugins – a series of automated image modifications like distort, sepia, polaroid and others – may also be outdated and not work as expected.

This is where you step in!

With Shutter’s functionality enshrined and secure as a snap, users can have a reliable experience on modern Linux distributions, even if the application is no longer available in the standard repository archives. It is packaged using strict confinement, which ensures isolation from the underlying system, and it will run without any library dependency conflict. This means we can focus on trying to improve the functionality where possible – with Shutter as well as other software.

Snap City needs its heroes …


We would like to ask you for your help and ideas. If you have old applications that merit preservation and still have relevant use cases in the modern age, please start a thread on the snapcraft forum so we can discuss the best way to include it in the Snap Store.

Moreover, if you think you can assist in helping resolve outstanding bugs in old software currently in the Store catalog, like say Shutter, we would also like to hear from you. Indeed, in the coming weeks, we will have an article that outlines all the different ways you can help and contribute to the development of the snap ecosystem.

Superhero photo by Ali Kokab on Unsplash, main page photo by Jakob Owens on Unsplash.

02 July, 2020 03:46PM

Ubuntu Podcast from the UK LoCo: S13E15 – Vertical chopsticks

This week we’ve been helping HMRC and throwing a 10th birthday party. We discuss “Rolling Rhino”, split personality snaps, UBPorts supporting Project Treble devices, ZFS on Ubuntu 20.04 plus our round-up from the tech news.

It’s Season 13 Episode 15 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

02 July, 2020 02:00PM

Ubuntu Blog: Building Kubeflow pipelines: Data science workflows on Kubernetes – Part 2

This blog series is part of the joint collaboration between Canonical and Manceps.
Visit our AI consulting and delivery services page to learn more.

Introduction

Kubeflow Pipelines are a great way to build portable, scalable machine learning workflows. It is a part of the Kubeflow project that aims to reduce the complexity and time involved with training and deploying machine learning models at scale. For more on Kubeflow, read our Kubernetes for data science: meet Kubeflow post.

In this blog series, we demystify Kubeflow pipelines and showcase this method to produce reusable and reproducible data science. 🚀

In Part 1, we covered WHY Kubeflow brings the right standardization to data science workflows. Now, let’s see HOW you can accomplish that with Kubeflow Pipelines.

In Part 2 of this blog series, we’ll work on building your first Kubeflow Pipeline as you gain an understanding of how it’s used to deploy reusable and reproducible ML pipelines. 🚀

Now, it is time to get our hands dirty! 👨🏻‍🔬

Building your first Kubeflow pipeline

In this experiment, we will make use of the fashion MNIST dataset and the Basic classification with Tensorflow example and turn it into a Kubeflow pipeline, so you can repeat the same process with any notebook or script you have already worked on.

You can follow the process of migration into the pipeline on this Jupyter notebook.

Ready? 🚀

Step 1: Deploy Kubeflow and access the dashboard

If you haven’t had the opportunity to launch Kubeflow, that is ok! You can deploy Kubeflow easily using Microk8s by following the tutorial – Deploy Kubeflow on Ubuntu, Windows and MacOS.

We recommend deploying Kubeflow on your workstation if you have a machine with 16GB of RAM or more. Otherwise, spin up a virtual machine with these resources (e.g. t2.xlarge EC2 instance) and follow the same deployment process.

You can find alternative deployment options here.

Step 2: Launch notebook server

Once you have access to the Kubeflow dashboard, setting up a Jupyter notebook server is fairly straightforward. You can follow the steps here.

Launch the server, wait a few seconds, and connect to it.

Step 3: Git clone example notebook

Once in the Notebook server, launch a new terminal from the menu on the right (New > Terminal).


In the terminal, download the notebook from GitHub:

$ git clone https://github.com/manceps/manceps-canonical.git

Now, open the “KF_Fashion_MNIST” notebook:


Jupyter notebook for this experiment – download here.

Step 4: Initiate Kubeflow pipelines SDK

Now that we’re on the same page, we can kickstart our project together in the browser. As you see, the first section is adapted from the Basic classification with Tensorflow example. Let’s skip that and get on with converting this model into a running pipeline.

To ensure access to the packages needed through your Jupyter notebook instance, begin by installing Kubeflow Pipelines SDK (kfp) in the current userspace:

!pip install -q kfp --upgrade --user

Step 5: Convert Python scripts to docker containers

The Kubeflow Python SDK allows you to build lightweight components by defining Python functions and converting them using func_to_container_op.

To package our Python code inside containers, you define a standard Python function that contains a logical step in your pipeline. In this case, we have defined two functions: train and predict.

The train component will train, evaluate, and save our model.
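
As a rough sketch (the exact code lives in the example notebook; data_path and model_file are assumed argument names for the shared volume path and the saved model file), the train component could look something like this:

def train(data_path, model_file):
    # Imports live inside the function so the generated container
    # component stays self-contained.
    import tensorflow as tf

    # Load the fashion MNIST dataset and normalise the pixel values.
    fashion_mnist = tf.keras.datasets.fashion_mnist
    (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
    train_images, test_images = train_images / 255.0, test_images / 255.0

    # Build and train the simple classifier from the TensorFlow example.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.fit(train_images, train_labels, epochs=10)

    # Evaluate, then save the model to the shared persistent volume.
    model.evaluate(test_images, test_labels, verbose=2)
    model.save(f'{data_path}/{model_file}')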

 The predict component takes the model and makes a prediction on an image from the test dataset.

# Grab an image from the test dataset
img = test_images[image_number]
# Add the image to a batch where it is the only member
img = (np.expand_dims(img, 0))
# Predict the label of the image
predictions = probability_model.predict(img)

The code used in these components is in the second part of the Basic classification with Tensorflow example, in the “Build the model” section.

The final step in this section is to transform these functions into container components. You can do this with the func_to_container_op method as follows.

train_op = comp.func_to_container_op(train, base_image='tensorflow/tensorflow:latest-gpu-py3')
predict_op = comp.func_to_container_op(predict, base_image='tensorflow/tensorflow:latest-gpu-py3')

Step 6: Define Kubeflow pipeline

Kubeflow uses Kubernetes resources which are defined using YAML templates. Kubeflow Pipelines SDK allows you to define how your code is run, without having to manually manipulate YAML files. 

At compile time, Kubeflow creates a compressed YAML file that defines your pipeline. This file can later be reused or shared, making the pipeline both scalable and reproducible.

Start by initiating a Kubeflow client that contains client libraries for the Kubeflow Pipelines API, allowing you to further create experiments and runs within those experiments from the Jupyter notebook.

client = kfp.Client()

We then define the pipeline name and description, which can be visualized on the Kubeflow dashboard.

Next, define the pipeline by adding the arguments that will be fed into it.

In this case, define the path for where data will be written, the file where the model is to be stored, and an integer representing the index of an image in the test dataset:
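
A minimal sketch of that definition, with assumed argument names and default values, could look like this:

import kfp.dsl as dsl

@dsl.pipeline(
    name='Fashion MNIST pipeline',
    description='Trains a fashion MNIST classifier and predicts one test image.'
)
def mnist_pipeline(data_path: str = '/mnt',
                   model_file: str = 'mnist_model.h5',
                   image_number: int = 0):
    # The volume and component steps from the next sections go in here.
    pass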

Step 7: Create a persistent volume

One additional concept we need to introduce is persistent volumes. Without them, we would lose all the data if our notebook was terminated for any reason. kfp allows for the creation of persistent volumes using the VolumeOp object.

VolumeOp parameters include:

  • name – the name displayed for the volume creation operation in the UI
  • resource_name – name which can be referenced by other resources.
  • size – size of the volume claim
  • modes – access mode for the volume (See Kubernetes docs for more details on access mode). 
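
Putting those parameters together, the volume creation step inside the pipeline function might look roughly like the sketch below (the names and size are illustrative, not necessarily the values used in the notebook):

    # Create a persistent volume claim that the training and
    # prediction steps can share.
    vop = dsl.VolumeOp(
        name="create-volume",
        resource_name="data-volume",
        size="1Gi",
        modes=dsl.VOLUME_MODE_RWO
    )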

Step 8: Define pipeline components

It is now time to define your pipeline components and dependencies. We do this with ContainerOp, an object that defines a pipeline component from a container.

The train_op and predict_op components take arguments which were declared in the original python function. At the end of the function we attach our VolumeOp with a dictionary of paths and associated Persistent Volumes to be mounted to the container before execution.

Notice that while train_op is using the vop.volume value in the pvolumes dictionary, the <Container_Op>.pvolume argument used by the other components ensures that the volume from the previous ContainerOp is used, rather than creating a new one.

This inherently tells Kubeflow about our intended order of operations. Consequently, Kubeflow will only mount that volume once the previous <Container_Op> has completed execution.

The final print_prediction component is defined somewhat differently. Here we define a container to be used and add arguments to be executed at runtime.

This is done by directly using the ContainerOp object.

ContainerOp parameters include:

  • name – the name displayed for the component execution during runtime.
  • image – image tag for the Docker container to be used.
  • pvolumes – dictionary of paths and associated Persistent Volumes to be mounted to the container before execution.
  • arguments – command to be run by the container at runtime.
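
A rough sketch of how these pieces fit together inside the pipeline function follows; the container image, the result file name and the variable names are assumptions based on the description above:

    # Train, mounting the freshly created volume at data_path.
    mnist_training = train_op(data_path, model_file) \
        .add_pvolumes({data_path: vop.volume})

    # Predict, reusing the volume that the training step wrote to.
    mnist_prediction = predict_op(data_path, model_file, image_number) \
        .add_pvolumes({data_path: mnist_training.pvolume})

    # Print the prediction by running a plain container that reads
    # the result file from the shared volume.
    echo_result = dsl.ContainerOp(
        name='print_prediction',
        image='library/bash:4.4.23',
        pvolumes={data_path: mnist_prediction.pvolume},
        arguments=['cat', f'{data_path}/result.txt']
    )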

Step 9: Compile and run

Finally, this notebook compiles your pipeline code and runs it within an experiment. The name of the run and of the experiment (a group of runs) is specified in the notebook and then presented in the Kubeflow dashboard. You can now view your pipeline running in the Kubeflow Pipelines UI by clicking on the run link shown in the notebook.
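
In code, launching the run typically boils down to a call like the following sketch (the experiment and run names, as well as the argument values, are placeholders):

# create_run_from_pipeline_func compiles the pipeline under the hood
# and starts a run of it inside the named experiment.
arguments = {'data_path': '/mnt',
             'model_file': 'mnist_model.h5',
             'image_number': 0}

run = client.create_run_from_pipeline_func(mnist_pipeline,
                                           experiment_name='fashion_mnist_kubeflow',
                                           run_name='fashion_mnist_run',
                                           arguments=arguments)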

Results

Now that the pipeline has been created and set to run, it is time to check out the results. Navigate to the Kubeflow Pipelines dashboard by clicking on the run link in the notebook or by going to Pipelines → Experiments → fasion_mnist_kubeflow. The components defined in the pipeline will be displayed, and as they complete, the path of the data pipeline will be updated.

To see the details for a component, we can click directly on the component and dig into a few tabs. Click on the logs tab to see the logs generated while running the component.

Once the echo_result component finishes executing, you can check the result by observing the logs for that component. It will display the class of the image being predicted, the confidence of the model on its prediction, and the actual label for the image.

Final thoughts

Kubeflow and Kubeflow Pipelines promise to revolutionize the way data science and operations teams handle machine learning operations (MLOps) and pipelines workflows. However, this fast-evolving technology can be challenging to keep up with.

In this blog series we went through a conceptual overview, in part 1, and a hands-on demonstration in part 2. We hope this will get you started on your road to faster development, easier experimentation, and convenient sharing between data science and DevOps teams.

To keep on learning and experimenting with Kubeflow and Kubeflow Pipelines:

  1. Play with Kubeflow examples on GitHub
  2. Read our Kubernetes for data science: meet Kubeflow post.
  3. Visit ubuntu.com/kubeflow

Would you like us to deploy and maintain your Kubeflow deployments? Find out more on Canonical’s Kubeflow consulting page.

02 July, 2020 09:38AM

hackergotchi for Purism PureOS

Purism PureOS

Purism Launches Librem 14, Successor to Security-focused Librem 13 Product Line

SAN FRANCISCO, July 2, 2020 — Purism, a security-first hardware and software maker, has launched the Librem 14 laptop for pre-order, the successor to its popular Librem 13 laptop line. The Librem 14 was designed based on Purism’s experience with four generations of Librem 13 laptops along with customer feedback. It retains popular security features such as hardware kill switches to disable the webcam/microphone and WiFi and supports PureBoot, Purism’s high security boot firmware. The laptop comes preloaded with PureOS–Purism’s operating system endorsed by the Free Software Foundation.

The most distinctive feature of the Librem 14 is the new 14″ 1080p IPS matte display which, due to the smaller bezel, fits within the same footprint as the Librem 13. Other upgrades and improvements include:

  • Intel Core i7-10710U CPU with 6 cores, 12 threads
  • Gigabit ethernet card with built-in RJ45 connector is back by popular demand
  • Support for two external monitors via HDMI and USB-C
  • USB-C power delivery in addition to the standard barrel connector

Customers also have the option of leveraging Purism’s anti-interdiction services for added security in transit to verify hardware has not been tampered with during shipment.

“I am beyond excited to see the Librem laptop journey arrive at the build quality and specifications in the Librem 14. This fifth version of our line is the culmination of our dream device rolled into a powerful professional laptop. We have invested heavily so every customer will be proud to carry our laptops, and the Librem 14 will be the best one yet.” — Todd Weaver, CEO and founder of Purism.

The Librem 14 is available for pre-order now with an “early bird” base price of $1199 and will ship in early Q4 2020. For more details on pricing and hardware specifications for Librem 14 visit https://puri.sm/products/librem-14/.

The post Purism Launches Librem 14, Successor to Security-focused Librem 13 Product Line appeared first on Purism.

02 July, 2020 08:00AM by Purism

hackergotchi for Grml developers

Grml developers

Evgeni Golov: Automatically renaming the default git branch to "devel"

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody have time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted to actually delete the old master, as I think that's the safest way to let users know that it's gone. Letting it rot in the repository would mean people can still pull from it and won't notice that there are no changes anymore, as the default branch has moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3

import requests

BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'

headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)

session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

repos_to_change = []

url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

for repo in repos_to_change:
    master_data = session.get('{}/repos/evgeni/{}/git/ref/heads/master'.format(BASE, repo)).json()
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

02 July, 2020 07:12AM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Encryption at rest with Ceph

Do you have a big data center? Do you have terabytes of confidential data stored in that data center? Are you worried that your data might be exposed to malicious attacks? One of the most prominent security features of storage solutions is encryption at rest. This blog will explain this in more detail and how it is implemented in Charmed Ceph, Canonical’s software-defined storage solution.

What is data at rest?

Before we dive into encryption, we need to define what data at rest is. There are three states for digital data: data in use, data in transit and data at rest. Data in use refers to active data stored in non-persistent volumes, typically RAM or CPU caches. Data in transit is the state where data is transferred over a network, either private or public. Data at rest means inactive data that is stored physically on persistent storage, i.e. disks, databases, data warehouses, mobile devices, archives, etc. When at rest, data can be subject to malicious threats such as data theft or data corruption by obtaining physical access to the storage hardware. There are multiple security measures to protect data at rest, starting from password protection, federation and data encryption.

What is data encryption at rest?

Encryption at rest is the encoding of data when it is persisted. It is designed to prevent the attacker from accessing unencrypted data by ensuring all raw data is encrypted when stored on a persistent device. 

Encryption at rest addresses a multitude of potential threats. Starting from the lowest threat level like the theft of an HDD device, server loss, up to extremes such as the compromise of an entire rack of servers or the entire data center, businesses will have peace of mind as long as the stolen data was encrypted. The attacker could still get physical access to the storage, but without the encryption keys, it is significantly more complex and resource-consuming to read the encrypted data.

Nowadays, most businesses are interested in data security, especially after the introduction of GDPR. Some also need to comply with industry and government regulations such as HIPAA, PCI-DSS and FedRAMP. Encryption at rest is a prerequisite for some of those regulations and Canonical’s security certification program can help your business stay compliant.

How does encryption at rest work?

Encryption of data on block storage in a Linux environment is quite straightforward. The Ubuntu kernel supports the dm-crypt and LUKS utilities, for transparent disk encryption and on-disk encryption key management respectively. However, encryption at rest also requires a key management solution (KMS) to ensure the security of the encryption keys and proper role-based access control (RBAC) definitions. 

Ceph encryption at rest

Charmed Ceph supports encryption at rest out-of-the-box both as part of an OpenStack private cloud deployment and as a standalone storage solution. Charmed Ceph is based on a model-driven approach. All Ceph components are wrapped in charms, that is, code that drives lifecycle management automation.

Ceph and Vault communication model

For Ceph encryption at rest, the selected KMS is Hashicorp Vault. Vault is a widely used Encryption-as-a-Service solution that supports centralised key management and key rotation to ensure cryptographic best practices. When booting up, Vault needs to be unsealed in order for services to connect to it and read their encryption keys. Unsealing Vault requires a Master encryption key that needs a number of unseal keys to be unlocked. After initialising Vault, the data center operations team needs to provide a token retrieved from Vault to establish a connection between the Ceph charms and Vault.

Charmed Ceph uses Vaultlocker as an integration component between dm-crypt and Vault. Vaultlocker ensures the encryption keys are only ever held in memory locally and stored persistently in Vault, only to be read from Vault into memory during any subsequent operation, such as unlocking or encryption of a block device.

RBAC is implemented through the Vault charm. The charms use Vault AppRoles to handle communication between Vault and the Ceph cluster. Every storage server of the Ceph cluster has a specific AppRole (consisting of a role ID and secret) which can only be used from a specific IP address.

If all of the above sounds fairly complicated, it is mostly because Canonical ensures that the attack surface for Charmed Ceph is the smallest possible. Using Vault and Vaultlocker, Charmed Ceph has a solid approach to data encryption at rest to protect against all possible types of physical device loss in your data center.


Learn more about Charmed Ceph or contact us about your data center storage needs.

Read our Charm Deployment Guide sections on using Vault and encryption-at-rest.

02 July, 2020 07:00AM

July 01, 2020

hackergotchi for SparkyLinux

SparkyLinux

June 2020 donation report

Many thanks to all of you for supporting our open-source projects, especially in these difficult days. Your donations help keep them alive.

Unfortunately, we were not able to collect all the amount we needed for this month, but we thank all of you very much for your support.

Don’t forget to send a small tip in July too, please 🙂

Country           Supporter               Amount
Poland            Krzysztof S.            PLN 7
Poland            Wojciech H.             PLN 2
Poland            Andrzej T.              PLN 100
Poland            Krzysztof M.            PLN 50
USA               Ben J.                  € 4.5
Poland            Krzysztof S.            PLN 50
Poland            Robert C.               PLN 100
Poland            Karol N.                PLN 120
World             Gernot P.               $ 10
World             John C.                 $ 5
World             Tom C.                  $ 15
World             Carlos F.               $ 15
Germany           Wilm S.                 € 20
Germany           Bernhard N.             € 10
France            Sebastien M.            € 10
United Kingdom    Rudolf L.               € 10
World             Luis J.                 € 5
World             Karl A.                 € 1.66
Poland            Andrzej M.              PLN 5
Poland            Marek B.                PLN 10
Poland            Artur A.                PLN 35
Poland            Stanisław G.            PLN 20
Poland            Grzegorz S.             PLN 50
Germany           Alexander F.            € 10
Germany           Antony J.               € 10
Germany           Jorg S.                 € 5
Poland            Andrzej P.              PLN 10
Poland            Bartłomiej P.           PLN 15
Poland            Dariusz M.              € 10
World             Aymeric L.              € 10
Poland            Paweł W.                PLN 5
Poland            Jacek B.                PLN 15
Poland            Jacek G.                PLN 40
World             The Sailor              € 50
USA               Andrew W.               € 25
USA               Ryans Products LLC      € 25
World             Daniel H.               € 5
Germany           Ralf A.                 € 15
Germany           Manfred C.              € 31.18
Germany           Wolfgang L.             € 12
World             Volodymyr S.            € 4.5
Spain             Marcelo A.              € 19.37
Total:                                    € 293.21 / PLN 634 / $ 45

* Keep in mind that amounts sent via online payment services will be reduced by the payment providers’ commissions. Only donations sent directly to our bank account will be credited in full.

01 July, 2020 08:12PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Oliver Grawert: Rebuilding the node-red snap in a device focused way with additional node-red modules

While there is a node-red snap in the snap store (to be found at https://snapcraft.io/node-red with the source at https://github.com/dceejay/nodered.snap), it does not really allow you to do a lot on, for example, a Raspberry Pi if you want to read sensor data that does not actually come in via the network …

The snap is missing all essential interfaces that could be used for any sensor access (gpio, i2c, Bluetooth, spi or serial-port) and it does not even come with basics like hardware-observe, system-observe or mount-observe to get any systemic info from the device it runs on.

While the missing interfaces are indeed a problem, there is also the fact that strict snap packages need to be self-contained and hardly have any ability to dynamically compile software … Now, if you know nodejs and npm (or yarn or gyp) you know that additional node modules often need to compile back-end code and libraries when you add them to your nodejs install. Technically it is actually possible to make “npm install” work, but it is hard to predict what a user may want to install in her installation, so you would also have to ship all possible build systems (gcc, perl, python, you name it) plus all possible development libraries any of the added modules could ever require …

That way you might technically end up with a full OS inside the snap package. Not really a desirable thing to do (beyond the fact that, even with the high compression snap packages use, this would end up as a snap several gigabytes big).

So let’s take a look at what’s there already: in the upstream snapcraft.yaml we can find a line like the following:

npm install --prefix $SNAPCRAFT_PART_INSTALL/lib node-red node-red-node-ping node-red-node-random node-red-node-rbe node-red-node-serialport

This is actually great, so we can just append any modules we need to that line …

Now, as noted above, while many node-red modules will simply work this way, many of the ones that are interesting for accessing sensor data will need additional libs that we have to include in the snap as well …

In Snapcraft you can easily add a dependency via simply adding a new part to the snapcraft.yaml, so lets do this with an example:

Let’s add the node-red-node-pi-gpio module, and let’s also break up the above long line into two, using a variable that we can append more modules to:

DEFAULT_MODULES="npm node-red node-red-node-ping node-red-node-random node-red-node-rbe \
                 node-red-node-serialport node-red-node-pi-gpio"
npm install --prefix $SNAPCRAFT_PART_INSTALL/lib $DEFAULT_MODULES

So this should get us the GPIO support for the Pi into node-red …

But! Reading the module documentation shows that this module is actually a front-end to the RPi.GPIO python module, so we need the snap to ship this too … luckily snapcraft has an easy to use python plugin that can pip install anything you need. We will add a new part above the node-red part:

parts:
...
  sensor-libs:
    plugin: python
    python-version: python2
    python-packages:
      - RPi.GPIO
  node-red:
    ...
    after: [ sensor-libs ]

Now Snapcraft will pull in the python RPi.GPIO module before it builds node-red (see the “after:” statement I added) and node-red will find the required RPi.GPIO lib when compiling the node-red-node-pi-gpio node module. This will get us all the bits and pieces to have GPIO support inside the node-red application …

Snap packages run confined. This means they can not see anything of the system we do not explicitly allow via an interface connection. Remember that I said above the upstream snap is lacking some such interfaces? So let’s add them to the “apps:” section of our snap (the pi-gpio node module wants to access /dev/gpiomem as well as the gpio device node itself, so we make sure both these plugs are available to the app):

apps:
  node-red:
    command: bin/startNR
    daemon: simple
    restart-condition: on-failure
    plugs:
      ...
      - gpio
      - gpio-memory-control

And this is it: we have added GPIO support to the node-red snap source. If we re-build the snap, install it on an Ubuntu Core device and do a:

snap connect node-red:gpio-memory-control
snap connect node-red:gpio pi:bcm-gpio-4

We will be able to use node-red flows using this GPIO (for other GPIOs you need to connect to the pi:bcm-gpio-* of your choice; the mapping for Ubuntu Core follows https://pinout.xyz/).

I have been collecting a good bunch of possible modules in a forked snap that can be found at https://github.com/ogra1/nodered-snap, a binary of this is at https://snapcraft.io/node-red-rpi, and I plan a series of more node-red centric posts over the next days telling you how to wire things up, with example flows and some deeper insight into how to make your node-red snap talk to all the Raspberry Pi interfaces, from i2c to Bluetooth.

Stay tuned !

01 July, 2020 04:11PM

hackergotchi for VyOS

VyOS

Growing VyOS community and project side by side

 

There is a popular fiction that if you have a good project, a community automatically forms around it and gives it the publicity it deserves. In reality, it varies. It’s more than possible to have a lot of publicity without any project at all, as we see with a lot of startups that turn out to be vaporware. They get their publicity by talking about the supposed product everywhere.

The reality is that good projects also get their publicity… by talking about their project everywhere. The more you talk about it, the more visible you are. In CatB, Eric Raymond admits that he “grew his beta list by adding to it everyone who contacted him about fetchmail”—a practice that looks borderline spammy.

01 July, 2020 08:31AM by Daniil Baturin (daniil@sentrium.io)

June 30, 2020

hackergotchi for Cumulus Linux

Cumulus Linux

Modular networking in a volatile business environment

Organizational change, growth, and environmental diversity are all challenges for IT teams, and they’re going to be a part of everyday life for the foreseeable future. As the number of device models and network architectures increases, so, too, does management complexity. Coping with 2020’s ongoing gift of unpredictability requires technological agility, something Cumulus Networks, acquired by NVIDIA, can help you with.

It’s easy to worry about the consequences of our collective, rapidly changing economic circumstances as though the problems presented are somehow novel. They’re not.

2020 has increased uncertainty, leading to an increased velocity of change, but change is the only constant in life, and the need for agile networking has been obvious to many in the industry for some time. Even without problems like having to rapidly figure out how to cope with large chunks of the workforce working from home, change-responsive networking has been a challenge for organizations experiencing growth for decades, a problem many continue to struggle with today.

At a practical level, one of the biggest problems with rapid change is that it quickly leads to a dilemma: precisely meet the needs of the moment, resulting in a significant uptick in equipment diversity, or deploy a limited range of devices. Increased equipment diversity creates increased administrative overhead, while deploying a limited range of devices usually results in the provisioning of unnecessarily powerful (and consequently expensive) equipment.

NVIDIA-Cumulus Linux

As “austerity” sees renewed adoption in the corporate lexicon, the traditional practice of massive over-provisioning is less likely to be acceptable. It may be easier to manage all your switches if you only have three models out in the wild, but if the bean counters haven’t started asking questions about why 48-port switches are being deployed to locations with five devices, they soon will.

This leaves allowing equipment diversity to accelerate as the only realistic option for coping with change, and that means finding a way to manage it. The ability to deploy a single operating system to all your networking equipment is beneficial, as is having a capable central management platform, as well as strong support for industry standards and open source.

In other words, a complex and changing business environment really plays to NVIDIA-Cumulus Linux’s strengths!

NVIDIA-Cumulus Linux is an open network operating system that sits at the center of a diverse (and growing) hardware ecosystem. Whatever your needs, there exists a switch on which Cumulus Linux can be installed to meet them. All of these switches, big and small, can be centrally managed using Cumulus NetQ, greatly reducing the problems associated with equipment diversity.

A single operating system has more advantages than enabling more-capable centralized management, however: It provides the perfect environment for organizations to deploy scripting and other software directly onto their networking equipment. One operating system means a single application environment, which means that scripts written for one switch will work on another. This really starts to matter at scale.

Go big

In order to take advantage of this, however, organizations need to be able to deploy NVIDIA-Cumulus Linux as widely as possible throughout their infrastructure. That involves not only the ability to deploy NVIDIA-Cumulus to small access and branch switches, it also means getting some mighty capable iron into the data center.

Cumulus Networks announced at OCP Summit 2019 that Cumulus Linux is the first network operating system to fully support the Minipack next-generation modular switch platform. Minipack was developed by Edgecore and subsequently contributed to the Open Compute Project by Facebook.

Minipack is a modular switch, and conveys two notable benefits. First, it goes zoom: It can be equipped with 128x 100Gbit or 32x 400Gbit ports, offering an incredible 12.8Tbit of throughput.

Second, Minipack switch chassis can be deployed with only the exact number and type of modules required, allowing IT teams to keep both equipment diversity and over-provisioning to a minimum.

Put it all together

It’s easy to view both centralized management and modular switches without much excitement: Networking vendors have been shipping both for years. What makes Cumulus so different? The answer is: What you can do when you start putting it all together.

Open networking switches tend to be cheaper than their proprietary competitors. That was one of the goals of the project. Modular switches are more flexible by their very nature, and this also helps control costs. NVIDIA-Cumulus Linux brings all the advantages of a single operating system to the table, and NVIDIA-Cumulus NetQ centralized management makes it all easier to use.

NVIDIA-Cumulus Networks allows businesses to grow rapidly if needed, without having to pay for a lot of unused capacity. This keeps risks and costs low, but offers the agility to respond to market opportunities. To learn more about the benefits of open networking, download our free whitepaper here.

30 June, 2020 07:15PM by Katherine Gorham

hackergotchi for Maemo developers

Maemo developers

Developing on WebKitGTK with Qt Creator 4.12.2

After the latest migration of WebKitGTK test bots to use the new SDK based on Flatpak, the old development environment based on jhbuild became deprecated. It can still be used by setting export WEBKIT_JHBUILD=1, but support for this way of working will gradually fade out.

I used to work on a chroot because I love the advantages of having an isolated and self-contained environment, but an issue in the way bubblewrap manages mountpoints basically made it impossible to use the new SDK from a chroot. It was time for me to update my development environment to the new ages and have it working in my main Kubuntu 18.04 distro.

My main goal was to have a comfortable IDE that follows standard GUI conventions (that is, no emacs nor vim) and has code indexing features that (more or less) work with the WebKit codebase. Qt Creator was providing all that to me in the old chroot environment thanks to some configuration tricks by Alicia, so it should be good for the new one.

I preferred to use the Qt Creator 4.12.2 offline installer for Linux, so I can download exactly the same version in the future in case I need it, but other platforms and versions are also available.

The WebKit source code can be downloaded as always using git:

git clone git.webkit.org/WebKit.git

It’s useful to add WebKit/Tools/Scripts and WebKit/Tools/gtk to your PATH, as well as any other custom tools you may have. You can customize your $HOME/.bashrc for that, but I prefer to have an env.sh environment script to be sourced from the current shell when I want to enter into my development environment (by running webkit). If you’re going to use it too, remember to adjust the paths used there to your needs.
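For reference, a minimal sketch of such an env.sh is shown below; the checkout location and the webkit helper name are assumptions taken from the description above, so adapt them to your own layout:

# env.sh -- source this from the current shell to enter the WebKit environment
# (assumes the WebKit checkout lives in ~/work/webkit/WebKit; adjust as needed)
WEBKIT_DIR="$HOME/work/webkit/WebKit"
export PATH="$WEBKIT_DIR/Tools/Scripts:$WEBKIT_DIR/Tools/gtk:$PATH"

# convenience function for ~/.bashrc so that typing "webkit" sources the script
webkit() { source "$HOME/work/webkit/env.sh"; }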

Even if you have a pretty recent distro, it’s still interesting to have the latest Flatpak tools. Add Alex Larsson’s PPA to your apt sources:

sudo add-apt-repository ppa:alexlarsson/flatpak

In order to ensure that your distro has all the packages that webkit requires and to install the WebKit SDK, you have to run these commands (I omit the full path). Downloading the Flatpak modules will take a while, but at least you won’t need to build everything from scratch. You will need to do this again from time to time, every time the WebKit base dependencies change:

install-dependencies
update-webkitgtk-libs

Now just build WebKit and check that MiniBrowser works:

build-webkit --gtk
run-minibrowser --gtk

I have automated the previous steps as go full-rebuild and runtest.sh.

This build process should have generated a WebKit/WebKitBuild/GTK/Release/compile_commands.json file with the right parameters and paths used to build each compilation unit in the project. This file can be leveraged by Qt Creator to get the right include paths and build flags after some preprocessing to translate the paths that make sense from inside Flatpak to paths that make sense from the perspective of your main distro. I wrote compile_commands.sh to take care of those transformations. It can be run manually or automatically when calling go full-rebuild or go update.

The WebKit way of managing includes is a bit weird. Most of the cpp files include config.h and, only after that, they include the header file related to the cpp file. Those header files depend on defines declared transitively when including config.h, but that file isn’t directly included by the header file. This breaks the intuitive rule of “headers should include any other header they depend on” and, among other things, completely confuses code indexers. So, in order to give the Qt Creator code indexer a hand, the compile_commands.sh script pre-includes WebKit.config for every file and includes config.h from it.

With all the needed pieces in place, it’s time to import the project into Qt Creator. To do that, click File → Open File or Project, and then select the compile_commands.json file that compile_commands.sh should have generated in the WebKit main directory.

Now make sure that Qt Creator has the right plugins enabled in Help → About Plugins…. Specifically: GenericProjectManager, ClangCodeModel, ClassView, CppEditor, CppTools, ClangTools, TextEditor and LanguageClient (more on that later).

With this setup, after a brief initial indexing time, you will have support for features like Switch header/source (F4), Follow symbol under cursor (F2), shading of disabled if-endif blocks, auto variable type resolving and code outline. There are some oddities of compile_commands.json based projects, though. There are no compilation units in that file for header files, so indexing features for them only work sometimes. For instance, you can switch from a method implementation in the cpp file to its declaration in the header file, but not the opposite. Also, you won’t see all the source files under the Projects view, only the compilation units, which are often just a bunch of UnifiedSource-*.cpp files. That’s why I prefer to use the File System view.

Additional features like Open Type Hierarchy (Ctrl+Shift+T) and Find References to Symbol Under Cursor (Ctrl+Shift+U) are only available when a Language Client for Language Server Protocol is configured. Fortunately, the new WebKit SDK comes with the ccls C/C++/Objective-C language server included. To configure it, open Tools → Options… → Language Client and add a new item with the following properties:

  • Name: ccls
  • Language: *.c;*.cpp;*.h
  • Startup behaviour: Always On
  • Executable: /home/enrique/work/webkit/WebKit/Tools/Scripts/webkit-flatpak
  • Arguments: --gtk -c ccls --index=/home/enrique/work/webkit/WebKit

Some “LanguageClient ccls: Unexpectedly finished. Restarting in 5 seconds.” errors will appear in the General Messages panel after configuring the language client and every time you launch Qt Creator. It’s just ccls taking its time to index the whole source code. It’s “normal”, don’t worry about it. Things will get stable and start to work after some minutes.

Due to the way the Locator file indexer works in Qt Creator, it can become confused, run out of memory and die if it finds cycles in the project file tree. This is common when using Flatpak and running the MiniBrowser or the tests, since /proc and other large filesystems are accessible from inside WebKit/WebKitBuild. To avoid that, open Tools → Options… → Environment → Locator and set Refresh interval to 0 min.

I also prefer to call my own custom build and run scripts (go and runtest.sh) instead of letting Qt Creator build the project with the default builders and mess everything up. To do that, from the Projects mode (Ctrl+5), click on Build & Run → Desktop → Build and edit the build configuration to be like this:

  • Build directory: /home/enrique/work/webkit/WebKit
  • Add build step → Custom process step
    • Command: go (no absolute route because I have it in my PATH)
    • Arguments:
    • Working directory: /home/enrique/work/webkit/WebKit

Then, for Build & Run → Desktop → Run, use these options:

  • Deployment: No deploy steps
  • Run:
    • Run configuration: Custom Executable → Add
      • Executable: runtest.sh
      • Command line arguments:
      • Working directory:

With this configuration you can build the project with Ctrl+B and run it with Ctrl+R.

I think I’m not forgetting anything regarding environment setup. With the instructions in this post you can end up with a pretty complete IDE. Here’s a screenshot of it working in its full glory:

Anyway, to be honest, nothing will ever reach the level of code indexing features I got with Eclipse some years ago. I could find usages of a variable/attribute and know where it was being read, written or read-written. Unfortunately, that environment stopped working for me long ago, so Qt Creator has been the best I’ve managed to get for a while.

Properly configured web based indexers such as the Searchfox instance configured in Igalia can also be useful alternatives to a local setup, although they lack features such as type hierarchy.

I hope you’ve found this post useful in case you try to setup an environment similar to the one described here. Enjoy!


30 June, 2020 03:47PM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.19 testing in Groovy Gorilla

Are you running the development release of Kubuntu Groovy Gorilla 20.10, or wanting to try the daily live ISO?

Plasma 5.19 has now landed in 20.10 and is available for testing. You can read about the new features and improvements in Plasma 5.19 in the official KDE release announcement.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

The Kubuntu development release is not recommended for production systems. If you require a stable release, please see our LTS releases on our downloads page.

Getting the release:

If you are already running Kubuntu 20.10 development release, then you will receive (or have already received) Plasma 5.19 in new updates.

If you wish to test the live session via the daily ISO, or install the development release, the daily ISO can be obtained from this link.

Testing:

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or;
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.18 or whatever version you are familiar with?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.
* Specific tests:
– Check the changelog in the KDE announcement:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

Plasma 5.19 has 3 more scheduled bugfix releases in the coming months, so by testing you can help to improve the experience for Kubuntu users and the KDE community as a whole.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

Note: Plasma 5.19 has not currently been packaged for our backports PPA, as the release requires Qt >= 5.14, while Kubuntu 20.04 LTS has Qt 5.12 LTS. Our backports policy for KDE packages to LTS releases is to provide them where they are buildable with the native available stack on each release.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

30 June, 2020 03:07PM

hackergotchi for Tails

Tails

Tails 4.8 is out

This release fixes many security vulnerabilities. You should upgrade as soon as possible.

New features

  • We disabled the Unsafe Browser by default and clarified that the Unsafe Browser can be used to deanonymize you.

    An attacker could exploit a security vulnerability in another application in Tails to start an invisible Unsafe Browser and reveal your IP address, even if you are not using the Unsafe Browser.

    For example, an attacker could exploit a security vulnerability in Thunderbird by sending you a phishing email that could start an invisible Unsafe Browser and reveal your IP address to them.

    Such an attack is very unlikely but could be performed by a strong attacker, such as a government or a hacking firm.

    This is why we recommend that you:

    • Only enable the Unsafe Browser if you need to log in to a captive portal.
    • Always upgrade to the latest version of Tails to fix known vulnerabilities as soon as possible.
  • We added a new feature of the Persistent Storage to save the settings from the Welcome Screen.

    This feature is beta and only the additional setting to enable the Unsafe Browser is made persistent. The other settings (language, keyboard, and other additional settings) will be made persistent in Tails 4.9 (July 28).

    new Welcome Screen feature in the Persistent Storage settings

Changes and updates

  • Update Tor Browser to 9.5.1.

  • Update Thunderbird to 68.9.0.

  • Update Linux to 5.6.0. This should improve the support for newer hardware (graphics, Wi-Fi, etc.).

Fixed problems

  • Fix the Find in page feature of Thunderbird. (#17328)

  • Fix automatically shutting down the laptop when resuming from suspend with the Tails USB stick removed. (#16787)

  • Always notify when MAC address spoofing fails and the network interface is disabled. (#17779)

  • Fix the import of OpenPGP public keys in binary format (non-armored) from the Files browser.

For more details, read our changelog.

Known issues

  • Only use the following characters in the administration password:

    • a–z
    • A–Z
    • 1–9
    • _@%+=:,./-

    If you use spaces or other accented characters, like àéïøů, you will not be able to type your administration password again in your Tails session, unless you type single quotes ' around it.

    For example, if you set the administration password: née entrepôt über señor, you would have to type 'née entrepôt über señor'. (#17792)

  • The keyboard layout is not updated automatically when setting the language. For example, setting the language to French doesn't change the keyboard layout to French (AZERTY). (#17794)

  • Tails fails to start with the toram boot option. (#17800)

See the list of long-standing issues.

Get Tails 4.8

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from Tails 4.2 or later to 4.8.

  • If you cannot do an automatic upgrade or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.

To install Tails on a new USB stick

Follow our installation instructions:

All the data on this USB stick will be lost if you install instead of upgrading.

To download only

If you don't need installation or upgrade instructions, you can download Tails 4.8 directly:

What's coming up?

Tails 4.9 is scheduled for July 28.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

30 June, 2020 12:34PM

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2020/06

The 6th monthly report of 2020 of the Sparky project:

• Linux kernel updated up to version 5.7.6 & 5.8-rc3
• added to repos: Popcorn-Time, eDEX-UI, Visual Studio Code, VSCodium, Bitcoin-Qt, Litecoin-Qt
• Sparky 2020.06 of the rolling line released
• a point release of the stable line is on the way, stay tuned

 

30 June, 2020 12:06PM by pavroo

hackergotchi for Qubes

Qubes

Fedora 32 TemplateVMs available

New Fedora 32 TemplateVMs are now available for both Qubes 4.0 and 4.1.

Important: If you wish to use the Qubes Update widget to update a Fedora 32 template, you must first switch the default-mgmt-dvm qube to a Fedora 32 template. (Alternatively, you can create a separate management DisposableVM Template based on a Fedora 32 template for the purpose of updating Fedora 32 templates.) This does not affect updating internally using dnf.

Instructions are available for upgrading Fedora TemplateVMs. We also provide fresh Fedora 32 TemplateVM packages through the official Qubes repositories, which you can get with the following commands (in dom0).

Standard Fedora 32 TemplateVM:

$ sudo qubes-dom0-update qubes-template-fedora-32

Minimal Fedora 32 TemplateVM:

$ sudo qubes-dom0-update qubes-template-fedora-32-minimal

After installing or upgrading a TemplateVM, please remember to update (see important note above) and switch all qubes that were using the old template to use the new one.
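If you prefer the command line over the GUI for that last step, something along these lines should work from dom0 (a sketch; the qube name work is a placeholder for your own qubes):

# switch a single qube to the new template
qvm-shutdown --wait work
qvm-prefs work template fedora-32

# optionally make the new template the system-wide default
qubes-prefs default_template fedora-32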

30 June, 2020 12:00AM

June 29, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Podcast Ubuntu Portugal: Ep 96 – Ancoradouro

Another episode in which we talked about our home setups, 2FA authentication with Ubuntu Touch, Raspberry Pi, Ubuntu news on the security front, new snapcraft features for releases, Ubuntu Core and the Ubuntu appliances.

You know the drill: listen, subscribe and share!

  • https://libretrend.com/specs/librebox

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of that for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you want.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, and is licensed under the terms of the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing to allow other types of use; contact us for validation and authorization.

29 June, 2020 10:52PM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 637

Welcome to the Ubuntu Weekly Newsletter, Issue 637 for the week of June 21 – 27, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

29 June, 2020 10:11PM by wildmanne39

hackergotchi for Freedombone

Freedombone

Introducing Petnames

Recently I added deployment scripts to Epicyon for hosting on onion or i2p domains. These methods may provide greater independence, since you could easily self-host your own social media presence from your laptop and without a server, but using the long random domain names is very cumbersome. Is there a way to square the triangle of Zooko? It turns out that there is, using something called "petnames".

In Epicyon now, if you select the avatar image of a person then you can optionally assign a petname to them. If you subsequently want to send a direct message to them, then within the send-to field you can just enter @petname. With something like an i2p address this massively simplifies the problem of specifying the intended recipient and keeps the amount of typing or copy-pasting to a minimum. When the post is sent the petname is then looked up and the full ActivityPub handle is substituted. Each account can have its own preferred petnames.

So this should make p2p ActivityPub more of a practical proposition rather than just a proof of concept.

29 June, 2020 08:01PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Data centre automation for HPC

Friction points in HPC DevOps

Many High Performance Computing (HPC) setups are still handcrafted configurations where tuning changes can take days or weeks. This is because the more you tune and optimise something, the more bespoke and unique it is, and the more unique something is, the lower the chances that things will just work out of the box, and HPC is no exception.

A new school of HPC

Now physical servers are a lot easier to set up, provision and configure thanks to tools such as MAAS. For example, connecting servers and selecting which ones will be configured for networking and which for data, is as easy as clicking a button on a web UI. This may seem innocuous but it means that a server farm can be used for one project in the morning and for something completely different in the afternoon. 

In reality, the server configuration is only the start, the base from which everything bubbles up. Re-configuration at the server level allows for use of higher-level tools such as LXD VMs, Kubernetes and Juju to quickly put together an environment with reusable code without needing to be a DevOps expert or having to wait for an expert to do it for you.

What we are going to see in the next few years is a growth of HPC with cloud native tools. Or, in other words, bringing cloud software tools and good developer experience into the world of HPC to make the operations easier. 

What next?

A cloud-native experience in HPC is not a new idea [1, 2] but has been thrust into the limelight given the recent need for more scientific work being done in the fight against COVID-19. In these real-life applications what matters is no longer the ‘wall time’ the software takes from start to finish but rather the time the overall project takes to reach a practical conclusion, factoring in human time and operational processes. 

Modern cloud-native software can help with time to delivery. If you are interested and would like to explore this further let us know or watch this webinar from Scania’s Erik Lönroth in the upcoming Ubuntu Masters event.

29 June, 2020 07:15PM

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 May 2020 Software Development Update

This is another installment of the software development progress report for the Librem 5, this time for May 2020 (weeks 19-22). Some items are covered in more detail in separate blog posts at https://puri.sm/news. The idea of this summary is to give you a closer look at the coding and design side of things. It also shows how much we’re standing on the shoulders of giants, reusing existing software, and how contributions flow back and forth between upstream and downstream projects. This quickly gets interesting since we’re upstream for some projects (e.g. Calls, Phosh, Chatty) and downstream for others (e.g. Debian, kernel, GNOME). So these reports are usually rather link heavy, pointing to individual merge requests on https://source.puri.sm/ or to the upstream side (like e.g. GNOME’s gitlab).

Adaptive Apps

This section features improvements on adaptive apps, GTK, and underlying GTK based widget libraries like libhandy:

Short and instant messaging

Chats (aka Chatty) handles SMS and instant messaging via XMPP. It has experimental support for various other formats via libpurple. Cleanups and bug fixes continued during May:

  • Introduce a ChattyMessage class to handle different message types consistently: chatty!326
  • Cleanup ChattyConversations: chatty!332
  • Emit ‘avatar-changed’ if associated buddy avatar changes to handle avatar updates: chatty!333
  • Utils: Format time as per the current user settings: chatty!334
  • API to get and/set encryption and use it to simplify encryption handling: chatty!335
  • Window: Fix selection flicker when chat is updated: chatty!336
  • List-row: Limit message preview to a single line: chatty!338
  • Window: Set selected flag for row only if not folded: chatty!339
  • Chat: Strip client information from get_name(): chatty!340
  • Use ChattyAvatar in main window headerbar and user info dialog: chatty!341
  • pp-buddy: Avoid updating avatar often: chatty!342
  • Silence compiler warnings: chatty!343
  • Tests: Don’t set MALLOC_PERTURB_: chatty!346
  • Window: Show an error dialog if creating SMS with modem missing: chatty!347
  • pp-account: Use purple_core_get_ui() to get ui string: chatty!348
  • Fix various memory leaks: chatty!350 chatty!372
  • Manager: Make sure the user sees errors right away: chatty!353
  • Different UI fixes crammed into one merge request: chatty!354
  • New-chat-dialog: Reset search text when showing dialog: chatty!356
  • New chat dialogs: Handle pressing ‘Enter’: chatty!357
  • Don’t allow messages rows to get the focus. This eases keyboard navigation: chatty!361
  • Use GObject properties and signals more: chatty!374
  • Settings-dialog: Call idle users that (not offline): chatty!375
  • Use GAppliation more. This makes chatty more a regular GTK+ application: chatty!376

Lurch plugin

The lurch plugin is responsible for OMEMO encryption within libpurple:

  • Notify user when a message can’t be decrypted instead of silently dropping it: lurch!5
  • Unbreak the build and run tests during the build: lurch!6

Phone Calls

Calls (the app handling phone calls) now shows notifications on missed calls, emits haptic feedback, and saw a long list of translation updates (fa, sv, uk, it, ro, fr, pt_br, jp; thanks Danial Behzadi, Anders Jonsson, Yuri Chornoivan, Antonio Pandolfo, Daniel Șerbănescu, Valéry Febvre, Rafael Fontenelle and Scott Anecito), but there were other small improvements too:

  • Build calls against Debian bullseye to make it future proof: calls!112
  • add gbp.conf to make releasing less error-prone: calls!121
  • po: Update po file list and make sure CI fails if we forget to do so in the future: chatty!345
  • Skip i18n for plugins: calls!132
  • Stop busywork for translators: calls!133

Compositor and Shell

This section highlights progress in the Librem 5’s GTK based graphical shell, named Phosh, and its wlroots based compositor Phoc:

Phosh

  • Blank the display on idle: phosh!300. This finally glues the wlr-output-power-management protocol and GNOME Settings daemon’s power plugin together and can be seen in here.
  • Translations were updated for uk and zh_TW – thanks Yuri Chornoivan and Yi-Jyun Pan!
  • Phosh now triggers more haptic feedback e.g. on button release and when selecting an activity from the overview

Phoc

  • We fixed way too early unblank: phoc!151
  • Nícolas F. R. A. Prado fixed compilation with -Wswitch: phoc!148
  • Phoc now automatically enables new outputs to make them ‘plug and play’ again: phoc!152 (diffs)
  • We made test execution in CI more robust to not frustrate developers: phoc!149

On-Screen Keyboard

Gnome Control Center (Settings) / GNOME Settings daemon

Sadiq enhanced several panels upstream:

Feedbackd

Feedbackd is responsible for haptic, audio (and later) LED-based feedback:

  • Feedbackd now picks up the configured sound theme: feedbackd!18
  • Feedback is now ended/canceled when invoking lfb_uninit: feedbackd!19 This makes sure feedbacks are stopped when an app quits
  • Rasmus Thomsen fixed a compile race that could lead to build failures: feedbackd!15

Linux Kernel

The progress of upstreaming our Linux kernel work is covered in a separate report. The current one is for Linux 5.7, so this is mostly about downstream improvements:

Releases

These were the releases during May for projects where we’re upstream:

Lambda

If you made it down here and want to start contributing join us on matrix. We certainly welcome patches and issue comments on https://source.puri.sm/. If you want to grab an issue and can’t think of a particular problem, check the easy and help wanted tags in our GitLab instance. See you next month.

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we, the people, stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

 

The post Librem 5 May 2020 Software Development Update appeared first on Purism.

29 June, 2020 05:01PM by Guido Günther

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Installing ROS in LXD Containers

Courtesy Steve Jurvetson, reproduced here under Creative Commons License 2.0.

It’s the season for updates. The last few weeks have ushered in ROS 1 Noetic and ROS 2 Foxy, both of which target the recently released Ubuntu 20.04 Focal Fossa. As always, new releases come with trepidation: how can I install new software and test compatibility, yet keep my own environment stable until I know I’m ready to upgrade? This is one of the many good reasons to dive into containers.

In this blog post we’ll create a base LXD profile with the ROS software repositories and full graphical capabilities enabled. Launch containers to meet your robotics needs: everything from software development and system testing through robot operations can be covered within containers.

Check out our youtube video to see these instructions in action. Also see the full installation instructions for both ROS 1 and ROS 2 available at ros.org

Getting started

All you need to get started is a Linux workstation with LXD installed. The installation of LXD is not covered here as there are a number of great instructions online. See the getting started guide at linuxcontainers.org for more information.

We’ll cover hereafter the three basic steps to getting set up:

  • Creating a LXD container profile
  • Launching and connecting to the container
  • Installing ROS

Create a profile

All LXD containers have a defined profile. A default profile was created when LXD was first installed, but we will create a ROS-specific profile. This will support running ROS (either version 1 or 2) on an Ubuntu image.

The profile contains four specific configuration features:

  • ROS software repositories
  • Run X apps within the container
  • Networking
  • A disk storage device

Gather data

In order to set up the profile properly, we must first identify your workstation’s network adapter and the user/group ID for your account. Find your network adapter using the ip addr show command:

~:$ ip addr show
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: wlp2s0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether a8:00:0b:c0:88:e7 brd ff:ff:ff:ff:ff:ff
3: enx001a98a552d4:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1a:98:a5:52:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.100/24 brd 192.168.1.255 scope global noprefixroute enx001a98a552d4
       valid_lft forever preferred_lft forever

The above example shows three network adapters: loopback, wireless and ethernet (respectively). Select the adapter to use for the container; for this case we will use the wired adapter enx001a98a552d4

To find the id of your non-root account, use the id command:

~:$ id
uid=1001(sid) gid=1001(sid) groups=1001(sid),4(adm),24(cdrom),27(sudo), ...

In the example above my user id “sid” has a uid and gid of 1001.

Create the ROS Profile

Armed with these two key facts, we can create and edit the LXD profile for our ROS containers:

lxc profile create ros
lxc profile edit ros

This brings up a default profile template for editing within vi. Update the file as follows, then save and exit vi:


config:
  environment.DISPLAY: :0
  raw.idmap: both [your group id] 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - "apt-key adv --fetch-keys 'https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc'"
      - "apt-add-repository 'http://packages.ros.org/ros/ubuntu'"
      - "apt-add-repository 'http://packages.ros.org/ros2/ubuntu'"
description: ROS
devices:
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk
  eth0:
    name: eth0
    nictype: macvlan
    parent: [your network adapter]
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: ros

Let’s break down the configuration file and look at each part individually.

Configure the environment

The following line sets the DISPLAY environment variable required by X. The display within the container is always mapped to :0.

  environment.DISPLAY: :0

Our Linux containers will run under the security context of the current user, but all work within the Ubuntu containers will be done under the default ubuntu user account. The raw.idmap setting maps our workstation’s user and group ID (1001 in this example) into the container’s user and group ID (always 1000 for the default user):

  raw.idmap: both 1001 1000

Configure ROS repositories

The ROS software repositories can be added to the container every time a new container is launched. The simplest way to achieve this is to use cloud-init, and add a runcmd to the user-data section of the profile. Each of these commands will be executed whenever a new container is initialized using this profile.

The apt-key command pulls the ROS distribution signing key from github, while the two add-repository commands add the ROS 1 and ROS 2 software repositories.


  user.user-data: |
    #cloud-config
    runcmd:
      - "apt-key adv --fetch-keys 'https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc'"
      - "apt-add-repository 'http://packages.ros.org/ros/ubuntu'"
      - "apt-add-repository 'http://packages.ros.org/ros2/ubuntu'"

Configure devices

Adding the X0 device to the profile allows X data to flow between the container (“path”) and the host (“source”). If you have multiple graphics cards check the contents of /tmp/.X11-unix to make sure you’re mapping to the correct source; the source should mirror the $DISPLAY environment variable from your host.


devices:
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk

If you have a separate graphics card with a discrete GPU, you may also find it necessary to add the GPU as a device:

  mygpu:
    type: gpu
    name: gui

To learn more about running graphical apps in LXD containers, including some caveats when working with an NVidia GPU, take a look at this blog post by Simos Xenitellis.

Multiple options exist for networking. Although a container can use LXD’s built in address translation or a bridge device, our profile will use the macvlan network driver. Macvlan creates a simple bridge between the parent network adapter and the container so the container receives its own IP address on the host network. With this setup no additional configuration (e.g., port forwarding) is required for either ROS 1 or ROS 2.

The network interface (nic) device configured here uses macvlan to connect the parent network adapter enx001a98a552d4 to the container’s eth0 interface:

  eth0:
    name: eth0
    nictype: macvlan
    parent: enx001a98a552d4
    type: nic

Take a look at this post if you need more information about LXD networking with macvlan.

A disk device must be connected to the container. This disk device simply uses the LXD default storage pool:

  root:
    path: /
    pool: default
    type: disk

Launch a container

The lxc launch command is used to launch a container for the first time. Use the following command to create and run an Ubuntu 20.04 container named “rosfoxy”:

lxc launch -p ros ubuntu:20.04 rosfoxy
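Because the profile uses macvlan, the container should come up with its own address on the host network. A quick, optional check with standard LXD tooling:

# show the container's state and the IP address it obtained
lxc list rosfoxy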

Once the container is running, logging in is achieved by simply executing a shell within the container:

lxc exec rosfoxy -- bash

This connects to a shell on the container under the root userid; however, our configuration uses the ubuntu user account. In order to connect to a shell as the ubuntu user, execute the su command within the container:

lxc exec rosfoxy -- su --login ubuntu

This tends to be a bit cumbersome to type for every connection to the container. LXD aliases provide an easy way to simplify the command, and also make it more generally applicable. Shorten the command and make it more robust by creating an LXD alias using this command:

lxc alias add ubuntu 'exec @ARGS@ --user 1000 --group 1000 --env HOME=/home/ubuntu/ -- /bin/bash --login'

For more information about the options used with this alias, see this blog post from Simos Xenitellis.

Now connecting to an Ubuntu container with the proper context is as simple as the following command:

lxc ubuntu rosfoxy

Install ROS

Since the ROS repositories have been set up, installation is as simple as an apt-install command for the correct software bundle:

sudo apt install ros-foxy-desktop

Since this container will always run ROS, we can source the ROS environment upon every login by adding it to our .bashrc startup script.

echo "source /opt/ros/foxy/setup.bash" >> ~/.bashrc
source ~/.bashrc

Most ROS commands support context-sensitive tab completion. Although not required, installing python3-argcomplete will make typing commands much easier:

sudo apt install python3-argcomplete

Now that the installation is complete, try it out by running rqt or a similar graphical application.
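If you would rather do a quick command-line sanity check first, the ROS 2 demo nodes (normally included in the desktop variant) can be run from two shells in the container; this is just an illustrative test, not part of the official instructions:

# shell 1: publish messages on the /chatter topic
ros2 run demo_nodes_cpp talker

# shell 2: subscribe and print the messages
ros2 run demo_nodes_py listener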

Conclusion

Using this ROS profile, building environments to work with different releases becomes trivial:

lxc launch -p ros ubuntu:20.04 rosnoetic
lxc ubuntu rosnoetic
sudo apt install ros-noetic-desktop-full

LXD provides a number of handy commands for working with containers. For instance we can clone a container by simply using the lxc copy command:

lxc copy rosfoxy rosfoxy-2

When work with the container is complete, simply remove it:

lxc delete rosfoxy-2

Share files between the host and the container, or map USB devices (think robotics hardware) into the container with additional LXD configuration.
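As a rough sketch of what such configuration can look like (the paths, device names and USB IDs here are placeholders, not part of the original instructions):

# share a host directory into the container as a ROS workspace
lxc config device add rosfoxy ros-ws disk source=/home/sid/ros_ws path=/home/ubuntu/ros_ws

# pass a USB serial adapter through to the container by vendor/product ID
lxc config device add rosfoxy robot-usb usb vendorid=0403 productid=6001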

The possibilities for containers are only limited by your imagination. For some ideas on how to use LXD containers for ROS development, check out this blog post by Ted Kern. We’d appreciate any comments you have on your experiences using LXD containers with ROS!


29 June, 2020 03:00PM

Ubuntu Blog: What is Apache Kafka and will it transform your cloud?

(Want any cloud app managed? Reach out to Canonical now. Watch our webinar on Apache Kafka in production, and submit your plan for review by our app engineers.)

Everyone hates waiting in a queue. On the other hand, when you’re moving gigabytes of data around a cloud environment, message queues are your best friend. Enter Apache Kafka.

Apache Kafka enables organisations to create message queues for large volumes of data. That’s about it – it does one simple but critical element of cloud-native strategies, really well. Let’s look at the three significant benefits, challenges and use cases of Apache Kafka, and the easiest way to get it running in production.

Apache Kafka – what is it?

You need to know three things – topics, partitions and replication.

Apache Kafka connects apps that publish data to apps that want to subscribe to that data. It first stores data into a log called a topic. The topic keeps a sense of the order of data it receives, as the publisher appends data to the end of it. Subscribing apps read from the log, based on asking for an offset of the data.

To make sure publishing and subscribing can occur at the speed and scale the cloud environment needs, Apache Kafka partitions data. This means splitting a topic into several partitions, each holding a portion of the topic’s data.

Apache Kafka is a queue you will love. Photo by Adrien Delforge on Unsplash

Finally, partitions are replicated to ensure high availability and failure tolerance. Replication means multiple copies of partitions are made and the duplicates are stored in different locations, such as various data centres.
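To make those three concepts concrete, this is roughly what they look like with the stock Kafka command-line tools on a recent release (a sketch only; the topic name, counts and broker address are illustrative):

# create a topic with 3 partitions, each replicated to 2 brokers
kafka-topics.sh --create --topic clickstream --partitions 3 --replication-factor 2 --bootstrap-server localhost:9092

# a publisher appends records to the end of the topic
kafka-console-producer.sh --topic clickstream --bootstrap-server localhost:9092

# a subscriber reads from the topic, here starting from the earliest offset
kafka-console-consumer.sh --topic clickstream --from-beginning --bootstrap-server localhost:9092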

Why use Apache Kafka – 2 ways it transforms clouds

Providing scalability

Kafka solves scalability challenges because partitions of a topic can independently manage reads and writes from subscribers and producers. Partitions let Kafka perform multiple reads and writes in parallel, and they maintain speed as the number of subscribers and producers increases. Finally, Kafka keeps order when multiple producers write to the same topic at the same time, so no data is lost.

Scalability achieved with partitions allows organisations to grow their cloud environment and makes it easier to add new subscribers or publishers to existing topics. Apache Kafka lets a cloud environment scale horizontally, with new nodes easily added to existing infrastructure. Kafka also vertically scales, with higher throughput achieved by making new partitions. 

The reliable choice

Next, Apache Kafka is a robust solution because replicas increase fault tolerance and reliability. Without replication, a Kafka topic would be a single point of failure, and so replicas act as redundancy in emergencies. When a server with a partition goes down, there are separate copies, so the data isn’t lost.

Replicas make Kafka fit for mission-critical workloads. By storing replicas in different availability zones or regions, a cloud environment can reach high availability, and improve their uptime.

How is Apache Kafka used?

So far, we’ve learnt the fundamental concepts of Apache Kafka and why it is advantageous to include in cloud environments. Now, let’s focus on practical applications:

  • Stream processing: It enables you to create real-time data streams. Subscribing apps can process data and transform it, before publishing the data to other subscribing apps.
  • Fast message queue: In a literal sense, it can be used to send and receive messages such as emails and IMs. More generically, it allows message passing in a microservice architecture. Kafka moves messages without knowing the format of the data, and this means it can do so very fast – endpoints decode data with no overhead in the transit process.
  • Data aggregation: Kafka can make a common topic with multiple producers writing to a topic. It solves the complexity of having numerous producers append to a time-sensitive log, and has in-built functionality to arbitrate clashes. It is useful for log data so that multiple sources of log data can be combined.

Challenges of using Apache Kafka

We know Apache Kafka has the features for scalability (partitioning) and reliability (replication). However, to apply the elements to a business use-case takes planning and architecture design.

Primarily, there are physical constraints to how scalable and reliable Kafka can perform. Users need to make sure there is adequate network bandwidth and disk space for clusters. 

Second, careful consideration is needed when selecting the number of partitions for a topic. If it is too high, this will naturally slow the system down. Too low, and publishers or subscribers stall in getting access to a topic.

Finally, replication will only lead to high availability and improved reliability if replicas are stored in different servers, regions and availability zones. Replication requires physical cloud environments to meet business requirements.

Canonical’s webinar on Kafka in production discusses these three topics and more. You can watch it on-demand now.

Optimised Apache Kafka, managed for you

In the spring, we released Managed Apps, and this included Apache Kafka. With Canonical managing your app, you get the following benefits.

  1. We provide the highest quality deployment, optimised to your business needs and with automation wrapped around critical tasks. Automation, achieved by an operator framework, means reduced response-time and human error during operation.
  2. Quality deployment leads to fewer day-2 errors, the main driver of TCO, and so we can offer app management at the lowest possible price-point. It also means your teams are free from doing day-to-day management and can focus on what matters – your business
  3. We manage on any conformant Kubernetes and virtually any cloud environment. This means you know who is responsible for the management, with fewer grey areas.
  4. Fast and lower-risk start. Our team does the heavy lifting so you can focus on using apps and transforming your cloud.

Summary

Apache Kafka offers a general-purpose backbone for all your cloud’s data needs. It provides practical solutions to get the reliability and scalability needed in any cloud environment. It is flexible enough to be essential for many use cases. To optimise your deployment and improve quality and economics, speak to our engineers today.

To learn more about Canonical’s Managed Apps offerings, check out our webinar on Managed Apps and Apache Kafka in production.

29 June, 2020 01:04PM

Jonathan Riddell: OpenUK Awards Close Tomorrow

OpenUK Awards are nearly closed. Do you know of projects that deserve recognition?
 
Entries close at midnight UTC tomorrow
 
Individual, young person, or open source software, open hardware or open data project or company
 
The awards are open to individuals resident in the UK in the last year and projects and organisations with notable open source contributions from individuals resident in the UK in the last year.

29 June, 2020 09:52AM

June 27, 2020

hackergotchi for SparkyLinux

SparkyLinux

Visual Studio Code & VSCodium

There are 2 new applications available for Sparkers/Developers: Visual Studio Code & VSCodium

What is Visual Studio Code?

Visual Studio Code is a distribution of the Code – OSS repository with Microsoft specific customizations released under a traditional Microsoft product license.
Visual Studio Code combines the simplicity of a code editor with what developers need for their core edit-build-debug cycle. It provides comprehensive code editing, navigation, and understanding support along with lightweight debugging, a rich extensibility model, and lightweight integration with existing tools.

Installation (amd64 only)

sudo apt update
sudo apt install code

or via APTus-> Office-> Visual Studio Code icon.

Visual Studio Code

Copyright (c) Microsoft Corporation. All rights reserved.
License: MIT
GitHub: github.com/microsoft/vscode

What is VSCodium?

This is not a fork (of Visual Studio Code). This is a repository of scripts to automatically build Microsoft’s vscode repository into freely-licensed binaries with a community-driven default configuration.
This repository contains build files to generate free release binaries of Microsoft’s VSCode. When we speak of “free software”, we’re talking about freedom, not price. Microsoft’s downloads of Visual Studio Code are licensed under this not-FLOSS license and contain telemetry/tracking.

Installation (amd64 and ARMHF)

sudo apt update
sudo apt install codium

or via APTus-> Office-> VSCodium icon.

VSCodium

Copyright (c) Microsoft Corporation. All rights reserved.
License: MIT
GitHub: github.com/VSCodium/vscodium

 

27 June, 2020 12:09PM by pavroo

June 26, 2020

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: Adapting To Circumstances

I have written prior that I wound up getting a new laptop. Due to the terms of getting the laptop I ended up paying not just for a license for Windows 10 Professional but also for Microsoft Office. As you might imagine I am not about to burn that much money at the moment. With the advent of the Windows Subsystem for Linux I am trying to work through using it to handle my Linux needs at the moment.

Besides, I did not realize OpenSSH was available as an optional feature for Windows 10 as well. That makes handling the herd of Raspberry Pi boards a bit easier. Having the WSL2 window open doing one thing and a PowerShell window open running OpenSSH makes life simple. PowerShell running OpenSSH is a bit easier to use compared to PuTTY so far.

The Ubuntu Wiki mentions that you can run graphical applications using Windows Subsystem for Linux. The directions appear to work for most people. On my laptop, though, they most certainly did not work.

After review, the directions turned out to be based on discussion in a bug on GitHub where somebody came up with a clever regex. The problem is that the kludge only works if your machine acts as its own nameserver. When I followed the instructions as written, my WSL2 installation of 20.04 dutifully tried to open an X11 session on the machine where I said the display was.

Unfortunately that regex took a look at what it found on my machine and decided that the display happened to be on my ISP’s nameserver. X11 is a network protocol where you can run a program on one computer and have it paint the screen on another computer, though that’s not really a contemporary usage. Thin clients like actual physical X Terminals from a company like Wyse would fit that paradigm, though.

After a wee bit of frustration where I was initially not seeing the problem, I found it there. Considering how strangely my ISP has been acting lately, I most certainly do not want to try to run my own nameserver locally. Weirdness by my ISP is a matter for separate discussion, alas.

I inserted the following into my .bashrc to get the X server working:

export DISPLAY=$(landscape-sysinfo --sysinfo-plugins=Network | grep IPv4 | perl -pe 's/ IPv4 address for wifi0: //'):0

Considering that my laptop normally connects to the Internet via Wi-Fi I used the same landscape tool that the message of the day updater uses to grab what my IP happens to be. Getting my IPv4 address is sufficient for now. With usage of grep and a Perl one-liner I get my address in a usable form to point my X server the right way.

Elegant? Not really. Does it get the job done? Yes. I recognize that it will need adjusting but I will cross that bridge when I approach it.
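One possible adjustment, noted here only as a hedged alternative rather than anything from the original write-up, is to lean on hostname instead of landscape-sysinfo (this assumes the first address it prints belongs to the active interface):

# take the first IPv4 address the host reports and point X11 at it
export DISPLAY=$(hostname -I | awk '{print $1}'):0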

Since the original bug thread on GitHub is a bit buried, the best thing I can do is to share this and to mention that the page is on the wiki at https://wiki.ubuntu.com/WSL. WSL2 will be growing and evolving. I suspect this minor matter of graphical applications will be part of that evolution.

26 June, 2020 10:14PM

Full Circle Magazine: Full Circle Magazine #158

This month:
* Command & Conquer
* How-To : Python, Ubuntu On a 2-in-1 Tablet, and Rawtherapee
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback
* Everyday Ubuntu : Starting Again Ubports Touch
* Review : Kubuntu, and Xubuntu 20.04
* Ubuntu Games : Into The Breach
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot! https://fullcirclemagazine.org/issue-158/

26 June, 2020 07:04PM

hackergotchi for Freedombone

Freedombone

Without javascript

I've now gotten around to fully removing javascript from Epicyon. Previously it only had a few lines of javascript for the content warning button, dropdown menu and for moving the cursor to the end of some text, and that's now all been replaced by html.

It's easy to see why web systems become bloated with megabytes of javascript. A lot of programming involves web searches to look up how to do things and for web content such as a dropdown menu it's highly probable that the first example you'll encounter will be written in javascript. It's actually quite difficult to avoid ending up adding some amount of javascript. You have to be consciously not wanting to do that, rather than just going with what's easy.

Javascript isn't necessarily always bad, but it can create security problems and in a Tor browser it often involves the inconvenient user experience of having to fiddle with NoScript - which itself has a rather hostile interface - to allow it on particular sites that you trust.

26 June, 2020 11:08AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Hacker Culture Reading List

A friend recently asked me if I could recommend some reading about hacking and security culture. I gave a couple of quick answers, but it inspired me to write a blog post in case anyone else is looking for similar content. Unless otherwise noted, I’ve read all of these books/resources and can recommend them.

Nonfiction

Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World

Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World is a well-researched deep dive into one of the original and most significant hacking groups. Members of the cDc have been involved in many of the early fundamental techniques and tools in the world of hacking. Even now, decades later, they continue to influence the fields of hacking and cybersecurity, through activities like member Beto O’Rourke’s influences in politics, major roles in the cybersecurity industry, and other positions. They’ve had members testify before Congress, involved in running DARPA, and the development of privacy technology Tor. There’s also a great companion talk to go with the book.

Breaking and Entering: The Extraordinary Story of a Hacker Called "Alien"

Breaking and Entering: The Extraordinary Story of a Hacker Called “Alien” covers a story of a hacker who started her foray into exploring the restricted during her time at MIT. The hacking done there was more akin to what we might call urban exploration today, but was called hacking at the time. The inquisitiveness of wanting to explore what was “verboten” is what has lead to generations of great hacks we’ve seen since. Alien takes her interest in the forbidden and brings it to the digital age through her computer exploits and develops into one of the most talented hacking careers. Her skills aren’t limited to the keyboard, however, and she takes things into her own hands and starts her own business.

Ghost in the Wires: My Adventures as the World's Most Wanted Hacker

Love him or hate him, Kevin Mitnick is both one of the best known hackers of all time as well as a significant influence in the hacking and phone phreaking scenes of the 1990s. Ghost in the Wires: My Adventures as the World’s Most Wanted Hacker documents the times he was on the run from federal authorities while labeled as the “world’s most dangerous hacker.” While the book is not very technical at all, it describes some great social engineering exploits and is an enjoyable read to get to understand the actions involved in escaping the Feds. Even though they’re older books, I also enjoy Mitnick’s The Art of Deception: Controlling the Human Element of Security and The Art of Intrusion: The Real Stories Behind the Exploits of Hackers, Intruders and Deceivers.

The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage

The Cuckoo’s Egg goes back to an era of timesharing on mainframe computer systems. Astronomer turned systems administrator turned author Cliff Stoll details how an accounting error turned into a hunt for a hacker who had compromised the timesharing system at Lawrence Berkeley Labs. It’s not just some bored student or phreaker messing with the system – it turns into a major intelligence and criminal investigation, leading to a major bust. Oh, and that accounting error? It was over 75 cents. This may well be one of the earliest hacking investigations to be documented and publicized in this way. It’s a hybrid of detective story and hacking story, and is just the right length to tell the tale.

Crypto: How the Code Rebels Beat the Government -- Saving Privacy in the Digital Age

Steven Levy’s Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age describes the first Crypto Wars, in the mid 1990s. It discusses the issues and implications of access to cryptography, why the government wants to control access to cryptography (to control information) and how the issues played out. This may be one of the most timely books in this list given the issues at play with the US legislature, the laws recently passed in Australia, and other issues at hand. Paraphrasing George Santayana, “Those who do not remember their past are condemned to repeat their mistakes.”

Hackers: Heroes of the Computer Revolution

Also by Steven Levy, Hackers: Heroes of the Computer Revolution is a great profile of the original hackers, even before the times of cybersecurity. The book covers the early “hackers” of computers like Bill Gates, Steve Wozniak, and Richard Stallman and the transition from the mainframe computing world to the world of computers in every home. Today we’ve transitioned to computers in every pocket, but the evolution was begun by these early computer enthusiasts. Without their efforts (and their sometimes bending the rules), we wouldn’t have the hacking scene we do today. Steven Levy covers the history in depth and with a great amount of detail.

Fiction (Culturally Influencing)

Neuromancer

William Gibson, author of Neuromancer, the first book in the Sprawl trilogy, is the father of the term “cyberspace”, giving us “cybersecurity”. I’m not sure whether to thank him or hate him for the term “cybersecurity”, but I do know that this book is one of the defining books of the “cyber” culture, including modern hackers, cyberpunks, and a large part of the culture surrounding the realm of hacking. It’s likely that this book (and series) has influenced an entire generation of science fiction writers and the surrounding culture. The book is a literary masterpiece in its own right, winning both a Philip K. Dick Award and a Hugo Award. The other two books in the Sprawl trilogy are Count Zero and Mona Lisa Overdrive. This is, without a doubt, my favorite book trilogy and one of my favorite books of all time.

Snow Crash

Snow Crash by Neal Stephenson is set in the Metaverse and heavily features virtual reality being used as a substitute for, well, reality. In many ways, it’s a 21st century take on Neuromancer but also brings into play a blend of old and new culture and truly makes you think about where society is headed. This book managed to make Time’s list of 100 best English-language novels and is also one of my top 10 books. Neal Stephenson is an imaginative author with an eye for the future that makes you think. His book Cryptonomicon is another of my favorites.

Digital Fortress

Digital Fortress by Dan Brown (author of The Da Vinci Code) is an all-too-real fictional account of a secret NSA supercomputer capable of breaking any encryption system. With malware introduced into the computer, the system is beginning to break down and they must uncover the story of what has happened and how. With the author of the code infecting the machine dead, the members of the NSA cryptography team must work to figure out what’s behind the malware and the code it contained. This novel is deeply engrossing – the first time I read it, I ended up missing a whole night of sleep reading it. (I can’t recommend this approach, especially if you have to go to work the next day.)

Little Brother

Little Brother by Cory Doctorow is a modern day take on Orwell’s 1984, updated for the technologies and organizations of today. Quite frankly, it’s so realistic to me that it’s deeply unsettling – in the uncanny valley of government surveillance. It’s a reminder that we have to be careful of the way we treat our privacy, our rights, and the power of our government. Doctorow has a scary outlook on life, but it’s an important read for anyone concerned about the state of our society. Though written as a “young adult” novel, I found it an engaging, interesting and thought-provoking read. In fact, I’ve read it at least twice, along with Homeland. If you’re more the novella type, I can strongly recommend Overclocked, though the story about sysadmins is a bit of a horror story. (Though maybe you like that sort of thing!)

Guilty Pleasures

Though I can’t recommend them as “high quality literature”, there are a few books I enjoy reading in the vein of hacking. These include:

26 June, 2020 07:00AM

June 25, 2020

hackergotchi for Cumulus Linux

Cumulus Linux

Kernel of Truth season 3 episode 8: Cumulus Linux in action at Cloudscale

Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Cast Box and Stitcher!

Click here for our previous episode.

If you’ve listened to the podcast before you may have heard us reference our customers from time to time. In this episode we’re switching things up and instead of referencing a customer, you’re going to hear directly from one! Manuel Schweizer, CEO of Cloudscale, joins host Roopa Prabhu, Attilla de Groot and Mark Horsfield to chat about Cloudscale’s first hand experience with open networking and what they hope the near and distant future of open networking will look like.

Guest Bios

Roopa Prabhu: Roopa Prabhu is a Linux Architect at Cumulus Networks, now NVIDIA. At Cumulus she and her team work on all things kernel networking and Linux system infrastructure areas. Her primary focus areas in the Linux kernel are Linux bridge, Netlink, VxLAN, Lightweight tunnels. She is currently focused on building Linux kernel dataplane for E-VPN. She loves working with the Linux kernel networking and debian communities. Her past experience includes Linux clusters, ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at @__roopa.

Attilla de Groot: Attilla has spent the last 15 years at the cutting edge of networking, having spent time with KPN, Amsterdam Internet Exchange, and HP, with exposure to technology from Cisco, HP, Juniper, and Huawei. He now works for NVIDIA networking, formerly Cumulus Networks, the creators of open networking, where he is able to continue his interest in open architecture design and automation. You can find him on Twitter at @packet_ninja.

Manuel Schweizer: For more than 10 years, Manuel has been working as a Network Engineer at various companies. Seeing demand for a self-service IaaS platform based on open-source technology that is entirely based in Switzerland, he basically founded cloudscale.ch “by accident”. He can be found on Twitter at @geitguet. 

Mark Horsfield: Mark is a member of the technical support team at Cumulus Networks, now NVIDIA. Behind the scenes our support team is body slamming packets so you don’t have to.

Episode links

Join our community Slack channel here. We’re also on LinkedIn, Twitter, Facebook and Instagram!

To learn more about our acquisition by NVIDIA, view our resource hub here.

25 June, 2020 04:45PM by Katie Weaver

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S13E14 – Ace of spades

This week we’ve been playing Command & Conquer. We discuss your recent feedback about snaps and Ubuntu rolling release. Then we bring you some command line love and go over the rest of your wonderful feedback.

It’s Season 13 Episode 14 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We discuss all your feedback about snaps and rolling releases.
  • We share a Command Line Lurve:
sudo apt install python3-proselint
proselint text.md

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

25 June, 2020 02:00PM

Ubuntu Blog: Split Personality Snaps

Broadly speaking, most snaps in the Snap Store fall into one of two categories: desktop applications and server daemons. The graphical applications such as Chromium and Spotify use desktop files, which ensure they can be opened on demand by any user via a menu or launcher. The server applications such as NextCloud and AdGuard-Home typically have systemd units, which control their automatic (background) startup.

Taking an existing desktop application and converting it to an always-running appliance leads to some interesting engineering challenges. Applications and games tend to have expectations for what programs and services are accessible at runtime, which need mitigating. Application confinement in snaps on Ubuntu Core means some assumptions about file and device access may no longer apply.  

We will typically need to stand up a configuration in which the application believes it’s running in a standard desktop environment. The application also needs to start automatically in an appliance setting, but launch on demand in a desktop environment.

We can be quite creative with snaps and build a “split personality” snap that can run both as a desktop application and as an appliance!

Building Blocks

Ubuntu Core doesn’t ship with Xorg or PulseAudio out of the box. This isn’t a problem if the appliance doesn’t require a connected monitor or make any sounds, as is the case with Plex Media Server and Mosquitto, but it can be a problem with applications which require access to the display or sound hardware.

Unlike multi-user desktop environments, appliances tend to only require one local system user under which services automatically start. On a desktop system, the logged-in user launches applications on demand, whereas with an appliance, a system user launches the appliance on boot. 

Our snap needs to cater for running in a desktop environment where Xorg and PulseAudio exist, and on an Ubuntu Core system where they don’t. It also needs to be capable of being launched manually in a window, but auto-start full-screen on an appliance.

Multiple Personality Order

Let’s have a look at ScummVM, where we’ve done this work already.

ScummVM is a program that allows you to run a variety of  classic graphical point-and-click adventure games like The Secret of Monkey Island, provided you already have their data files. This makes it a great asset in the hands of old-game fans.

It’s published as a snap in the Snap Store, and runs on many different desktop Linux distributions. We thought it might be fun to make a “ScummVM Appliance” using this snap on top of Ubuntu Core. The goal is to create a single purpose device, which boots directly to the game selection screen, and functions like a ScummVM console. 

As Ubuntu Core has no Xorg, we need to use mir-kiosk to provide a Wayland-compatible display stack to drive an external monitor. Ubuntu Core also ships without PulseAudio, so  we’ll get no in-game sound. This is resolved by installing the pulseaudio snap. Keyboards & mice will just work as expected.
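
On an Ubuntu Core device, that boils down to installing a handful of snaps before the appliance can drive a screen and speakers. A rough sketch (the exact channels and interface connections may vary depending on your image):

snap install mir-kiosk
snap install pulseaudio
snap install scummvm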

To enable the snap to function both on-demand on a desktop, and launch automatically on an appliance, we need to create two ‘apps’ stanzas in the snapcraft.yaml file that is used to build the ScummVM snap. 

apps:
  scummvm:
    command: desktop-launch $SNAP/bin/wayland-if-possible.sh $SNAP/bin/scummvm-launch.sh
  daemon:
    command: daemon-start.sh $SNAP/bin/scummvm-launch.sh -f
    daemon: simple

The daemon: simple directive means the specified command will launch automatically on start.

This command includes logic that detects whether the program is running on an Ubuntu Core appliance, and if it is, it will launch using Mir as the Wayland display system. If the logic determines it’s not running on an Ubuntu Core appliance, the script exits immediately.

#!/bin/sh
if [ "$(id -u)" = "0" ] && [ "$(snapctl get daemon)" = "false" ]
then
  # If not configured to run as a daemon we have to stop here
  # (There's no "snapctl disable …")
  snapctl stop $SNAP_NAME.daemon
  exit 0
fi
mkdir -p "$XDG_RUNTIME_DIR" -m 700
if [ -z "${WAYLAND_DISPLAY}" ]
then WAYLAND_DISPLAY=wayland-0
fi
real_wayland=$(dirname "$XDG_RUNTIME_DIR")/${WAYLAND_DISPLAY}
while [ ! -O "${real_wayland}" ]; do echo waiting for Wayland socket; sleep 4; done
ln -sf "${real_wayland}" "$XDG_RUNTIME_DIR"
exec "$@"

That’s essentially all we need! One single yaml and a smattering of scripting is all that’s required to make a dual-personality snap that can run on-demand in desktop contexts, and automatically when run as an appliance.

ScummVM running on a traditional Linux desktop


ScummVM running under Ubuntu Core in a Virtual Machine

Because the ScummVM snap is built for both x86 and ARM based CPUs, we could create a simple self-updating boot-to-game device based on a low-cost device like a Raspberry Pi. Plug in a keyboard, screen and mouse, and we can be gaming like it’s 1990 all over again.

If you have an idea for something we could turn into an official Ubuntu Appliance, please join the discourse and post a suggestion in the proposed new appliances category.

25 June, 2020 01:50PM

Ubuntu Blog: Open source holds the key to autonomous vehicles

A growing number of car companies have made their autonomous vehicle (AV) datasets public in recent years. 

Daimler fueled the trend by making its Cityscapes dataset freely available in 2016. Baidu and Aptiv respectively shared the ApolloScapes and nuScenes datasets in 2018. Lyft, Waymo and Argo followed suit in 2019. And more recently, automotive juggernauts Ford and Audi released datasets from their AV research programs to the public.

Given the potential of self-driving cars to considerably disrupt transportation as we know it, it is worth taking a moment to explore what has motivated these automotive players — otherwise fiercely protective of their intellectual property — to openly share their precious AV datasets with each other and with the wider world.

The idea of AV datasets

AV prototypes come with a bunch of integrated sensors. Cameras, lidars, radars, sonars, GPS, IMUs, thermometers, hygrometers, you name it. Each of these sensors specialises in gathering one specific kind of information about the car’s environment.

Now imagine a fleet of such prototypes driven through different environments under varying traffic, weather and lighting conditions, all the while recording observations from its suite of sensors. The result is an abundant amount of raw data.

Prime up this data through scaling, normalising and removing corrupt values, put it all in one coherent collection, and what you are left with is a nifty AV dataset. The idea of such a dataset is to gather as much information as possible about real world conditions that a self-driving car could find itself in. Why? We’ll get to that in a moment.

For now, let’s talk about data enrichment. Once the dataset is primed, one can go a step further and also label this data with attributes defining the objects perceived by the car. This provides the ground truth for an observation.

This is a vehicle, that is a pole, here is a person, and there’s a manhole [Src: Waymo]

Now a program trying to crunch sensor data and find patterns in it can confirm what it is actually looking at. Kind of like a puzzle that has the correct answers at the end of the book. This renders an AV dataset incredibly useful for machine learning tasks.

The importance of datasets

Simply put, datasets are important because AVs rely heavily on machine learning algorithms, and machine learning (ML) in turn relies heavily on observation data.

Let’s unpack that a little.

ML is a branch of programming that deals with building systems that can automatically learn and improve from experience. That is, they can carry out their tasks without explicitly being programmed to handle individual events occurring in the process.

This is crucial for applications where a system can be exposed to virtually an infinite number of scenarios, like a car driving in everyday life. It is impossible to account for every single case that this car may encounter, so we need a mechanism to have the car make decisions about new scenarios based on prior experience. That is the crux of ML. 

But where does its experience come from? How does a machine learn?

Enter, datasets.

By examining field data, ML algorithms can deduce patterns in a system and continue fine tuning their behaviour until they demonstrate optimal results across a diverse set of use-cases. 

The more data an algorithm crunches, the more it learns, and the better it enables an AV to correctly respond to its environment. Like when to turn, when to stop, when to drive forward and when to give way to other vehicles.

Trained on the right datasets, ML algorithms can be extremely potent in handling even the most unforeseen of circumstances. Stanford professor Andrew Ng famously emphasised this relationship in his lecture series on machine learning: “It’s not who has the best algorithm that wins. It’s who has the most data.”

Which begs the question: If data is indeed the winning factor, why in the world would automotive companies just give away their datasets to the public for free?

The hype machine sputters

To understand this, we must first recognise that AV technology has fallen far short of all the hype and attention it had garnered over the last decade.

Nissan no longer plans to bring its promised driverless vehicle to market this year. Volvo never deployed its much touted fleet of 100 self-driving cars. GM’s ambitions for mass-producing vehicles without steering wheels or pedals are yet to be realised. And I doubt Tesla will have a million robotaxis on the streets by the end of 2020. Even Ford admitted that this technology is “way in the future”, despite having previously announced its own robotaxi rollout by 2021.

As the hype machine sputters, automakers have been left to reckon with the real challenges of developing a practical AV solution. It turns out that building a computer that matches the incredibly nuanced cognitive decisions we make each time we take the driver’s seat is an enormous challenge.

And car companies are slowly realising the limitations of their own resources in tackling these challenges alone.

AVs have proven to be far harder than auto companies previously thought.

At this point, what can automotive companies do to make meaningful advancements in AV technology?

A plausible answer lies in open source. 

Open source to the rescue

In The Wealth of Networks, Yochai Benkler describes the notion of open source as a mode of production that is “based on sharing resources and outputs among widely distributed, loosely connected individuals who cooperate with each other without relying on either market signals or managerial commands”.

AV companies have spent millions of dollars and thousands of man-hours accumulating several petabytes of field data. At this scale, it is virtually impossible for a single team to sift through all this data and gainfully apply it to any practical application. Even setting aside the manpower involved, the sheer amount of computing resources needed to run even the most basic ML techniques on such data exceeds the capacity of a single organisation.

Several prominent organizations have contributed to producing AV datasets in recent years.

On the other hand, sharing these massive datasets with the public allows individual developers and smaller teams from all over the world to target specific problems which they can attempt to solve with a subset of the data. They can independently work on constructing new algorithms, enhance existing ML models and in general make progress towards addressing key problems in AV technology, all without the extra burden of collecting, cleaning and priming their own datasets.

This is especially important because not many teams out there have the ability to gather the kind of high quality experimental data that automotive outfits are able to collect at scale. By releasing their AV datasets, companies are essentially leveling the playing field. A developer sitting in his garage now has the same opportunity to create the next groundbreaking AV algorithm as an R&D engineer working in a well-funded lab.

Open datasets thereby lower the barriers to innovation. They accelerate technology development. Like any work in the open source community, they set up a framework that prioritises mutual advancement over individual copyright.

Leveling the playing field spurs innovation.

Moreover, without well-primed datasets, researchers cannot reasonably be expected to achieve the breakthroughs that are crucial to commercial success in this cutting-edge field. In fact, the criticality of a good dataset to ML research — and by proxy, to AV technology — is tastefully expressed by Google’s adage “Garbage in, garbage out”. The usefulness of an ML model is only as good as the dataset it has been trained on. So high-quality data is needed to create software that can reliably teach autonomous vehicles how to interact with their environments. 

Shared datasets also allow engineers from different companies to collaborate on the same data — or, being in the public domain, even combine their individual datasets — to solve problems that are pertinent to both parties. With a common dataset as a shared foundation, teams at different locations and belonging to different organisations can easily replicate results and share code to further refine programs, all the while inching towards greater levels of autonomy.

Democratising AV development through shared datasets benefits the entire community, and a rising tide lifts all boats.

Harnessing the power of open source

Since open source projects are non-proprietary, researchers and developers ultimately share their findings, solutions, and new knowledge back with the community. Even with their raw field data made public, companies can retain competitive advantage by improving their AV technology based on contributions from open source and supporting it commercially. Thus, the upfront investment that auto companies make in collecting AV datasets eventually pays off when new breakthroughs from the community are integrated back into the companies’ products.

An ecosystem for Autonomous Vehicle development founded on open source.

In the true spirit of open source, a symbiotic relationship is established from sharing AV datasets with the public. Researchers gain recognition for their novel insights. Developers build an industry repute for contributions to open source projects. And companies can integrate these new advancements into their own products, thus strengthening their portfolio and bringing new features to their customers faster.

By allowing more people to contribute to the field, car companies can harness the economics of open source and benefit from faster software cycles, more reliable codebases, and volunteer help from some of the brightest minds in the world. 

Automotive companies are beginning to understand this, and the industry will greatly benefit if this trend becomes the default.


To find out how Canonical can help with your automotive project, get in touch with us today.

25 June, 2020 09:23AM

Ubuntu Blog: Ceph storage on VMware

If you were thinking that nothing will change in your VMware data centre in the coming years, think again. Data centre storage is experiencing a paradigm shift. Software-defined storage solutions, such as Ceph, bring flexibility and reduce operational costs. As a result, Ceph storage on VMware has the potential to revolutionise VMware clusters, where SAN (Storage Area Network) has been the incumbent for many years.

SAN is the default storage for VMware in most people’s minds, but Ceph is gaining momentum. In this blog, we will do a high-level comparison of SAN and Ceph to highlight how Ceph storage on VMware makes sense as the traditional data centre is moving towards a world of low operating costs through automation that leaves space for more R&D and innovation.

Introduction to SAN and Ceph

SAN arrays were introduced to solve block-level data sharing in enterprises. They use high-speed Fibre Channel or the iSCSI protocol to expose block devices over a network. SAN runs on expensive storage servers, requires ethernet or fibre channel network interface cards, and can be complex to maintain. On the upside, SAN has well-renowned performance and allows data durability through RAID configuration.

Ceph is an open source project that provides block, file and object storage through a cluster of commodity hardware over a TCP/IP network. It allows companies to escape vendor lock-in without compromising on performance or features. Ceph ensures data durability through replication and allows users to define the number of data replicas that will be distributed across the cluster. Ceph has been a central offering by Canonical since its first version, Argonaut. Canonical’s Charmed Ceph wraps each Ceph component with operational code – referred to as a charm – to abstract operational complexity and drive lifecycle management automation.

A SAN topology

SAN vs Ceph: Cost

A traditional SAN is usually expensive and leads to vendor lock-in. When it is time to scale, the SAN options are limited to buying more expensive hardware from the original vendor. Similarly, if vendor support reaches its end of life, an expensive migration may be needed. The cost of training, operations and maintenance of a SAN array is also significant. 

Ceph provides software-defined storage, allowing users to choose off-the-shelf hardware that matches their requirements and budget. The servers of a Ceph cluster do not need to be of the same type, so when it is time to expand, a new breed of servers can be seamlessly integrated into the existing cluster. Moreover, when using Charmed Ceph, software maintenance costs are low. The charms are written by Ceph experts and encapsulate all tasks a cluster is likely to undergo – for example, expanding or contracting the cluster, replacing disks, or adding an object store or an iSCSI gateway.

SAN vs Ceph: Performance

In the legacy data centre, SAN arrays were prominently used for their performance as storage back-ends for databases. In the modern world, where big data and unstructured data brought new requirements, people have shifted from SAN’s high-cost performance to lower-cost, smarter approaches based on file and object storage strategies.

Ceph clients calculate where the data they require is located rather than having to perform a look-up in a central location. This removes a traditional bottleneck in storage systems where a metadata lookup in a central service is required. This allows a Ceph cluster to be expanded without any loss in performance.

A Ceph cluster of commodity hardware

Ceph features that make the difference

SAN as the go-to solution for virtualisation infrastructure is gradually giving way to software-defined solutions, which are proving to be cheaper, faster, more scalable and easier to maintain. Ceph provides the enterprise features that are now widely relied upon. It allows scaling the cluster up and down on demand, caching data, applying policies to disks, and a lot more.

Ceph block devices are thin-provisioned, resizable and striped by default. Ceph provides copy-on-write and copy-on-read snapshot support. Volumes can be replicated across geographic regions. Storage can be presented in multiple ways: RBD, iSCSI, filesystem and object, all from the same cluster.
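
For illustration, this is roughly what thin provisioning, snapshotting and resizing look like with the standard rbd command line tool (the pool and image names are placeholders):

# Create a thin-provisioned 10 GiB block device in the 'vms' pool
rbd create vms/disk01 --size 10G
# Take a copy-on-write snapshot, then grow the image
rbd snap create vms/disk01@before-upgrade
rbd resize vms/disk01 --size 20G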

With Ceph, users can set up caching tiers to optimise I/O for a subset of their data. Storage can also be tiered. For example, large slow drives can be assigned to one pool for archive data while a fast SSD pool may be set up for frequently accessed hot data.

With regards to integration with modern infrastructure, Ceph has been a core component of OpenStack from its early days. Ceph can back the image store, the block device service and the object storage in an OpenStack cluster. Ceph also integrates with Kubernetes and containers. 

The Ceph charms provide reusable code for integration and operations automation for both OpenStack and Kubernetes. For example, the charms make it easy to provide durable storage devices from the Ceph cluster to containers.

Why should you consider Ceph as a VMware storage backend?

SAN has been traditionally considered as the backend for VMware infrastructure. When designing their data centre, enterprises have to do complex TCO calculations for both their compute and storage virtual infrastructure that have to include performance, placement and cost considerations. Software-defined storage can help eliminate some of that complexity, leveraging the aforementioned benefits. 

Charmed Ceph has recently introduced support for the iSCSI gateway that can be deployed alongside the Ceph monitors and OSDs, and provide Ceph storage straight to VMware ESXi hosts. The ESXi hosts have had support for datastores backed by iSCSI for some time so adding iSCSI volumes provided by Ceph is straightforward. The iSCSI gateway even provides actions to set up the iSCSI targets on behalf of the end-user. Once the datastore has been created, virtual machines (VMs) can be created within VMware backed by the Ceph cluster.
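
As a rough sketch of what such a deployment can look like with Juju (the unit counts, storage constraints and action parameters below are illustrative; check the ceph-iscsi charm documentation for the exact options):

juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2
juju deploy -n 2 ceph-iscsi
juju add-relation ceph-osd ceph-mon
juju add-relation ceph-iscsi ceph-mon
# Expose an RBD image as an iSCSI target for an ESXi initiator
juju run-action --wait ceph-iscsi/0 create-target \
    image-name=vmware-datastore01 image-size=10G \
    client-initiatorname="iqn.1998-01.com.vmware:esxi-host01" \
    client-username=vmwareadmin client-password=changeme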

One might ask why not use VMware’s own software-defined storage in this case. Ceph’s ability to provide a central place for all of a company’s data is the key differentiator here. Rather than having a dedicated VMware storage solution and another storage solution for object storage, Ceph provides everything under a single solution. Compared to both SAN and VMware storage costs, Ceph is cost-effective at scale, without compromising on the features or performance of SAN.


Learn more about Charmed Ceph or contact us about your datacenter storage needs.

25 June, 2020 07:30AM

David Tomaschik: Stop EARN IT and LAED

Unless you’ve been living under a rock, you know that the Crypto Wars are back. Politicians, seemingly led by Senator Lindsey Graham of South Carolina, seem bound and determined to undermine users’ privacy and security online to strengthen the power of the police state. It will have disproportionate effects on the innocent rather than criminals, and will raise operating costs and make it much harder for small businesses and startups to compete in the US.

  1. Much like guns and nuclear weapons, the cryptography genie is already out of the bottle. Inserting backdoors or limiting access to encryption will affect law-abiding citizens, but criminals will be able to continue to use encryption software that already exists. In fact, the Al Qaeda terrorist organization already develops their own encryption software. It’s not like they’ll comply with US laws. While we might succeed in reducing their access to some types of encryption (e.g., encrypted phones), we won’t be able to completely eliminate it for motivated criminal enterprises or terror cells.
  2. There are a lot of legitimate reasons to want to use end-to-end encryption or full device encryption. Do companies want their sensitive data accessible to competitors? Do individuals want their data available to someone who finds their phone in a cab or steals it? Journalists want to be able to communicate with their sources in confidence, and attorneys and doctors should be able to securely encrypt their privileged files. The United States Senate even encourages Senators to use end-to-end encryption, as does the 82nd Airborne Division of the US Army.
  3. There is no such thing as good guy only access. Being good or evil is a matter of perspective and ethics, and technology does not recognize those. Any backdoor, key escrow, or other system designed to comply with these laws is subject to abuse by malicious governments, malicious insiders, or criminals. Cryptographer and professor Matthew Green says so, Bruce Schneier says so, and I say so. We’ve seen providers with stored keys breached before, so it would be pretty surprising if it didn’t happen again. The only way to keep the keys from being compromised is for the provider to not have them at all.
  4. It will decrease trust in American service providers. Look at the way Huawei and ZTE are treated because of potential Chinese backdoors. Why would another country want the US government to have a backdoor into communications they use? Even if you believe intent is good (and stopping child abuse is), the way the US government has used spying capabilities in the past raises serious concerns.

There’s good analysis on both EARN IT and LAED, the two bills introduced by Senator Graham here:

Based on EFF language, I wrote to my Senators and Representative the following:

I write you as both a constituent and in my personal capacity as an expert in cybersecurity. For most of the past decade, I have been employed as a senior security engineer at a large technology company, have spoken at multiple conferences on information security, and have published articles on the matter.

I strongly urge you to reject both the EARN IT Act (S.3398) and the Lawful Access to Encrypted Data Act. They both pose an existential threat to online privacy and security.

End-to-end encryption protects innocent and law-abiding users against data breaches at their service providers. As we’ve seen time and time again, persons are irreversibly harmed when their communications are leaked, and requiring backdoor access for the government opens that backdoor to abuse by foreign governments and criminals.

The Graham-Blumenthal bill would give the Attorney General far too much power to dictate how Internet companies must operate. Attorney General William Barr has made it clear that he would use that authority to undermine our right to private and secure communications by blocking encryption. Additionally, passing on this power to the Attorney General leaves too much to the whims of each administration, resulting in a great deal of uncertainty regarding the future course of things.

The bill would create a commission tasked with creating “best practices” for owners of Internet platforms to “prevent, reduce, and respond” to child exploitation online. But far from mere recommendations, those “best practices” would be approved by Congress as legal requirements. The EARN IT Act’s structure would let Barr strong-arm the commission to include requirements that tech companies weaken their own encryption systems in order to give law enforcement access to our private communications. Companies could also be required to over-censor speech to comply with the government’s demands, or to bend to future governments’ political agendas in other ways.

Regulations relating to restrictions on speech must reflect a careful balance of competing policy goals and protections for civil liberties. Congress can only strike that balance through an open, transparent lawmaking process. It would be deeply irresponsible for Congress to offload that duty to an unelected commission, and especially not a commission controlled by unelected government officials.

Please publicly oppose the EARN IT Act and the Lawful Access to Encrypted Data Act.

I encourage you to do the same.

25 June, 2020 07:00AM

June 24, 2020

hackergotchi for Grml developers

Grml developers

grml development blog: Grml - new stable release 2020.06 available

Long time no see, but there we are - we just released Grml 2020.06 - Ausgehfuahangl!

This Grml release provides fresh software packages from Debian testing (AKA bullseye). As usual it also incorporates current hardware support and fixes known bugs from the previous Grml release.

More information is available in the release notes of Grml 2020.06.

Grab the latest Grml ISO(s) and spread the word!

Thanks everyone, stay healthy and happy Grml-ing!

24 June, 2020 07:49PM by Michael Prokop

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Data science workflows on Kubernetes with Kubeflow pipelines: Part 1

Kubeflow Pipelines are a great way to build portable, scalable machine learning workflows. They are one part of a larger Kubeflow ecosystem that aims to reduce the complexity and time involved with training and deploying machine learning models at scale.

In this blog series, we demystify Kubeflow pipelines and showcase this method to produce reusable and reproducible data science. 🚀

We go over why Kubeflow brings the right standardization to data science workflows, followed by how this can be achieved through Kubeflow pipelines.

In part 2, we will get our hands dirty! We’ll make use of the Fashion MNIST dataset and the Basic classification with Tensorflow example, and take a step-by-step approach to turn the example model into a Kubeflow pipeline so that you can do the same.

Why use Kubeflow?

A machine learning workflow can involve many steps and keeping all these steps in a set of notebooks or scripts is hard to maintain, share and collaborate on, which leads to large amounts of “Hidden Technical Debt in Machine Learning Systems”.

In addition, it is typical that these steps are run on different systems. In an initial phase of experimentation, a data scientist will work at a developer workstation or an on-prem training rig; training at scale will typically happen in a cloud environment (private, hybrid, or public), while inference and distributed training often happen at the Edge.

Containers provide the right encapsulation, avoiding the need for debugging every time a developer changes the execution environment, and Kubernetes brings scheduling and orchestration of containers into the infrastructure.

However, managing ML workloads on top of Kubernetes is still a lot of specialized operations work which we don’t want to add to the data scientist’s role. Kubeflow bridges this gap between AI workloads and Kubernetes, making MLOps more manageable.

What are Kubeflow Pipelines?

Kubeflow pipelines are one of the most important features of Kubeflow and promise to make your AI experiments reproducible, composable (i.e. made of interchangeable components), scalable, and easily shareable.

A pipeline is a codified representation of a machine learning workflow, analogous to the sequence of steps described in the first image, which includes components of the workflow and their respective dependencies. More specifically, a pipeline is a directed acyclic graph (DAG) with a containerized process on each node, which runs on top of argo.

Each pipeline component, represented as a block, is a self-contained piece of code, packaged as a Docker image. It takes inputs (arguments), produces outputs, and performs one step in the pipeline. In the example pipeline above, the transform_data step requires arguments that are produced as outputs of the extract_data and generate_schema steps, and its own outputs are dependencies for train_model.
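
In practice, a pipeline like this is written with the Kubeflow Pipelines SDK and compiled into an Argo workflow definition before it is uploaded. A minimal sketch of that step, assuming the kfp 1.x SDK and a pipeline.py DSL file of your own:

pip install kfp
# Compile the Python DSL definition into a workflow spec that Kubeflow can run
dsl-compile --py pipeline.py --output pipeline.yaml

The resulting pipeline.yaml can then be uploaded through the Kubeflow dashboard or submitted programmatically with the kfp client.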

Your ML code is wrapped into components, where you can:

  • Specify parameters – which become available to edit in the dashboard and configurable for every run.
  • Attach persistent volumes – without adding persistent volumes, we would lose all the data if our notebook was terminated for any reason. 
  • Specify artifacts to be generated – graphs, tables, selected images, models – which end up conveniently stored on the Artifact Store, inside the Kubeflow dashboard.

Finally, when you run the pipeline, each container will now be executed throughout the cluster, according to Kubernetes scheduling, taking dependencies into consideration.

This containerized architecture makes it simple to reuse, share, or swap out components as your workflow changes, which tends to happen.

After running the pipeline, you are able to explore the results on the pipelines interfaces, debug, tweak parameters, and run experiments by executing the pipeline with different parameters or data sources.

Kubeflow dashboard – observing logs from pipeline run.

This way, experiments allow you to save and compare the results of your runs, keep your best performing models, and version control your workflow.

That’s it for this part!

In the next post, we will create the pipeline you see on the last image using the Fashion MNIST dataset and the Basic classification with Tensorflow example, taking a step-by-step approach to turn the example model into a Kubeflow pipeline, so that you can do the same to your own models.

This blog series is part of the joint collaboration between Canonical and Manceps.
Visit our AI consulting and delivery services page to know more.

24 June, 2020 06:02PM

Ubuntu Blog: MAAS 2.8 – new features

What’s new?

This new release of MAAS brings three key new benefits:

  1. Virtual machines with LXD (Beta)
  2. Tighter, more responsive UX
  3. External/remote PostgreSQL database

If you know what you want, go to maas.io/install, otherwise let’s dive in and explore these further.

Virtual machines (VMs) with LXD (Beta)

MAAS 2.8 can set up LXD-based VM hosts and virtual machines. This is an additional option to the existing libvirt-based VM hosts/VMs functionality.

LXD VMs are manageable without requiring SSH access to the VM host, unlike libvirt KVMs.

As a system administrator, using LXD VMs for other staff members means that you don’t have to give them SSH access to the bare metal servers, which gives you better permission control over the estate.

Finally, LXD has a clear API making it easy to deploy and manage. 

Tighter, more responsive UX


Machine listings have been improved; some of the most visible changes involve the way that long lists are presented within categories, as shown below.

Among the other changes are:

  • persisting UI state for grouping
  • new grouping options
  • bookmarkable URLs with filter and search parameters, and many other performance improvements

This was achieved by building the interface from the ground up in React and Redux. If you’re interested in more details, see these blogs on the framework used and speed improvements.

External/remote PostgreSQL database


The MAAS 2.8 snap now has a separate database to allow for scalability. This means that the MAAS database can be located outside the snap either on localhost, or on a remote server altogether. 

This will be the approach going forward so we have prepared guides covering set up, management and configuration. If you are testing MAAS we provide a test DB configuration that embeds the database in a separate snap that can easily be connected to MAAS. To learn more please go to maas.io/docs/install-from-a-snap
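
A minimal sketch of both setups with the 2.8 snap (the hostnames and credentials below are placeholders; see the documentation linked above for the full procedure):

sudo snap install maas --channel=2.8
# Evaluation: use the bundled test database snap
sudo snap install maas-test-db
sudo maas init region+rack --database-uri maas-test-db:///
# Production: point MAAS at an external PostgreSQL server instead
sudo maas init region+rack --database-uri "postgres://maas:password@pg.example.com/maasdb"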

Other improvements

This release also includes many fixes to ensure a high-quality user experience and smooth operations. If you’d like to read more, you can do so here.


24 June, 2020 05:20PM

Ubuntu Blog: Ubuntu Masters 3: the community expands

What is Ubuntu Masters?

The Ubuntu Masters conference stemmed from a vision to bring the engineering community together to freely exchange innovative ideas, in the spirit of open source. After two hugely successful conferences, connecting IT teams across industries and countries, and featuring speakers from innovators such as Adobe, Netflix, Roblox, and more, Ubuntu Masters returns on June 30th.

Register for the event

Rather than letting the current limitations on travel and social interaction get in the way, we are hosting Ubuntu Masters 3 as a virtual event, taking this opportunity to grow the community more than ever before, and offer live access to thousands of attendees for free.

As always, we made sure to invite some of the most creative technology minds to lead the conversations throughout the day. This time, we have the honour of welcoming speakers from Scania, Domotz, and Plus One Robotics.

Here’s what you can expect on the 30th:

Ubuntu Masters 3 schedule

2-3pm BST: Domotz – Streamlined provisioning of IoT devices; creating a reliable management platform

Giancarlo Fanelli, CTO & Andrea Rossali, Senior DevOps Engineer

As a leading provider of RMM solutions (Remote Monitoring and Management), Domotz’s business relies on the Agent component to scan the network, communicate data to the cloud, and act as a remote access point for that network. After careful consideration, they chose to develop custom hardware based on Ubuntu Core OS to host their Agent software. To overcome challenges along the way, Domotz built their own management platform to provision and orchestrate their fleet.

In this presentation, they explain how you can build a similar platform and highlight the key decisions they made throughout the process.

Register now

3:30-4:40pm BST: Plus One Robotics – The open (source) road to smart robots: supplementing AI with Human Intelligence

Zachary Keeton, Yonder Group Lead

When three roboticists started a company around cloud-connected robots, they needed to get to market quickly while preserving precious startup capital. Learn how they leveraged Ubuntu, Kubernetes, full-stack JavaScript, and other open source offerings to evolve their cloud architecture from MVP into the scalable, cloud-agnostic system it is today while on a startup budget.

Register now

5-6pm BST: Scania – Mastering multicloud with SLURM and Juju

Erik Lönroth, Tech Lead HPC & Open Source Forum Chairman

Erik Lönroth has been a technical manager for HPC data centres for about a decade and will share his knowledge of the field with you here. During the session, he will go through the various components of an HPC cluster and how he utilises Juju and SLURM to achieve a flexible setup in multiple clouds. He will also be available for an open discussion around the rationale and values that come out of his approach.

Register now

Ubuntu Masters community benefits 

1. Unlimited educational videos

The aim of Ubuntu Masters is solely to contribute to the community and advance innovative thinking in the tech space. That’s why all the presentations of the conference are geared towards educating and inspiring one another with our stories, rather than pitching products and brands. It’s also why the event is free and open to all, and speaker session recordings are openly available on the Ubuntu Youtube channel.

2. Interaction with industry leaders

On the day of the event, you can expect to interact with both the guest speakers and the Ubuntu team. After their presentations, speakers will be interviewed by technical experts of the Ubuntu team, and questions coming in live from the audience will be incorporated into the conversation.

3. Community networking

Additionally, you’re already able to connect and begin discussion with your peers and fellow attendees on the dedicated Ubuntu Masters Telegram channel. On the day of the event, members of the Ubuntu team will also be available to provide you with valuable information and answer your questions.

4. Free helpful resources

All speaker sessions will be accompanied by free resources (such as ebooks, webinars, helpful links) relevant to the topic at hand. These will remain accessible to all live and on-demand attendees even after the sessions have aired.

How can I contribute to the community?

The Ubuntu Masters conference is seasonal, but our aim to share ideas is not. We encourage anyone who wishes to contribute a useful tutorial to publish it freely on the Ubuntu Tutorials Library so that everyone can access and learn from it. 

See you all on the 30th!

24 June, 2020 04:55PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work on LTS in May and now reported 5h of work done in April (out of 17.25h assigned plus 46h from April), thus carrying over 58.25h to June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).

Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC, Logs and minutes are available online. Then we also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we would like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we would like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or missed to act on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


24 June, 2020 01:03PM