August 22, 2019

Kali Linux

Major Metapackage Makeover

With our 2019.3 Kali release imminent, we wanted to take a quick moment to discuss one of our more significant upcoming changes: our selection of metapackages. These alterations are designed to optimize Kali, reduce ISO size, and better organize metapackages as we continue to grow.

Before we get into what’s new, let’s briefly recap what a metapackage is. A metapackage is a package that does not contain any tools itself, but rather is a dependency list of normal packages (or other metapackages). This allows us to group related tools together. For instance, if you want to be able to access every wireless tool, simply install the kali-tools-wireless metapackage. This will obtain all wireless tools in one download. As always, you can access the full list of metapackages available in Kali on tools.kali.org. If you prefer to use the command line, the following command will list out the packages that will be installed via a specific metapackage:

root@kali:~# apt update
Hit:1 http://http.kali.org/kali kali-rolling InRelease
Reading package lists... Done                            
Building dependency tree      
Reading state information... Done
All packages are up to date.
root@kali:~#
root@kali:~# apt depends kali-tools-wireless
kali-tools-wireless
  Depends: kali-tools-802-11
  Depends: kali-tools-bluetooth
  Depends: kali-tools-rfid
  Depends: kali-tools-sdr
  Depends: killerbee
  Depends: rfcat
  Depends: rfkill
    rfkill:i386
  Depends: sakis3g
  Depends: spectools
  Depends: wireshark
root@kali:~#

We took the time to create new metapackages and rename existing ones, and we did the same with the tools listed inside of them. As a result of these changes, we’ve implemented a new naming convention for simplicity and improved granular control. At the end of the post there are tables showing how the previous names map to the new ones, along with a description of each metapackage’s purpose.

If you have made it this far, you are likely wondering: “How does this affect me?”

  • If you are using a version of Kali older than 2019.3, then when you upgrade you will still have the same set of tools (just newer)!
  • However, if you do a fresh install from the weekly W34 image, the 2019.3 ISO, or anything later, you will notice that some of the tools installed by DEFAULT have changed (we have put Kali on a diet!)

Previously, kali-linux-full was the default metapackage, which has been renamed to kali-linux-large with a redirect put in place. We have introduced a new default metapackage called kali-linux-default, which serves as a slimmed-down version of the tools from kali-linux-large.

How you use Kali determines which metapackage will suit you best. This is the power of metapackages. For example:

  • If you want a core set of tools, stick with kali-linux-default (designed for straightforward assessments).
  • If you want a more general and wider range of tools, select kali-linux-large (useful if Internet access is permitted but slow).
  • If you want to be prepared for anything, go with kali-linux-everything (great if you are going to be doing air-gap/offline work)

Note: You can install multiple metapackages at once and are not limited to just one, so mix and match!
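
For example, to combine the slimmed-down default set with a couple of the more specialised groups (metapackage names taken from the tables at the end of this post), something like the following would do the trick:

root@kali:~# apt update
root@kali:~# apt install -y kali-linux-default kali-tools-hardware kali-tools-fuzzing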

Each of these metapackages depends on the one above. That means, when we add a new essential tool to kali-linux-default, it is automatically part of kali-linux-large and thus kali-linux-everything. Otherwise, when we add a new tool that may not be useful to everyone, it will be placed into either kali-linux-large or kali-linux-everything – depending on our tool policy. More information about the new tool policy will be made public towards the end of the year. Stay tuned for some very exciting news!

How Kali is used today has changed since Kali (and even BackTrack) was first born. Not everyone needs all the tools at once – but they are still available when required. We have opted for a new default set of tools that matches the majority of today’s network environments, removing edge-case and legacy tools which are rarely used.

Upon doing a system upgrade (apt -y full-upgrade) on a version of Kali older than 2019.3, you will see the old metapackage name being removed. This is safe.
If you have ever tried to remove a tool that is part of a metapackage, you may have seen the metapackage itself being removed as well. This is also safe, as it doesn’t remove any other tools. It simply means that when a new tool is added to that metapackage, you won’t receive it automatically.

If you are running 2019.3 and want the old default set of tools, you can do either apt -y install <tool> for a one-off package installation or apt -y install kali-linux-large to get the old tool set back. For the 2019.3 release, we will be doing a one-off extra image, which is based on kali-linux-large to help with the transition.

Below are the tables with a complete breakdown of the previous metapackage names, along with their new respective names:

Systems

These metapackages are used when generating our images.

Old | New | Notes
kali-linux | kali-linux-core | Base Kali Linux System – core items that are always included
(new) | kali-linux-default | “Default” desktop (AMD64/i386) images include these tools
(new) | kali-linux-light | Used to generate the Kali-Light images
(new) | kali-linux-arm | All tools suitable for ARM devices
kali-linux-nethunter | kali-linux-nethunter (same) | Tools used as part of Kali NetHunter

These entries are based around the Kali menu.

Old | New | Notes
(new) | kali-tools-information-gathering | Used for Open Source Intelligence (OSINT) & information gathering
(new) | kali-tools-vulnerability | Vulnerability assessment tools
kali-linux-web | kali-tools-web | Designed for web application attacks
(new) | kali-tools-database | Based around any database attacks
kali-linux-pwtools | kali-tools-passwords | Helpful for password cracking attacks – Online & offline
kali-linux-wireless | kali-tools-wireless | All tools based around Wireless protocols – 802.11, Bluetooth, RFID & SDR
(new) | kali-tools-reverse-engineering | For reverse engineering binaries
(new) | kali-tools-exploitation | Commonly used for exploitation
(new) | kali-tools-social-engineering | Aimed at social engineering techniques
(new) | kali-tools-sniffing-spoofing | Any tools meant for sniffing & spoofing
(new) | kali-tools-post-exploitation | Tools for the post-exploitation stage
kali-linux-forensics | kali-tools-forensics | Forensic tools – Live & Offline
(new) | kali-tools-reporting | Reporting tools

Tools

These are tool listings based on category and type.

Old | New | Notes
kali-linux-gpu | kali-tools-gpu | Tools which benefit from having access to GPU hardware
(new) | kali-tools-hardware | Hardware hacking tools
(new) | kali-tools-crypto-stego | Tools based around Cryptography & Steganography
(new) | kali-tools-fuzzing | For fuzzing protocols
(new) | kali-tools-802-11 | 802.11 (Commonly known as “Wi-Fi”)
(new) | kali-tools-bluetooth | For targeting Bluetooth devices
kali-linux-rfid | kali-tools-rfid | Radio-Frequency IDentification tools
kali-linux-sdr | kali-tools-sdr | Software-Defined Radio tools
kali-linux-voip | kali-tools-voip | Voice over IP tools
(new) | kali-tools-windows-resources | Any resources which can be executed on a Windows host

Misc

Useful metapackages which are “one-off” groupings.

Old | New | Notes
kali-linux-full | kali-linux-large | Our previous default tools for AMD64/i386 images
kali-linux-all | kali-linux-everything | Every metapackage and tool listed here
kali-linux-top10 | kali-tools-top10 | The most commonly used tools
kali-desktop-live | kali-desktop-live (same) | Used during a live session when booted from the image

Courses

Tools used for Offensive Security’s courses

Old | New | Notes
(new) | offsec-awae | Advanced Web Attacks and Exploitation
(new) | offsec-pwk | Penetration Testing with Kali

Desktop Managers

Desktop Environment (DE) & Window Manager (WM)

Old | New | Notes
kali-desktop-common | kali-desktop-core | Any key tools required for a GUI image
(new) | kali-desktop-e17 | Enlightenment (WM)
kali-desktop-gnome | kali-desktop-gnome (same) | GNOME (DE)
(new) | kali-desktop-i3 | i3 (WM)
kali-desktop-kde | kali-desktop-kde (same) | KDE (DE)
kali-desktop-lxde | kali-desktop-lxde (same) | LXDE (WM)
(new) | kali-desktop-mate | MATE (DE)
kali-desktop-xfce | kali-desktop-xfce (same) | XFCE (WM)

If you wish to create your own metapackage, see how we do it here first.

22 August, 2019 03:01PM by g0tmi1k

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S12E20 – Outrun

This week we’ve been experimenting with lean podcasting and playing Roguelikes. We discuss what goes on at a Canonical Roadmap Sprint, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 20 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been creating a lean podcast – TeleCast with popey.
    • Mark has been playing Roguelikes.
  • We discuss what goes on at a Canonical Product Roadmap Sprint.

  • We share a Command Line Lurve:

    • Ctrl+X – Expand a character
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image taken from Outrun arcade machine manufactured in 1986 by Sega.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

22 August, 2019 02:00PM

Ubuntu Blog: Useful security software from the Snap Store

Overall, most Linux distributions offer sane, reasonable defaults that balance security and functionality quite well. However, most of the security mechanisms are transparent, running in the background, and you still might require some additional, practical software to bolster your security array. Back in July, we talked about handy productivity applications available in the Snap Store, and today we’d like to take a glimpse at the security category, and review several cool, interesting snaps.

KeePassXC

Once upon a time, password management was a simple thing. There were few services around, the Internet was a fairly benign place, and we often used the same combo of username and password for many of them. But as the Internet grew and the threat landscape evolved, the habits changed.

In the modern Web landscape, there are thousands of online services, and many sites also require logins to allow you to use their full functionality. With data breaches a common phenomenon nowadays, tech-savvy users have adopted a healthier practice of avoiding credentials re-use. However, this also creates a massive administrative burden, as people now need to memorize hundreds of usernames and their associated passwords.

The solution to this fairly insurmountable challenge is the use of secure, encrypted digital password wallets, which allow you to keep track of your endless list of sites, services and their relevant credentials.

KeePassXC does exactly that. The program comes with a simple, fairly intuitive interface. On first run, you will be able to select your encryption settings, including the ability to use KeePassXC in conjunction with a YubiKey. Once the application is configured, you can then start adding entries, including usernames, passwords, any notes, links to websites, and even attachments. The contents are stored in a database file, which you can easily port or copy, so you also gain an element of extra flexibility – as well as the option to back up your important data.
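
If you want to give it a go, the snap should be a single command away (assuming it is still published in the Snap Store under the name keepassxc):

sudo snap install keepassxc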

BitWarden

Side by side with KeePassXC, BitWarden is a free, open-source password manager. After the application is installed, you need to create an account. Then, you can populate your database (vault) with entries, including login names, passwords and other details, like card numbers and secure notes. BitWarden uses strong security, and the encrypted vault is synced across devices. This gives you additional portability, as well as an element of necessary redundancy, which is highly important for something like a password database.

BitWarden also includes a Premium version, which offers 1 GB encrypted storage and support for YubiKey and other 2FA hardware devices. The application also allows you to use PIN locking, and arrange your items into folders.
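
As with KeePassXC, installation should be a one-liner (assuming the Snap Store listing is still named bitwarden):

sudo snap install bitwarden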

Secrethub-cli

Given that we’ve discussed password management, the next logical step is to talk about collaborative development, configuration files and passwords (secrets) that sometimes need to be used or shared in projects. If you use public repositories (or even private ones), there is always some risk in keeping credentials out in the open.

Secrethub-cli is designed to provide a workaround to this issue by allowing developers to store necessary credentials (like database usernames and passwords) inside encrypted vaults, and then inject them into configuration files only when necessary.

You start by signing up for an account, after which you can use the command-line interface to populate your vault. The next step is to create template files (.tpl) with specifically defined secrets placeholders, and then pass the files to secrethub-cli, which will inject the right credentials based on the provided placeholders (username and password), and then print out the result to the standard output, or if you like, into a service configuration file for your application.
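
As a rough illustration of what such a template might look like (the {{ }} placeholder syntax and the secret paths below are assumptions for the sake of the example):

# example.config.tpl
db_user = {{ your-username/your-project/db/username }}
db_password = {{ your-username/your-project/db/password }}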

cat example.config.tpl | secrethub inject

This way, the command will run correctly if the right secrethub-cli account is used, but it won’t work for anyone else, allowing reliable sharing of project work. The application is available for free for personal projects.

Wormhole

This software might very well be familiar to you, as we’ve discussed Wormhole in greater detail several months ago. It is an application designed to allow two end systems to exchange files in a safe, secure manner. Rather than using email or file sharing services, you can send content to your friends and colleagues directly, using Wormhole codes, which allow the two sides to identify one another and exchange data. Wormhole is a command-line program, but it is relatively simple to use. It also offers unlimited data transfers, and can work with directories too (and not just individual files).
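
A typical exchange looks something like the following sketch (the file name and the one-time code are just placeholders):

# on the sending machine
wormhole send ./holiday-photos.zip
# wormhole prints a one-time code such as 7-crossover-clockwork

# on the receiving machine
wormhole receive 7-crossover-clockwork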

Livepatch

System restarts can be a nuisance, and might lead to a (temporary) loss of productivity. Sometimes though, they are necessary, especially if your machine has just received a slew of security updates. Livepatch is a Canonical tool, offering rebootless kernel patching. It runs as a service on a host and occasionally applies patches to the kernel, which will be used until a full kernel update and the subsequent restart. It is a convenient and practical solution, especially in the mission-critical server environment.

However, home users can benefit from this product too. Livepatch is available for free to Ubuntu users on LTS releases (like 16.04 or 18.04). The only additional requirement is that you register for an Ubuntu SSO account, which provides you with a token that you can then use to enable the Livepatch service on up to three systems (for free).

snap install canonical-livepatch
canonical-livepatch enable "token"

Once Livepatch is installed and enabled, it will run in the background, doing its job. Livepatch fixes cannot be created for every single kernel vulnerability, but a large number of them can be mitigated, dispensing with the need for frequent reboots. You can always check the status of the service on the command line, to see that it is working:

canonical-livepatch status

Summary

We hope you enjoyed this piece. Software security often has a somber angle, but we’d like to believe that today’s blog post dispels that notion. The exercise of practicality, data integrity and the ability to protect your important information does not have to be an arduous and difficult task. In fact, you might even enjoy yourself.

We would also suggest you visit the Snap Store and explore; who knows, you might find some rather useful applications that you haven’t really thought of or known before. If you have any comments, please join our forum for a discussion.

Photo by Jason Blackeye on Unsplash.

22 August, 2019 09:35AM

August 21, 2019

Cumulus Linux

Best practices: MLAG backup IP

Recently there was a conversation in the Cumulus community (details in the debriefing below) about the best way to build a redundant backup IP link for multi-chassis link aggregation (MLAG). Like all good consulting-led blogs, we have a healthy dose of pragmatism that goes with our recommendations and this technology is no different. But if you’re looking for the short answer, let’s just say: it depends.

The MLAG backup IP feature goes by many names in the industry. In Cisco-land you might call this the “peer keepalive link,” in Arista-ville you might call this the “peer-address heartbeat” and in Dell VLTs it is known as the “backup destination.” No matter what you call it, the functionality offered is nearly the same.

What does it do?

Before we get into the meat of the recommendation, let’s talk about what the backup IP is designed to do. The backup IP link provides an additional value for MLAG to monitor, so a switch knows if its peer is reachable. Most implementations use this backup IP link solely as a heartbeat, meaning that it is not used to synchronize MAC addresses between the two MLAG peers. This is also the case with Cumulus Linux. In the Cumulus Linux MLAG implementation, UDP port 5342 is used by default for the MLAG backup IP configuration, which you can see by running the net show clag params command below.

Note that the backup IP is not a required configuration today (that might change in the future); it is used simply to provide added redundancy for the classic “split-brain” MLAG failure scenario. In this case — the loss of a peerlink — each MLAG peer is unaware that the other is still alive and will continue to forward traffic. However, the forwarding of return traffic is suboptimal as the MAC address tables are no longer being synchronized and the return traffic may need to be flooded to all participants in a destination VLAN.

How can I build it?

With Cumulus Linux, there are four main approaches to deploying an MLAG backup IP.

1. Use the loopback. One option is to leverage the loopback IP addresses as targets for the backup IP link. Dynamic routing is a must for this option and requires the stable advertisement of loopback IP addresses from one MLAG peer to another. We now document this option as it can be appropriate for many environments.

2. Use a dedicated dataplane link. One of the least common yet still valid methods to configure a backup IP link is to dedicate an interface to it. In this case you could dedicate one or more low speed interfaces, possibly in an LACP bond, which is built in a dedicated VRF to connect directly to the peer switch. This is very similar to adding more links to your peerlink bond; however, these links are not used for synchronizing MAC addresses. We don’t document this method currently, but it is supported.

3. Use the management network. The most common method of implementing the backup IP is through the use of the management plane. The existing isolation of the management VRF provides an ideal shared segment between both MLAG peers. This segment is generally via a totally different path than the loopback option described above and provides redundancy in a way that does not depend on the health of layer 3 protocols and dynamic routing.

4. Use back-to-back management ports. This approach, like #3 above, is not very common. There are very particular use cases where it makes sense, which we will discuss below. The main idea is that you connect the eth0 port from one MLAG peer directly to the eth0 port of the other MLAG peer. We don’t document this method currently either, as it’s not something we see all that often, although it too is supported.
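
Whichever option you choose, the backup IP itself is configured on the peerlink interface. As a rough sketch, an /etc/network/interfaces stanza using the management VRF for option #3 might look something like this (the addresses and system MAC are placeholders):

auto peerlink.4094
iface peerlink.4094
    clagd-peer-ip 169.254.1.2
    clagd-sys-mac 44:38:39:FF:40:94
    clagd-backup-ip 192.168.100.12 vrf mgmt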

How SHOULD I build it?

You can leverage multiple options, but what is the right choice for your environment? If you can use option #1 or #2 above, either of those would be preferred over options #3 or #4, but all of them will serve the intended purpose. Here we’ll discuss the pros and cons of each approach while identifying some of the logical use cases for each.

When to use a loopback interface for the MLAG backup IP
This is a great approach to use when deploying MLAG. If you’re deploying MLAG in an all-Layer-2 environment, it may not be possible to leverage this technique. If you’ve never worked in detail with routing policy, this approach has more complexity, but on the flip side it is the most robust and does not require any links to be dedicated to this purpose.

Logical Use Cases
  • Each of your MLAG devices has routing relationships with a series of common devices (leafs to spines, spines to superspines, and so forth)
  • The L3 uplinks form a routed fabric that connects the MLAG peers and offers rich multipathing
Don’t Use It When
  • You have back-to-back/layered/nested/2-tier MLAG environments
  • You’re concerned about the stability of the routing domain between MLAG peers
  • You don’t want to deal with ASN concerns in BGP and the possibility of using “allowas-in 1” (see the note in our documentation)

When to use a dedicated dataplane link for the MLAG backup IP
This approach, although not seen very often, also offers a significant amount of robustness, and unlike the option above it can be used in any kind of environment. If you have a network which can use this method (and not all can), you should. This option is another great choice, but due to port density concerns it is not always considered.

Logical Use Cases
  • Can be used in ANY network type
  • There is no management network (as can be the case when the MLAG pair is itself part of the out-of-band network)
  • Switches are not fully populated and extra ports remain available for use
Don’t Use It When
  • Port density is of primary importance or switches are at full port capacity

When to use the management network for the MLAG backup IP
This has been our tried and true approach to deploying MLAG for a long time now. It’s still used very often because many environments already have a management network, and it is dead simple to set up, with very low configuration complexity and no need to sacrifice ports specifically for the task. This approach works great for switches in the dataplane, but OOB switches may not have their own OOB network to leverage, so it is not as versatile a solution.

Logical Use Cases
  • Typically used with switches in the dataplane of the network (not out-of-band switches)
  • You’re already using management VRF
  • Layer 2 (L2) uplinks prevent you from using a loopback interface
Don’t Use It When
  • There are concerns regarding the stability of the management network

When to use back-to-back management ports for the MLAG backup IP
This method is used infrequently but it works just fine. It is a great way to set up MLAG for an OOB network that does not have its own OOB network and needs every single port for switch-to-switch connectivity, leaving the otherwise unutilized eth0 ports to be dedicated to the backup IP link.

Logical Use Cases
  • There is no management network (as can be the case when the MLAG pair is itself part of the out-of-band network)
  • There are no available dataplane (front-panel) ports
  • There are L2 uplinks or you have a back-to-back/layered/nested/2-tier MLAG environment
Don’t Use It When
  • The distribution tier of OOB switches is in different racks and cross-rack cabling is not allowed.

Debriefing a recent issue

Some of the recent discussion around the MLAG backup IP arose from an incident where a hardware issue was causing the dataplane of a switch to become stuck. When that happened, the peerlink went down on the secondary MLAG switch, but the backup IP was still reachable via the management plane (eth0) of the primary MLAG peer. This peculiar failure caused the secondary to think the primary was still alive and not promote itself, which left all host-facing links down.

This particular issue would have been caught if the dataplane was integrated into the forwarding of the backup IP link via the use of the loopback IP as the backup IP. Ironically, the issue would not have manifested itself if NO backup IP was configured!

While using a loopback IP as the backup IP would have prevented this specific issue, this was a pretty anomalous case and not typical of other commonly seen failures.

Which method is the best though?

Option 1 or 2 from the above list are the most robust and are the recommended way to deploy MLAG today. However, nothing is free in life: you must be willing to accept the added complexity of making the reachability of the backup IP part of the routing domain, or, in the case of option 2, you must be willing to sacrifice one or more dedicated links for the purpose.

If the tradeoffs mentioned above are not desirable or not possible based on the constraints of your environment consider using the management network or a back-to-back connection over the management port. Without a doubt, the most general use case for the backup IP continues to be the use of the management network, as it is the least complex and the most widely applicable method, since you can use it in either all-L2 or mixed L2/L3 environments.

21 August, 2019 06:00PM by Eric Pulvino

Purism PureOS

Librem 5 August Update

Hi Everyone! The Librem 5 team has been hard at work again, and we want to update you all on our software progress.

We are preparing everything for the Librem 5 to be delivered soon, and its software will focus on the most critical applications a phone needs: calls, messages and web browsing. There are supporting projects that will be delivered too, like GNOME Settings, the shell, GNOME Initial Setup, and GNOME Contacts. So without further ado, let’s take a tour through the software we will deliver–as well as some other applications that have seen some major changes.

Applications

Libhandy

We have made some adaptive dialog improvements to HdyHeaderBar’s back button. There is a really nice new pagination widget for the app drawer. A general overhaul of the app drawer is almost finished–thanks so much, Alexander Mikhaylenko, for all of your hard work on this!

Also, be sure to check out the newly packaged demo app.

hdy demo

And Libhandy 0.0.10 has been uploaded to Debian and to PureOS.

Calls

Recent work on Calls has focused on three main efforts: adding a call history, allowing the Contacts app to dial numbers, and enabling the system to receive calls when the shell is locked.

To lay the foundation for the call history, call records are now stored in an SQLite database. Then, to complete the work, the database was connected to the UI.

Calls history

In order to allow Contacts–or any other application–to dial calls, a tel: URL handler was added to Calls.

Calls now starts up in a new daemon mode when GNOME starts, so that incoming calls can always be received.

Messaging

The team fixed several crashes, and the welcome screen was reworked; there is also an ongoing effort to integrate with libfolks, which is used by Contacts.

We continue to improve the SMS plugin, too, and fixed an issue with multipart SMS reception: all SMS fields are initialized as soon as the first part is received (thanks a lot, Aleksander Morgado, for the patch). There is also handling for SMS messages that were received by the modem while Chatty isn’t running, support for delivery reports, and phone number formatting according to E.164.

The conversation view was improved by introducing lazy loading of the chat history, which gradually loads the chat log into the conversation view as the user scrolls up. Thanks, Leland Carlye, for the awesome patch!

GTK

The team added many mobile tweaks: from file chooser dialogs to about dialogs, message dialogs, adaptive presentation dialogs, dialog maximization, and info bars.

Web Browsing

We have backported many mobile improvements, which we also included on the devkit image. The Epiphany “new tab” page and several other in-viewport pages have been made adaptive, and there is a continued effort to push for Epiphany to adopt HdyPreferencesWindow.

web

Soon, you will be able to edit CSS from Epiphany’s preferences; and the search engine management dialog has been ported.

In order to address the application manager overflow issue, the about:applications page now has improved CSS for responsiveness.

Initial Setup

We have refactored adaptive changes for some long-needed cleanups, which will be submitted upstream eventually.

gis

Contacts

We have added some brand new functionality, such as new buttons for making a call and sending an SMS.

In preparation for Contacts integration with Calls and Chatty, we have been doing some investigation into libfolks, gnome-contacts-search-provider, and evolution-data-server. This led us to a major refactoring of GNOME Contacts, so as to reduce complexity.

We have added some fixes to avoid crashing when taking a webcam picture, using GNOME 3.32 avatar styles for fallback–and the avatar is no longer cut off. A long press for selecting contacts was also implemented.

We are still working on fake persona.

Contacts

Clocks

We are working hard to redesign GNOME Clocks for mobile/adaptiveness–and to get the Alarm UI to use new list patterns.

Help

We did it–GNOME Help now works on the devkit!

GNOME help

Settings

We are focusing a lot of effort on the WWAN panel, where locked SIM cards are now handled (and there’s a dialog to enter a PIN to unlock the SIM), data can be enabled and APN can be set, and auto-connect for default APN is also enabled so that it is persistent across device restarts. The UX has been improved too, by using HdyColumn to center align the panel and porting to HdyDialog. Finally, the WWAN panel now also detects multiple modems!

But that’s not everything: other areas of GNOME Settings have seen adaptive changes too; the background panel, search locations dialog, and notifications dialog have all been made adaptive. GNOME Online Accounts has also been made adaptive by reducing the account widget margins and setting a minimum and natural size, which required the account dialog to be adapted. Plus, we are currently updating the format dialog for the Region panel (in GNOME Online Accounts).

Settings

There’s a new design for the WiFi panel being discussed upstream, which will need to be implemented once consensus is reached.

Additional adaptive fixes are still under review upstream, and include fixing HiDPi scaling issue of background images, region panel, and privacy panel dialogs.

System

We have a shiny, new, user-friendly terminal for mobile screens called King’s Cross, which is now the default on the Librem 5. Thank you so much, Zander Brown, for all of your hard work on this!

We have also set a default background image. In order to help debugging efforts, debug symbol packages have been added by default. We’re now shipping a patched UPower that detects the devkit’s charger and power supply.

Support for the Librem 5 has been upstreamed in Debian’s flash-kernel.

Keyboard

Our team fixed several keyboard crashes, too: keyboard visibility on DBus is properly toggled now, for example, and a text-input issue preventing the OSK from showing up automatically in the correct windows is fixed. We also made lots of cleanups across the code base (see some cleanups and imservice cleanups for more detail) as well as getting tests added, error-checking made stricter, and many other fixes.

Some scaling improvements were made by calculating the scale factor instead of pre-scaling, honoring the widget scale factor, and setting a constant font size.

Additional rendering upgrades included avoiding infinitely redrawing the keyboard (since this was making the keyboard blurry, as well as eating up battery and CPU cycles), fixing the blurry text and icons and making the widget easier to style. We also added frame rendering, in order to make the keyboard match the design.

To avoid hiding content behind the keyboard, LayerSurface improvements were made–and newer layer shell code from phosh implemented–to hide/show the window, instead of destroying and redrawing it every time. This helped us make squeekboard our default keyboard.

Sound support is being added in the keyboard.

And, thanks to Piotr Tworek, we fixed an out-of-bounds memory-read bug!

XKB keymaps are being generated from XML instead of using premade ones, to allow for more keymap flexibility, so we have also decided to make some keyboard geometry adjustments to make the XML simpler.

The navigation between keyboard views was significantly improved, and landscape orientation was added so the keyboard no longer takes up the full screen, being centered instead. Similarly, the keyboard is now centered horizontally. We have also started working on improving symbol input, and adding support for non-ASCII languages.

The text-input protocol has been updated; it now supports notifying when no OSK is needed.

Compositor + Shell

The compositor has seen many fixes by now–although at first you may hardly notice them. Stack handling works better now, and unmapped surfaces won’t be raised in the stack. In order to mitigate accidental rendering bugs when, for instance, focus rules cause the function to return early, the view damage handling in set_focus has been moved to where the drawing list is handled. Additional work has been done to move the focus back to the first shell surface when unfocusing a layer surface. To make recent GTK dialog fixes behave properly, the maximize/fullscreen state is now taken into account on view init.

The team has also made a few layer surface changes: a layer shell crash was fixed and unused protocols were removed. The system modal dialogs now match the design much better; the ability to unmaximize auto-maximized layers was removed to avoid a broken state; we fixed the layer shell show/hide, and we now have the ability to use enums as types. Some protection was put in place to guard against a negative exclusive zone when surfaces set negative margins.

Other noticeable changes are that you can now close an app from the overview, and the keyboard button is hidden when the keyboard is unfolded.

App overview

We have also added touch support in X11 backend!

We were worried about a few compositor crashes, which led us to make some input grab fixes for xdg_popups and remove input method’s resource from the list on destroy.

Other changes we made include dropping the pointer emulation on touch and auto-maximizing before mapping the surface, to avoid flicker for example when starting new applications.

Phosh has seen the addition of PhoshToplevelManager and PhoshToplevel classes for managing and representing toplevel surfaces; this switches from a private protocol to wlr-foreign-toplevel-management, which is more complete than our previous private protocol and makes phosh usable with other compositors that implement the new protocol. Reporting the surface’s parent is still pending upstream review.

As you boot your devkit now, you’ll notice your list of favorite apps immediately. This is the result of our recent effort to move the favorites to the home screen–once again, thanks to Alexander Mikhaylenko, in this case for fixing the sizing of the activities! You’ll also notice our new animated arrows when folding/unfolding the home screen, and a fix for changing favorites via gsettings.

Kernel

If you haven’t already, take a moment to read our blog post that details the Librem 5 team’s contributions to the 5.2 kernel.

But a few things have happened since: support has been added for our accelerometer and gyroscope, and it’s been submitted upstream. In order to make IIO-sensor-proxy work correctly, we mainlined an accelerometer driver bugfix–meaning we will soon be able to use IIO-sensor-proxy by default, auto-rotate, and remove the “Rotation” switch in the top bar, relying on the sensors to decide the orientation that should be displayed!

We have been working very hard to improve the graphics stack too. MXSFB support has been added to mesa, and several patches are in review upstream: v1 and v2 of the NWL MIPI DSI driver, v2 of the LCD panel patches to make it work embedded in a panel_bridge (which is used by the NWL driver), and v1 of the MXSFB patch to handle NWL timing requirements. Some tests with MXSFB were fixed.

A couple of minor patches were made to fix a typo in i.MX8MQ reset names and IPUV3 kconfig.

Power Management

The team is trying very hard to better manage the power consumption of the phone and reduce the overall temperature. To make sure we don’t lose basic kernel support, we now check for cpuidle sysfs nodes and the DRM render node. We are also working on helping NXP to mainline thermal-idle, which cools the CPU by idle-injection. To ease kernel updates, we improved the kernel tests, and the CPUs now slow down when hot, instead of overheating and shutting down.

Also, thermal management investigations have led us to a focused effort on S3 suspend/resume.

Builds

The mailing list now receives build status mails–if you’re interested, you can sign up for librem5-builds@lists.community.puri.sm and receive them.

And the images will soon include our patched version of gnome-settings-daemon.

Documentation

We have made several updates to the existing documentation: the low-level touchscreen reading hints, GNOME platform section, and application settings have all been updated, for example. We have also made many one-line updates to be able to use recent links, a more recent version of GNOME, etc.

As always, a big “Thanks!” to everyone that has helped review and merge changes into upstream projects; your time and contribution are much appreciated. That’s all for now, folks–stay tuned for more exciting updates to come!

The post Librem 5 August Update appeared first on Purism.

21 August, 2019 02:36PM by Heather Ellsworth

Ubuntu developers

Ubuntu Blog: Jupyter looks to distro-agnostic packaging for the democratisation of installation

When users of your application range from high school students to expert data scientists, it’s often wise to avoid any assumptions about their system configurations. The Jupyter Notebook is popular with a diverse user base, enabling the creation and sharing of documents containing live code, visualisations, and narrative text. The app uses processes (kernels) to run interactive code in different programming languages and send output back to the user. Filipe Fernandes has a key responsibility in the Jupyter community for its packaging and ease of installation. At the 2019 Snapcraft Summit in Montreal, he gave us his impressions of snaps as a tool to improve the experience for all concerned.

“I’m a packager and a hacker, and I’m also a Jupyter user. I find Jupyter to be great as a teaching tool. Others use it for data cleaning and analysis, numerical simulation and modelling, or machine learning, for example. One of the strengths of Jupyter is that it is effectively language agnostic. I wanted Jupyter packaging to be similar, distro-agnostic, if you like.”

Filipe had heard about snaps a while back, but only really discovered their potential after he received an invitation to the Snapcraft Summit and noticed that Microsoft Visual Studio Code had recently become available as a snap. The ease of use of snaps was a big factor for him. “I like things that just work. I often get hauled in to sort out installation problems for other users – including members of my own family! It’s great to be able to tell them just to use the snap version of an application. It’s like, I snap my fingers and the install problems disappear!”

At the Summit, getting Snapcraft questions answered was easy too. “Every time I hit a snag, I raised my hand, and someone helped me.” Filipe was able to experiment with packaging trade-offs for Jupyter snaps. “I made a design choice to make the overall Jupyter package smaller by not including the Qt console. Most people just want the browser interface anyway. Similarly, I excluded the dependency for converting Jupyter Notebooks to other formats via pandoc. The size of the Jupyter snap then decreased from about 230 MB to just 68 MB”. 

What would he like to see in the Snapcraft of tomorrow? “There are some technical tasks to be done for each Jupyter snap, like declaring features of plug-ins and setting different permissions. It would be nice to find a way for automating these tasks, so that they do not have to be done manually each time a snap is built. Also, it’s not always easy to see which parts of the Snapcraft documentation are official and which are from enthusiastic but unsanctioned users.” Filipe suggests that creating a ‘verified publisher’ status or certification could be an answer, helping other users to decide how they want to consider different contributions to the documentation.  

A stable Jupyter snap is now available from the Snap Store providing the Jupyter users another option to install beyond the official sources. Filipe and the Jupyter community have been working on promoting it via banners, and blogs. “Some people get overwhelmed by the amount of information out there, especially when they start Googling options. I think snaps is a way to shortcut that,” adds Filipe. He recommends that other developers who want to get to this level should also come to the Summit. “The interactions here are so quick, to the point that I felt very productive within a really small amount of time, like I’d accomplished weeks of work. It’s awesome to be here and I’m looking forward to the next one.”

Install the community managed Jupyter snap here
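
Or, from the command line (assuming the listing is still published under the name jupyter):

sudo snap install jupyter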

21 August, 2019 09:53AM

Ubuntu Blog: How to add a linter to ROS 2

A well configured linter can catch common errors before code is even run or compiled. ROS 2 makes it easy to add linters of your choice and make them part of your package’s testing pipeline.

We’ll step through the process, from start to finish, of adding a linter to ament so it can be used to automatically test your projects. We’ll try to keep it generic, but where we need to lean on an example we’ll be referring to the linter we recently added for mypy, a static type analyzer for Python. You can view the finished source code for ament_mypy and ament_cmake_mypy.

Design

We’ll need to make sure our linter integrates into ament‘s testing pipeline. Namely, this means writing CMake scripts to integrate with ament_cmake_test and ament_lint_auto.

We need to be able to generate a JUnit XML report for the Jenkins build farm to parse, as well as handle automatically excluding directories with AMENT_IGNORE files, so we’ll need to write a wrapper script for our linter as well.

Overall, we’ll need to write the following packages:

  • ament_[linter]
    • CLI wrapper for linter
      • Collect files, ignore those in AMENT_IGNORE directories
      • Configure and call linter
      • Generate XML report
  • ament_cmake_[linter]
    • Set of CMake scripts
      • ament_[linter].cmake
        • Function to invoke linter wrapper
      • ament_cmake_[linter]-extras.cmake
        • Script to hook into ament_lint_auto
        • Registered at build as the CONFIG_EXTRA argument to ament_package
      • ament_cmake_[linter]_lint_hook.cmake
        • Hook script for ament_lint

Getting Started – Python

We’ll start with making the ament_[linter] package.

We’ll be using Python to write this package, so we’ll add a setup.py file, and fill out some required fields. It’s easiest to just take one from an existing linter and customize it. What it ends up containing will be specific to the linter you’re adding, but for mypy it looks like this:

from setuptools import find_packages
from setuptools import setup

setup(
    name='ament_mypy',
    version='0.7.3',
    packages=find_packages(exclude=['test']),
    package_data={'': [
        'configuration/ament_mypy.ini',
    ]},
    install_requires=['setuptools'],
    zip_safe=False,
    author='Ted Kern',
    author_email='<email>',
    maintainer='Ted Kern',
    maintainer_email='<email>',
    url='https://github.com/ament/ament_lint',
    download_url='https://github.com/ament/ament_lint/releases',
    keywords=['ROS'],
    classifiers=[
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python',
        'Topic :: Software Development',
    ],
    description='Check Python static typing using mypy.',
    long_description="""\
The ability to check code for user specified static typing with mypy.""",
    license='Apache License, Version 2.0',
    tests_require=['pytest', 'pytest-mock'],
    entry_points={
        'console_scripts': [
            'ament_mypy = ament_mypy.main:main',
        ],
    },
)

We’ll of course need a package.xml file. We’ll need to make sure it has an <exec_depend> on the linter’s package name in ROSDistro. If it’s not there, you’ll need to go through the process of adding it. This is required in order to actually install the linter itself as a dependency of our new ament linter package; without it, any tests using it in CI would fail. Here’s what it looks like for mypy:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_mypy</name>
  <version>0.7.3</version>
  <description>Support for mypy static type checking in ament.</description>
  <maintainer email="me@example.com">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="me@example.com">Ted Kern</author>

  <exec_depend>python3-mypy</exec_depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>
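
Later, once the package is filled in, it can be built and tested like any other ROS 2 Python package. A rough sketch of the workflow, assuming the package sits in a standard colcon workspace:

colcon build --packages-select ament_mypy
colcon test --packages-select ament_mypy
colcon test-result --verbose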

The Code

Create a python file called ament_[linter]/main.py, which will house all the logic for this linter. Below is the sample skeleton of a linter, again attempting to be generic where possible but nonetheless based on ament_mypy:

#!/usr/bin/env python3

import argparse
import os
import re
import sys
import textwrap
import time
from typing import List, Match, Optional, Tuple
from xml.sax.saxutils import escape
from xml.sax.saxutils import quoteattr

# Import your linter here
import mypy.api  # type: ignore

def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []

    parser = argparse.ArgumentParser(
        description='Check code using the linter.',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )
    parser.add_argument(
        'paths',
        nargs='*',
        default=[os.curdir],
        help='The files or directories to check. For directories files ending '
             "in '.py' will be considered."
    )
    parser.add_argument(
        '--exclude',
        metavar='filename',
        nargs='*',
        dest='excludes',
        help='The filenames to exclude.'
    )
    parser.add_argument(
        '--xunit-file',
        help='Generate a xunit compliant XML file'
    )

    # Example of a config file specification option
    parser.add_argument(
        '--config',
        metavar='path',
        dest='config_file',
        default=os.path.join(os.path.dirname(__file__), 'configuration', 'ament_mypy.ini'),
        help='The config file'
    )

    # Example linter specific option
    parser.add_argument(
        '--cache-dir',
        metavar='cache',
        default=os.devnull,
        dest='cache_dir',
        help='The location mypy will place its cache in. Defaults to system '
             'null device'
    )

    args = parser.parse_args(argv)

    if args.xunit_file:
        start_time = time.time()

    if args.config_file and not os.path.exists(args.config_file):
        print("Could not find config file '{}'".format(args.config_file), file=sys.stderr)
        return 1

    filenames = _get_files(args.paths)
    if args.excludes:
        filenames = [f for f in filenames
                     if os.path.basename(f) not in args.excludes]
    if not filenames:
        print('No files found', file=sys.stderr)
        return 1

    normal_report, error_messages, exit_code = _generate_linter_report(
        filenames,
        args.config_file,
        args.cache_dir
    )

    if error_messages:
        print('mypy error encountered', file=sys.stderr)
        print(error_messages, file=sys.stderr)
        print('\nRegular report continues:')
        print(normal_report, file=sys.stderr)
        return exit_code

    errors_parsed = _get_errors(normal_report)

    print('\n{} files checked'.format(len(filenames)))
    if not normal_report:
        print('No errors found')
    else:
        print('{} errors'.format(len(errors_parsed)))

    print(normal_report)

    print('\nChecked files:')
    print(''.join(['\n* {}'.format(f) for f in filenames]))

    # generate xunit file
    if args.xunit_file:
        folder_name = os.path.basename(os.path.dirname(args.xunit_file))
        file_name = os.path.basename(args.xunit_file)
        suffix = '.xml'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
            suffix = '.xunit'
            if file_name.endswith(suffix):
                file_name = file_name[:-len(suffix)]
        testname = '{}.{}'.format(folder_name, file_name)

        xml = _get_xunit_content(errors_parsed, testname, filenames, time.time() - start_time)
        path = os.path.dirname(os.path.abspath(args.xunit_file))
        if not os.path.exists(path):
            os.makedirs(path)
        with open(args.xunit_file, 'w') as f:
            f.write(xml)

    return exit_code


def _generate_linter_report(paths: List[str],
                          config_file: Optional[str] = None,
                          cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    """Replace this section with code specific to your linter"""
    pass


def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
        name="{test_name:s}"
        tests="{test_count:d}"
        failures="{error_count:d}"
        time="{time:s}"
        >
    """).format(
                test_name=testname,
                test_count=max(len(errors), 1),
                error_count=len(errors),
                time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each linter error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                    name={quoted_name}
                    classname="{test_name}"
                >
                    <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                    quoted_name=quoteattr(
                        '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                    test_name=testname,
                    quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
                )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )

    xml += '</testsuite>\n'
    return xml


def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folder starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()

                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]


def _get_errors(report_string: str) -> List[Match]:
    return list(re.finditer(r'^(?P<filename>([a-zA-Z]:)?([^:])+):((?P<lineno>\d+):)?((?P<colno>\d+):)?\ (?P<type>error|warning|note):\ (?P<msg>.*)$', report_string, re.MULTILINE))  # noqa: E501


def _dedent_to(text: str, prefix: str) -> str:
    return textwrap.indent(textwrap.dedent(text), prefix)

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

We’ll break this down into chunks.

Main Logic

We write the file as an executable and use the argparse library to parse the invocation, so we begin the file with the shebang:

#!/usr/bin/env python3

and end it with the main logic:

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

to forward failure codes out of the script.

The main() function will host the bulk of the program’s logic. Define it, and make sure the entry_points argument in setup.py points to it.

def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []

Notice the use of type hints; mypy will perform static type checking where possible and where these hints are designated.

Parsing the Arguments

We add the arguments to argparse that ament expects:

parser.add_argument(
    'paths',
    nargs='*',
    default=[os.curdir],
    help='The files or directories to check. For directories files ending '
         "in '.py' will be considered."
)
parser.add_argument(
    '--exclude',
    metavar='filename',
    nargs='*',
    dest='excludes',
    help='The filenames to exclude.'
)
parser.add_argument(
    '--xunit-file',
    help='Generate a xunit compliant XML file'
)

We also include any custom arguments, or args specific to the linter. For example, for mypy we also allow the user to pass in a custom config file to the linter, with a pre-configured default already set up:

# Example of a config file specification option
parser.add_argument(
    '--config',
    metavar='path',
    dest='config_file',
    default=os.path.join(os.path.dirname(__file__), 'configuration', 'ament_mypy.ini'),
    help='The config file'
)

# Example linter specific option
parser.add_argument(
    '--cache-dir',
    metavar='cache',
    default=os.devnull,
    dest='cache_dir',
    help='The location mypy will place its cache in. Defaults to system '
            'null device'
)

Note: remember to include any packaged non-code files (like default configs) using a manifest or package_data= in setup.py.

Finally, parse and validate the args:

args = parser.parse_args(argv)

if args.xunit_file:
    start_time = time.time()

if args.config_file and not os.path.exists(args.config_file):
    print("Could not find config file '{}'".format(args.config_file), file=sys.stderr)
    return 1

filenames = _get_files(args.paths)
if args.excludes:
    filenames = [f for f in filenames
                    if os.path.basename(f) not in args.excludes]
if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Aside: _get_files

You’ll notice the call to the helper function _get_files, shown below. We use a snippet from the other linters to build up an explicit list of files to lint, in order to apply our exclusions and the AMENT_IGNORE behavior.

def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folder starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()

                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]

Note that in the near future this and _get_xunit_content will hopefully be de-duplicated into the ament_lint package.

Given a list of paths, this function expands directories recursively and returns the .py files it finds, skipping any directory tree that contains an AMENT_IGNORE file (as well as folders whose names start with . or _).

We exclude those files that are in the exclude argument list, and we return a failure from main if no files are left afterwards.

filenames = _get_files(args.paths)

if args.excludes:
    filenames = [f for f in filenames
                 if os.path.basename(f) not in args.excludes]

if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Otherwise we pass those files, as well as relevant configuration arguments, to the linter.

Invoking the Linter

We call the linter using whatever API it exposes:

normal_report, error_messages, exit_code = _generate_linter_report(
    filenames,
    args.config_file,
    args.cache_dir
)

abstracted here with the following method signature:

def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
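
For mypy specifically, the body of this function can be a thin wrapper around mypy's programmatic API, mypy.api.run(), which already returns a (stdout, stderr, exit status) tuple. A minimal sketch, not necessarily the exact implementation used here:

import os
from typing import List, Optional, Tuple

from mypy import api


def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    """Run mypy over the given paths and return (report, error messages, exit code)."""
    args = ['--cache-dir', cache_dir]
    if config_file is not None:
        args += ['--config-file', config_file]
    args += paths
    # mypy.api.run returns a (normal_report, error_messages, exit_status) tuple
    return api.run(args)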

Recording the Output

Lint findings are printed as part of the normal report later on; if the linter itself hits internal errors, we print those to stderr, echo the regular report, and return the (non-zero) exit code:

if error_messages:
    print('linter error encountered', file=sys.stderr)
    print(error_messages, file=sys.stderr)
    print('\nRegular report continues:')
    print(normal_report, file=sys.stderr)
    return exit_code

We collect each warning/error/note message emitted individually:

errors_parsed = _get_errors(normal_report)
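
One way to implement the _get_errors helper is a multiline regular expression over mypy's standard 'file:line:column: severity: message' output; a sketch, with group names matching those used by _get_xunit_content below:

import re
from typing import List, Match


def _get_errors(report: str) -> List[Match]:
    """Return one match per mypy message, with optional line/column positions."""
    pattern = re.compile(
        r'^(?P<filename>[^:\n]+?)'
        r'(?::(?P<lineno>\d+))?(?::(?P<colno>\d+))?'
        r': (?P<type>error|warning|note): (?P<msg>.*)$',
        re.MULTILINE)
    return list(pattern.finditer(report))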

We then report the errors to the user with something like:

print('\n{} files checked'.format(len(filenames)))
if not normal_report:
    print('No errors found')
else:
    print('{} errors'.format(len(errors_parsed)))

print(normal_report)

print('\nChecked files:')
print(''.join(['\n* {}'.format(f) for f in filenames]))

Generating JUnit XML Output

Here we generate an XML report and write the file to disk at the requested location:

if args.xunit_file:
    folder_name = os.path.basename(os.path.dirname(args.xunit_file))
    file_name = os.path.basename(args.xunit_file)
    suffix = '.xml'
    if file_name.endswith(suffix):
        file_name = file_name[:-len(suffix)]
        suffix = '.xunit'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
    testname = '{}.{}'.format(folder_name, file_name)

    xml = _get_xunit_content(errors_parsed, testname, filenames, time.time() - start_time)
    path = os.path.dirname(os.path.abspath(args.xunit_file))
    if not os.path.exists(path):
        os.makedirs(path)
    with open(args.xunit_file, 'w') as f:
        f.write(xml)

An example of output XML valid against the schema is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite
name="tst"
tests="4"
failures="4"
time="0.010"
>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py:0:0)"
      classname="tst"
  >
      <failure message="error message:0:0"/>
  </testcase>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py:0)"
      classname="tst"
  >
      <failure message="error message:0"/>
  </testcase>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py)"
      classname="tst"
  >
      <failure message="error message"/>
  </testcase>
  <testcase
      name="warning (/tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py)"
      classname="tst"
  >
      <failure message="warning message"/>
  </testcase>
  <system-out>Checked files:
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py
  </system-out>
</testsuite>

Aside: _get_xunit_content

We write a helper function, _get_xunit_content, to format the XML output to the schema. This one is a bit specific to mypy, but hopefully it gives you a good idea of what’s needed:

def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
        name="{test_name:s}"
        tests="{test_count:d}"
        failures="{error_count:d}"
        time="{time:s}"
        >
    """).format(
                test_name=testname,
                test_count=max(len(errors), 1),
                error_count=len(errors),
                time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each mypy error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                    name={quoted_name}
                    classname="{test_name}"
                >
                    <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                    quoted_name=quoteattr(
                        '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                    test_name=testname,
                    quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
                )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )

    xml += '</testsuite>\n'
    return xml

Return from main

Finally, we return the exit code.

return exit_code

The CMake Plugin

Now that our linting tool is ready, we need to write an interface for it to attach to ament.

Getting Started

We create a new ROS 2 package named ament_cmake_[linter] in the ament_lint folder, and fill out package.xml. As an example, the one for mypy looks like this:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_cmake_mypy</name>
  <version>0.7.3</version>
  <description>
    The CMake API for ament_mypy to perform static type analysis on python code
    with mypy.
  </description>
  <maintainer email="<email>">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="<email>">Ted Kern</author>

  <buildtool_depend>ament_cmake_core</buildtool_depend>
  <buildtool_depend>ament_cmake_test</buildtool_depend>

  <buildtool_export_depend>ament_cmake_test</buildtool_export_depend>
  <buildtool_export_depend>ament_mypy</buildtool_export_depend>

  <test_depend>ament_cmake_copyright</test_depend>
  <test_depend>ament_cmake_lint_cmake</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

CMake Configuration

We write the installation and testing instructions in CMakeLists.txt, as well as pass our extras file to ament_package. This is the one for mypy; yours should look pretty similar:

cmake_minimum_required(VERSION 3.5)

project(ament_cmake_mypy NONE)

find_package(ament_cmake_core REQUIRED)
find_package(ament_cmake_test REQUIRED)

ament_package(
  CONFIG_EXTRAS "ament_cmake_mypy-extras.cmake"
)

install(
  DIRECTORY cmake
  DESTINATION share/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_cmake_copyright REQUIRED)
  ament_copyright()

  find_package(ament_cmake_lint_cmake REQUIRED)
  ament_lint_cmake()
endif()

Then we register our extension with ament in ament_cmake_[linter]-extras.cmake. Again, this one is for mypy, but you should be able to easily repurpose it.

find_package(ament_cmake_test QUIET REQUIRED)

include("${ament_cmake_mypy_DIR}/ament_mypy.cmake")

ament_register_extension("ament_lint_auto" "ament_cmake_mypy"
  "ament_cmake_mypy_lint_hook.cmake")

We then create a CMake function in cmake/ament_[linter].cmake to invoke our test when needed. This will be specific to your linter and the wrapper you wrote above, but here’s how it looks for mypy:

#
# Add a test to statically check Python types using mypy.
#
# :param CONFIG_FILE: the name of the config file to use, if any
# :type CONFIG_FILE: string
# :param TESTNAME: the name of the test, default: "mypy"
# :type TESTNAME: string
# :param ARGN: the files or directories to check
# :type ARGN: list of strings
#
# @public
#
function(ament_mypy)
  cmake_parse_arguments(ARG "" "CONFIG_FILE;TESTNAME" "" ${ARGN})
  if(NOT ARG_TESTNAME)
    set(ARG_TESTNAME "mypy")
  endif()

  find_program(ament_mypy_BIN NAMES "ament_mypy")
  if(NOT ament_mypy_BIN)
    message(FATAL_ERROR "ament_mypy() could not find program 'ament_mypy'")
  endif()

  set(result_file "${AMENT_TEST_RESULTS_DIR}/${PROJECT_NAME}/${ARG_TESTNAME}.xunit.xml")
  set(cmd "${ament_mypy_BIN}" "--xunit-file" "${result_file}")
  if(ARG_CONFIG_FILE)
    list(APPEND cmd "--config" "${ARG_CONFIG_FILE}")
  endif()
  list(APPEND cmd ${ARG_UNPARSED_ARGUMENTS})

  file(MAKE_DIRECTORY "${CMAKE_BINARY_DIR}/ament_mypy")
  ament_add_test(
    "${ARG_TESTNAME}"
    COMMAND ${cmd}
    OUTPUT_FILE "${CMAKE_BINARY_DIR}/ament_mypy/${ARG_TESTNAME}.txt"
    RESULT_FILE "${result_file}"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
  )
  set_tests_properties(
    "${ARG_TESTNAME}"
    PROPERTIES
    LABELS "mypy;linter"
  )
endfunction()

This function checks for the existence of your linting CLI, prepares the argument list to pass in, creates an output directory for the report, and labels the test type.

Finally, in ament_cmake_[linter]_lint_hook.cmake, we write the hook into the function we just defined. This one is for mypy but yours should look almost identical:

file(GLOB_RECURSE _python_files FOLLOW_SYMLINKS "*.py")
if(_python_files)
  message(STATUS "Added test 'mypy' to statically type check Python code.")
  ament_mypy()
endif()

Final Steps

With both packages ready, we build our new packages using colcon:

~/ros2/src $ colcon build --packages-select ament_mypy ament_cmake_mypy --event-handlers console_direct+ --symlink-install

If all goes well, we can now use this linter just like any other to test our Python packages!

It’s highly recommended you write a test suite to go along with your code. ament_mypy lints itself with flake8 and mypy, and has an extensive pytest-based suite of tests to validate its behavior. You can see this suite here.
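
As a rough sketch of the kind of checks such a suite can contain (hypothetical cases using pytest's tmp_path fixture, not the actual ament_mypy tests):

from pathlib import Path

from ament_mypy.main import main


def test_reports_annotation_error(tmp_path: Path) -> None:
    # a deliberate type mismatch should make the wrapper return non-zero
    bad = tmp_path / 'bad.py'
    bad.write_text('x: int = "not an int"\n')
    assert main(argv=[str(bad)]) != 0


def test_clean_file_passes(tmp_path: Path) -> None:
    # a correctly annotated file should pass
    good = tmp_path / 'good.py'
    good.write_text('x: int = 3\n')
    assert main(argv=[str(good)]) == 0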

Check out our other article on how to use the mypy linter if you’d like to learn more about how to invoke linters from your testing suite for other packages.

21 August, 2019 09:30AM

August 20, 2019

hackergotchi for Netrunner

Netrunner

Netrunner 19.08 – Indigo released

The Netrunner Team is happy to announce the immediate availability of Netrunner 19.08 Indigo – 64bit ISO. This version is based upon Debian Buster (10) and comes with a few new updated software versions: KDE Plasma 5.14.5, KDE Frameworks 5.54, KDE Applications 18.08, Qt 5.11.3, Linux Kernel 4.19.0~5, Firefox-ESR 60.8.1, Thunderbird 60.7.2. Switching Firefox […]

20 August, 2019 08:29PM by llectronics

hackergotchi for ARMBIAN

ARMBIAN

La Frite

20 August, 2019 03:28PM by Igor Pečovnik

hackergotchi for Maemo developers

Maemo developers

Migrating to a new Mastodon instance

The wonders of improvised Mastodon instances: one node disappears after an outage caused by a summer heatwave, leaving its users no way to migrate their data or to notify their followers.

After about one month of waiting for the node to come up or give some signals of life, I've decided to create a new account on another instance. If you use Mastodon and you were following me, please forgive me for the annoyance and follow me again here.


20 August, 2019 02:39PM by Alberto Mardegan (mardy@users.sourceforge.net)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: How to integrate Ubuntu with Active Directory

Ubuntu and Active Directory

Ubiquitous use of Microsoft tools coupled with increasing popularity of open source Linux software for enterprise presents new challenges for non-Microsoft operating systems that require seamless integration with Active Directory for authentication and identity management. This is because Active Directory was never designed as a cross-platform directory service.

Integrating Ubuntu Desktop 18.04 LTS into an existing Active Directory architecture can be an automated and effortless process when using the Powerbroker Identity Service Open tool (PBIS Open), a derivative of BeyondTrust’s Open-Source Active Directory Bridging.

Integrating Ubuntu Desktop into an existing Active Directory architecture can be an automated and effortless process

This whitepaper provides detailed insights and step-by-step instructions for using PBIS Open to integrate Ubuntu Desktop into Active Directory and suggests alternative solutions in cases where it is not a suitable option.

What can I learn from this whitepaper?

  • Overview, benefits and drawbacks of using PBIS Open to integrate third party operating systems into an existing Microsoft Active Directory architecture.
  • Detailed steps for PBIS Open set up and integrating Ubuntu into Active Directory.
  • Alternative tools to integrate Ubuntu into Active Directory.


20 August, 2019 02:23PM

Raphaël Hertzog: Promoting Debian LTS with stickers, flyers and a video

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor the Debian LTS project through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some persons will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Click on the picture to see it full size

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.


20 August, 2019 10:45AM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2019

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2h extra hours June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).

Evolution of the situation

July was different from other months. First, some people were on actual vacations, while 4 of the above 14 contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. A new promotional video about Debian LTS, aimed at potential sponsors, was also shown there for the first time.

DebConf19 was also a success with respect to on-boarding new contributors: we found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable as Buster has been released and thus Jessie became oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


20 August, 2019 09:38AM

August 19, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 592

Welcome to the Ubuntu Weekly Newsletter, Issue 592 for the week of August 11 – 17, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

19 August, 2019 10:50PM

Jono Bacon: Announcing my new book: ‘People Powered: How communities can supercharge your business, brand, and teams’

I am absolutely thrilled to announce my brand new book, ‘People Powered: How communities can supercharge your business, brand, and teams’ published by HarperCollins Leadership.

It will be available in hard cover, audiobook, and e-book formats, available from Amazon, Audible, Walmart, Target, Google Play, Apple iBooks, Barnes and Noble, and other great retailers.

The book is designed for leaders, founders, marketing and customer success staff, community managers/evangelists, and others who want to build a more productive, more meaningful relationship with your users, customers, and broader audience.

‘People Powered’ covers three key areas:

  1. The value and potential of building a community inside and outside a business, how it can create a closer relationship with your users and customers, and deliver tangible value such as improved support, technology development, advocacy, and more.
  2. I present the strategic method that I have used with hundreds of clients and companies I consult with and advise. This guides you through how to create a comprehensive, productive, and realistic community strategy that scales up, build cross-departmental skin in the game, create incentives, run events, measure community success, and deliver results.
  3. Finally, I walk you through how to integrate this strategy into a business, covering hiring staff, building internal skills and capabilities, measuring this work with a series of concrete maturity models, and much more.

The book covers a comprehensive range of topics within these areas:

The book features a foreword from New York Times bestseller Peter Diamandis, founder of XPRIZE and Singularity University.

It also features contributions from Joseph Gordon-Levitt (Emmy-award winning actor), Jim Whitehurst (CEO, Red Hat), Mike Shinoda (Co-Founder, Linkin Park), Ali Velshi (Anchor, MSNBC), Jim Zemlin (Executive Director, The Linux Foundation), Noah Everett (Founder, TwitPic), Alexander van Engelen (Contributor, Fractal Audio Systems), and others.

The book has also received a comprehensive range of endorsements, including Nat Friedman (CEO, GitHub), Jim Whitehurst (CEO, Red Hat), Whitney Bouck (COO, HelloSign), Jeff Atwood (Founder, StackOverflow/Discourse), Juan Olaizola (COO, Santander Espana), Jamie Hyneman (Co-Creator and Presenter, Mythbusters), and many others:

Here are a few sample endorsements:

“If you want to tap into the power that communities can bring to businesses and teams, there is no greater expert than Jono Bacon.”

Nat Friedman, CEO of GitHub

“If you want to unlock the power of collaboration in communities, companies, and teams, Jono should be your tour guide and ‘People Powered’ should be your map.”

Jamie Smith, Former Advisor to President Barack Obama

“If you don’t like herding cats but need to build a community, you need to read ‘People Powered’.”

Jamie Hyneman, Co-Creator/Host of Mythbusters

“In my profession, building networks is all about nurturing relationships for the long term. Jono Bacon has authored the recipe how to do this, and you should follow it.”

Gia Scinto, Head of Talent at YCombinator

“When people who are not under your command or payment eagerly work together towards a greater purpose, you can move mountains. Jono Bacon is one of the most accomplished experts on this, and in this book he tells you how it’s done.”

Mårten Mickos, CEO of HackerOne

“Community is fundamental to DigitalOcean’s success, and helped us build a much deeper connection with our audience and customers. ‘People Powered’ presents the simple, pragmatic recipe for doing this well.”

Ben Uretsky, Co-Founder of DigitalOcean

“Technology tears down the barriers of collaboration and connects our communities – globally and locally. We need to give all organizations and developers the tools to build and foster this effort. Jono Bacon’s book provides timely insight into what makes us tick as humans, and how to build richer, stronger technology communities together.”

Kevin Scott, CTO of Microsoft

People Powered Preorder Package

‘People Powered’ is released on 12th November 2019 but I would love you wonderful people to preorder the book.

Preordering will give you access to a wide range of perks. This includes early access to half the book, free audio book chapters, an exclusive six-part, 4-hour+ ‘People Powered Plus’ video course, access to a knowledge base with 100+ articles, 2 books, and countless videos, exclusive webinars and Q&As, and sweepstakes for free 1-on-1 consulting workshops.

All of these perks are available just for the price of buying the book, there are no additional costs.

To unlock this preorder package, you simply buy the book, fill in a form with your order number and these perks will be unlocked. Good times!

To find out more about the book and unlock the preorder package, click here

The post Announcing my new book: ‘People Powered: How communities can supercharge your business, brand, and teams’ appeared first on Jono Bacon.

19 August, 2019 03:00PM

Ubuntu Blog: Design and Web team summary – 16 August 2019

This iteration was the Web & design team’s first iteration of the second half of our roadmap cycle, after returning from the mid-cycle roadmap sprint in Toronto 2 weeks ago.

Priorities have moved around a bit since before the cycle, and we made a good start on the new priorities for the next 3 months. 

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

We launched three takeovers; “A guide to developing Android apps on Ubuntu”, “Build the data centre of the future” and “Creating accurate AI models with data”.

Ubuntu.com Vanilla conversion 

We’ve made good progress on converting ubuntu.com to version 2.3.0 of our Vanilla CSS framework.

EKS redesign

We’ve been working on a new design for our EKS images page.

Canonical.com design evolution

New designs and prototypes are coming along well for redesigned partners and careers sections on canonical.com.

Vanilla squad

The Vanilla squad works on constantly improving the code and design patterns in our Vanilla CSS framework, which we use across all our websites.

Ubuntu SSO refresh

The squad continues to make good progress on adding Vanilla styling to all pages on login.ubuntu.com.

Colour theming best practices

We investigated some best practices for the use of colours in themes.

Improvements to Vanilla documentation

We made a number of improvements to the documentation of Vanilla framework.

Base

The Base squad supports the other squads with shared modules, development tooling and hosting infrastructure across the board.

certification.ubuntu.com

We continued to progress with the back-end rebuild and re-hosting of certification.ubuntu.com, which should be released next iteration.

Blog improvements

We investigated ways to improve the performance of our blog implementations (most importantly ubuntu.com/blog). We will be releasing new versions of the blog module over the next few weeks which should bring significant improvements.

MAAS

The MAAS squad works on the browser-based UI for MAAS, as well as the maas.io website.

“Real world MAAS”

We’ve been working on a new section for the maas.io homepage about “Real world MAAS”, which will be released in the coming days. As MAAS is used at enterprises of different scales, we’re providing grouped, curated content for three of the main audiences.

UI settings updates

We’ve made a number of user experience updates to the settings page in the MAAS UI, including significant speed improvements to the Users page in conjunction with the work of moving the settings part of the application to React (from Django). We have completed the move of the General, Network, and Storage tabs, and have redesigned the experience for DHCP snippets and Scripts. 

Redesigned DHCP snippets tab

JAAS

The JAAS squad works on jaas.ai, the Juju GUI, and upcoming projects to support Juju.

This iteration we set up a bare-bones scaffold of our new JAAS Dashboard app using React and Redux.

Snaps

The Snap squad works on improvements to snapcraft.io.

Updating snapcraft.io to Vanilla 2.3.0

We continued work updating snapcraft.io to the latest version of Vanilla.

19 August, 2019 09:39AM

Ubuntu Blog: Issue #2019.08.19 – Kubeflow at CERN

  • Replicating Particle Collisions at CERN with Kubeflow – this post is interesting for a number of reasons. First, it shows how Kubeflow delivers on the promise of portability and why that matters to CERN. Second, it reiterates that using Kubeflow adds negligible performance overhead as compared to other methods for training. Finally, the post shows another example of how images and deep learning can replace more computationally expensive methods for modelling real-world behaviour. This is the future, today.
  • AI vs. Machine Learning: The Devil Is in the Details – Need a refresh on what the difference is between artificial intelligence, machine learning and deep learning? Canonical has done a webinar on this very topic, but sometimes a different set of words are useful, so read this article for a refresh. You’ll also learn about a different set of use cases for how AI is changing the world – from Netflix to Amazon to video surveillance and traffic analysis and predictions.
  • Making Deep Learning User-Friendly, Possible? – The world has changed a lot in the 18 months since this article was published. One of the key takeaways from this article is a list of features to compare several standalone deep learning tools. The exciting news? The output of these tools can be used with Kubeflow to accelerate Model Training. There are several broader questions as well – How can companies leverage the advancements being made within the AI community? Are better tools the right answer? Finding a partner may be the right answer.
  • Interview spotlight: One of the fathers of AI is worried about its future – Yoshua Bengio is famous for championing deep learning, one of the most powerful technologies in AI. Read this transcript to understand some of his concerns with the direction of AI, as well as the exciting developments in AI. Research that is extending deep learning into things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

The post Issue #2019.08.19 – Kubeflow at CERN appeared first on Ubuntu Blog.

19 August, 2019 08:00AM

David Tomaschik: Hacker Summer Camp 2019: CTFs for Fun & Profit

Okay, I’m back from Summer Camp and have caught up (slightly) on life. I had the privilege of giving a talk at BSidesLV entitled “CTFs for Fun and Profit: Playing Games to Build Your Skills.” I wanted to post a quick link to my slides and talk about the IoT CTF I had the chance to play.

I played in the IoT Village CTF at DEF CON, which was interesting because it uses real-world devices with real-world vulnerabilities instead of the typical made-up challenges in a CTF. On the other hand, I’m a little disappointed that it seems pretty similar (maybe even the same) year-to-year, not providing much variety or new learning experiences if you’ve played before.

19 August, 2019 07:00AM

August 17, 2019

hackergotchi for Xanadu developers

Xanadu developers

Curso de VIM

For those who don't know VIM: it is an improved version of the VI text editor, one of the most popular editors in Linux distributions, and a tool every sysadmin should know.

A few days ago a friend asked me to help him learn this tool, and while looking for material to offer him I found this gem and decided to share it.

https://anderrasovazquez.github.io/curso-de-vim/

It is a short course by Ander Raso Vazquez aimed at teaching the basics of this great tool.

I hope this information is useful to you. Regards.

17 August, 2019 05:36PM by sinfallas

August 15, 2019

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Linting ROS 2 Packages with mypy

One of the most common complaints from developers moving into large Python codebases is the difficulty in figuring out type information, and the ease by which type mismatch errors can appear at runtime.

Python 3.5 added support for a type annotation system, described in PEP 484. Python 3.6+ expands this with individual variable annotations (PEP 526). While purely decorative and optional, a tool like mypy can use it to perform static type analysis and catch errors, just like compilers and linters for statically typed languages.
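
As a small, made-up example of the kind of mistake these annotations let mypy catch before the code ever runs:

def greet(name: str) -> str:
    return 'Hello, ' + name


greeting: str = greet(42)  # mypy flags this call: an int is passed where a str is declared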

There are limitations to mypy, however. It only knows what it’s explicitly told. Functions and classes without annotations are by default not checked, though they can be configured to default to Any or raise mypy errors.

The ROS 2 build farm is essentially only set up to run colcon test. As a result, any contributor wishing to use mypy currently needs to do so manually and hope that no other changes were made by someone not using annotations, or incorrectly annotating their code. This leads to many packages that are partially annotated, or whose incorrect annotations are silently ignored by falling back to Any.

Seeking a fix that 1) helps us remember to check our contributions and 2) guarantees that correctly annotated packages stay that way, we created a mypy linter for ament that can be integrated with the rest of the package test suite, allowing mypy to be run automatically in the ROS 2 build farm and as part of the CI process. Now we can guarantee type correctness in our Python code, and avoid the dreaded type mismatch errors!

ament_lint in action

The ament_lint metapackage defines many common linters that can integrate into the build/test pipeline for ROS 2. The ament_mypy package within it handles mypy integration.

To add it as a test within your test suite, you’ll need to make a few changes to your package:

  • Add ament_mypy as a test dependency in your package.xml
  • Add pytest as a test requirement in setup.py
  • Write a test case that invokes ament_mypy and fails accordingly
  • Add ament_mypy as a testing requirement to CMakeLists.txt, if using CMake

package.xml

For the first, find the section of your package.xml after the name/author/license information, where the dependencies are declared. Alongside the other depend blocks, add an entry

<test_depend>ament_mypy</test_depend>

setup.py

For setup.py, add the keyword argument

tests_require=['pytest']

if it’s not already present.

Test Case

Finally, we add a file, test/test_mypy.py, that contains a call to ament_mypy.main():

from ament_mypy.main import main

import pytest


@pytest.mark.mypy
@pytest.mark.linter
def test_mypy():
    rc = main()
    assert rc == 0, 'Found code style errors / warnings'

If ament_mypy.main() returns non-zero, our test will fail and the error messages will display.

CMake

For configuring CMake, there are two options: manually list out each individual linter and run them, or use the ament_lint_auto convenience package to run all ament_lint dependencies.

In either case, package.xml needs to be configured as above, with an additional dependency of

<buildtool_depend>ament_cmake</buildtool_depend>

To manually add ament_mypy, add the following code to your CMakeLists.txt file:

find_package(ament_cmake REQUIRED)
if(BUILD_TESTING)
  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy()
endif()

To use ament_lint_auto, add it as a test dependency to package.xml

<test_depend>ament_lint_auto</test_depend>

And add the following to CMakeLists.txt, before the ament_package() call

# this must happen before the invocation of ament_package()
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  ament_lint_auto_find_test_dependencies()
endif()

(Optional) Configuring mypy

To pass custom configurations to mypy, you can specify a ‘.ini’ configuration file (documented here) in the arguments to main.

setup.py

Create a config directory under test, and a mypy.ini file within. Fill the file with your custom configuration, e.g.:

# Global options:

[mypy]
python_version = 3.5
warn_return_any = True
warn_unused_configs = True

# Per-module options:

[mypy-mycode.foo.*]
disallow_untyped_defs = True

[mypy-mycode.bar]
warn_return_any = False

[mypy-somelibrary]
ignore_missing_imports = True

In setup.py, pass in the --config option with the path to your desired file.

from pathlib import Path

from ament_mypy.main import main

import pytest


@pytest.mark.mypy
@pytest.mark.linter
def test_mypy():
    config_path = Path(__file__).parent / 'config' / 'mypy.ini'
    rc = main(argv=['--exclude', 'test', '--config', str(config_path.resolve())])
    assert rc == 0, 'Found code style errors / warnings'

CMake

When using CMake, you’ll need to pass the CONFIG_FILE arg. In the manual invocation example, that means changing the BUILD_TESTING block as follows (assuming your mypy.ini file is in the same directory as above):

find_package(ament_cmake REQUIRED)
if(BUILD_TESTING)
  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy(CONFIG_FILE "${CMAKE_CURRENT_LIST_DIR}/test/config/mypy.ini")
endif()

The additional argument means ament_cmake_mypy cannot be auto invoked by ament_lint_auto. If you’re already using ament_lint_auto for other packages, you’ll need to exclude ament_mypy.

To exclude ament_cmake_mypy, set the AMENT_LINT_AUTO_EXCLUDE variable and then manually find and invoke it:

# this must happen before the invocation of ament_package()
if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  list(APPEND AMENT_LINT_AUTO_EXCLUDE
    ament_cmake_mypy
  )
  ament_lint_auto_find_test_dependencies()

  find_package(ament_cmake_mypy REQUIRED)
  ament_mypy(CONFIG_FILE "${CMAKE_CURRENT_LIST_DIR}/test/config/mypy.ini")
endif()

Running the Test

To run the test and get output to the console, run the following in your workspace:

colcon test --event-handlers console_direct+

To test only your package:

colcon test --packages-select <YOUR_PACKAGE> --event-handlers console_direct+

15 August, 2019 09:29PM


hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S12E19 – Starglider

This week we’ve been fixing floors and playing with the new portal HTML element. We round up the Ubuntu community news including the release of 18.04.3 with a new hardware enablement stack, better desktop integration for Livepatch and improvements in accessing the latest Nvidia drivers. We also have our favourite picks from the general tech news.

It’s Season 12 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

15 August, 2019 02:00PM

Julian Andres Klode: APT Patterns

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into a APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of config-files, or ~m for automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not
apt not understanding invalid patterns

apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: Basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge, patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples

apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fallback to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

15 August, 2019 01:55PM

Ubuntu Blog: 8 Ways Snaps are Different

Depending on the audience, the discussion of software packaging elicits very different responses. Users generally don’t care how software is packaged, so long as it works. Developers typically want software packaging as a task to not burden them and just magically happen. Snaps aren’t magic, but aim to achieve both ease of maintenance and transparency in use.

Most software packaging systems differ only a little in file format, tools used in their creation and methods of discovery and delivery. Snaps come with a set of side benefits beyond just delivering bytes in a compressed file to users. In this article, we’ll cover just 8 of the ways in which snaps improve upon existing Linux software packaging.

Easy publishing on your timescales

Getting software in officially blessed Linux distribution archives can be hard. This is especially true where the software archives impose strict adherence to a set of distribution rules. For the leading Linux brands, there can be a lengthy delay between a request for inclusion, submission and the package landing in a stable release.

External repositories can be set up and hosted by software developers. However, these software archives are often difficult for users to discover, and not straightforward to enable, especially for novices. The developer also takes on the added overhead of maintaining the repository.

On the other hand, snaps are published in a central store, which is easily updated and straightforward to search and install from. Within a single day (and often faster), a developer can go from snapcraft register to claim the name of their application to snapcraft push to upload, and snapcraft release to publish their application.

Developers can publish builds for multiple processor architectures at their own pace without having to wait for distribution maintainers to rebuild, review, sponsor and upload their packages. Developers are in control of the release cadence for their software.

Automatic updates

With the best will in the world, most users don’t install software updates. Sure, that doesn’t mean you, dear reader. We’re confident you’re on top of apt upgrade, dnf update or pacman -Syyu each day, perhaps numerous times every day. A significant proportion of users do not update their systems regularly, though. Estimates place this anywhere between 40% and 70%. This can be even worse for unattended devices such as remote servers, or Raspberry Pis tucked away running an appliance.

Modern Linux distributions have sought to mitigate this with background tasks to automate critical security updates, or graphical notifications to remind users. However, many users switch these off, or simply ignore the notification, leaving themselves at risk.

Snaps solve this by enabling automatic updates by default on all installations. When the developer publishes a new release of software to the store, they can be confident that users will automatically get those updates soon after. By default, the snapd daemon will check the store for updates multiple times a day.

However, some users do not wish to have their software updated immediately. Perhaps they're giving a presentation and want to use the current version they've prepared for, or maybe their Internet connectivity or allowance is limited. Snaps enable users to control when updates are delivered. Users can postpone them to update outside the working day, overnight, or later in the month.

Users can also run snap refresh to force all snaps to update, or update an individual snap with snap refresh (snapname). Auto-refreshing ensures users get the latest security updates, bug fixes and feature improvements, while still retaining control where required.
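For example, assuming a snapd recent enough to support the refresh.timer option, a user could check the current schedule and confine refreshes to a Friday-night window:

snap refresh --time
sudo snap set system refresh.timer=fri,23:00-01:00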

One package for everyone

It's commonly known that there are (probably) more Linux distributions than there are species of beetle on Planet Earth. Release a package in one format, and users of all other distros will rise up and demand packages for their specific spin of Linux. Each additional package that has to be created, published and maintained means extra work for the developer(s). There's a diminishing return on investment for every additional packaging format supported.

With one snap package, a developer can hit a significant proportion of users across more than 40 distributions, saving time on packaging, QA, and release activities. The same snap will work on Arch Linux, Debian, Ubuntu, Fedora and numerous other distributions built upon those bases, such as Manjaro, Linux Mint, elementary OS and CentOS. While not every distribution is covered, a large section of the Linux-using community is catered to with snaps.  

Changing channels, tracks and branches

When publishing software in traditional Linux distribution repositories, usually there is only one supported version or release of that software available at a time. While distributions may have separate ‘stable’, ‘testing’ and ‘unstable’ branches, these are typically entire repositories. 

As a result, it’s not usually straightforward or even possible to granularly populate the individual release with multiple versions of the same application. Moreover, the overhead of maintaining one package in those standard repositories is enough that uploading multiple versions would be unnecessarily onerous.

Usually a developer will build beta releases of their software for experts, QA testing or enthusiasts to try out before a release candidate is published ahead of a stable release. As Linux distributions don’t easily support multiple releases of the same application in their repository, the developer has to maintain separate packages out of band. Maintaining these repositories of beta, candidate and stable releases is further overhead.

The Snap Store has this built in for all snaps. By default there are four risk levels, called 'channels', named 'stable', 'candidate', 'beta' and 'edge'. Developers can optionally publish different builds of the same application to those channels. For example, the VLC developers use the 'stable' channel for their final releases and the 'edge' channel for daily builds, directly from their continuous integration system.

Users may install the stable release, but upon hearing of new features in the upcoming beta may choose to snap refresh (snapname) --channel=beta to test them out. They can later snap refresh (snapname) --channel=stable to revert to the stable channel. Users can elect to stick to a particular risk level they're happy with on a per-application basis. They don't need to update their entire OS to get the 'testing' builds of software, and don't have to opt in en masse for all applications either.

Furthermore, the Snap Store supports tracks, which enable developers to publish multiple supported release streams for their application in the store. By default, there is only one implied track, 'latest', but developers may request additional tracks for each supported release. For example, at the time of writing, the node snap contains separate tracks for Node 6, 8, 9, 10, 11 and 12, plus the default 'latest' track which contains Node 13 nightly builds.
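For instance, a user who wants to stay on the Node 10 series could install from that track instead of 'latest' (the track name reflects the node snap at the time of writing, and the snap is assumed to need classic confinement):

sudo snap install node --classic --channel=10/stable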

Branches are useful for developers to push short-lived ‘hidden’ builds of their software. This can often be useful when users report a bug with the software, and the developer wishes to produce a temporary test build specifically for that user, and anyone else affected by the bug. The developer can snapcraft push (snapname) --release=candidate/fix-1234 to push a candidate build to the fix-1234 branch.

Delta uploads and downloads

With most traditional Linux packaging systems, when an update is published, all users get the entire package every time. As a result, when a new update to a large package is released, there's a significant download for every user. This places a load – and a cost – on the host of the repository, and costs the user time and bandwidth.

The Snap Store supports delta updates for both uploads and downloads. The snapcraft tool used for publishing snaps to the Snap Store will determine whether it's more efficient to upload a full snap or a delta each time. Similarly, the snapd daemon, in conjunction with the Snap Store, will calculate whether it's better to download a delta or the full-size snap. Users do not need to specify which; this is automatic.

Snapshot on removal

Traditional Linux packaging systems don’t associate data with applications directly. When a software package is installed, it may create databases, configuration files and application data in various parts of the filesystem. Upon application removal, this data is usually left behind, all over the filesystem. It’s an exercise for the user or system administrator to clean up after software is removed.

Snaps seek to solve this as part of the application confinement. When a snap is installed, it has access to a set of directories in which configuration and application data may be stored. When the snap is removed, the associated data from those directories is also removed. This ensures snaps can be atomically added and removed, leaving the system in a consistent state afterwards.

Starting in snapd 2.37, it's possible to take a snapshot of application data prior to removal. The snap save (snapname) command will create a compressed snapshot of the application data in /var/lib/snapd/snapshots. The list of saved snapshots can be seen with snap saved, and a snapshot can be restored via snap restore (snapshot). Snapshots can be removed with snap forget (snapshot) to reclaim disk space.

In addition, starting in snapd 2.39, an automatic snapshot is taken whenever a snap is removed from the system. These snapshots are kept for 31 days by default. The retention period may be configured as low as 24 hours, or raised to a longer duration. Alternatively, the snapshot feature can be disabled completely.
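A minimal sketch of tuning this, assuming the snapshots.automatic.retention system option that ships with the automatic snapshot feature:

sudo snap set system snapshots.automatic.retention=24h
sudo snap set system snapshots.automatic.retention=no     # disables automatic snapshots entirely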

Parallel installs

Existing packaging systems on Linux don't cater well to having multiple versions of the same application installed at once. In some cases, developers are well catered for: multiple versions of a small selection of tools, such as gcc-6 and gcc-7, are available in the repositories and can be installed simultaneously. However, this only applies to specific packages; it is not universally true that any packaged application can be installed multiple times with different versions.

Snaps solve this with an experimental parallel install feature. Users can install multiple versions of the same snap side by side. Each can be given its own 'instance key', a unique name used to refer to that install. They can then choose which instance to launch, or indeed launch both. For example, a user may want both the stable and daily builds of VLC installed at once, allowing them to test upcoming features while still being able to play videos on the stable release when the daily build is unstable.
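Because the feature is experimental it has to be switched on first; a sketch of installing a second copy of VLC under a hypothetical 'daily' instance key might look like this:

sudo snap set system experimental.parallel-instances=true
sudo snap install vlc                     # stable build
sudo snap install vlc_daily --edge        # daily build, under its own instance key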

Application discovery

Traditional software packaging is typically combined with a graphical package manager to make it easier for users to install software. For a long time, many of these graphical tools have languished in design and features, serving as a predominantly technical frontend to the underlying console package management tools.

For some years developers have been publishing their software in external repositories, PPAs, on GitHub releases pages, or on their own websites' download pages.

The default tools don't expose applications that aren't part of the default repositories. While some have had visual refreshes and feature updates, they still don't enable users to discover brand new software hosted externally. This makes it difficult for developers to get their software in front of modern Linux distribution users.

The Snap Store solves this in multiple ways. The graphical desktop package managers GNOME Software and KDE Discover both feature plugins that can search the Snap Store. Moreover, a web frontend to the Snap Store enables users to browse and search for new applications by category or publisher.

Making it easier to publish software in the Snap Store means that delivering a snap can become part of the standard release process for applications. Once developers publish their snap to the Snap Store, it’s immediately visible to users both in the graphical storefronts and on the web.

Developers can link directly to their Snap Store page as a permanent storefront for their application. The storefront pages show screenshots, videos, descriptions along with currently published versions and details of how to install the application. The store features buttons and cards, which can be embedded in pages and blog posts to promote the snap. Users are able to share these pages with friends and colleagues who may appreciate the application, which will drive other users to these snaps.

Furthermore, the Snap Advocacy team regularly highlight new applications on social media and via blog posts to draw user attention to appealing, up-to-date and useful new software. The team also regularly updates the list of ‘Featured Apps’ presented in both the graphical desktop package managers, and on the front page of the Snap Store web frontend.

Developers are encouraged to ensure their store page looks great with screenshots, videos, a rich application description along with links to support avenues. Application publishers can reach the Snap Advocacy team via the snapcraft forum to request their app is included in a future social media or blog update, or to be considered for inclusion as a featured entry in the Snap Store.

Conclusion

In this article I picked eight of the features that set snapcraft, snap and the Snap Store apart from other traditional and contemporary packaging systems. For many people a lot of the technical details of software packaging and delivery are of little interest. What most people care about is getting fresh software with security updates, in a timely fashion. That’s exactly what snaps aim to do. The rest is icing on the cake.

As always we welcome comments and suggestions on our friendly forum. Until next time.

Photo by Samuel Zeller on Unsplash

The post 8 Ways Snaps are Different appeared first on Ubuntu Blog.

15 August, 2019 10:44AM

Ubuntu Blog: Why multi-cloud has become a must-have for enterprises: six experts weigh in

Remember the one-size-fits-all approach to cloud computing? That was five years ago. Today, multi-cloud architectures that use two, three, or more providers, across a mix of public and private platforms, are quickly becoming the preferred strategy at most companies.

Despite the momentum, pockets of hesitation remain. Some sceptics are under the impression that deploying cloud platforms and services from multiple vendors can be a complex process. Others worry about security, regulatory, and performance issues.

To get a snapshot of current thinking about multi-cloud, we asked six industry experts the following question: In your opinion, why should organisations adopt a multi-cloud approach, and what will happen if they don’t?

First, here are our thoughts. There are extremely compelling reasons that multi-cloud is proving itself to be the correct end game. Multi-cloud provides huge agility and cost efficiency with its flexibility to separate different workloads into different environments depending on their specific requirements. This includes a compelling and economically comparable on-premise strategy for cloud and opens up new opportunities for innovation and accelerated roll out of new services to customers. Multi-cloud also means companies can avoid vendor lock-in and dependence on a single provider. While lingering concerns about multi-cloud are understandable, a host of newer technologies like Kubernetes and increasing consolidation of container runtimes are making life easier for companies that need to make multi-cloud their de facto strategy.

What we heard from other experts:

Sean Michael Kerner, Freelance technology journalist

“Is the Earth flat? Is the Earth the center of the solar system? Reality is of course that the universe is a big place, and Earth is just a small part of it. Similarly, no one cloud should be the center of any enterprise's data strategy on its own. The thinking that there is “only one true cloud” is a type of zealotry that could have non-trivial consequences. It creates a single point of failure, dependencies and lock-in that an organization might not want.

Multi-cloud is about choice, it’s about the ability to deploy workloads on any number of different combinations of on-premises and public cloud providers. With multi-cloud comes the promise of agility and freedom and at the end of the day who doesn’t want their data to be free?”

James Sanders, TechRepublic

Multi-cloud empowers developers to pick and choose the best components for their use cases—imagine a situation where you want to use AWS Lambda for client-facing event handling, but want to take those event logs and analyze them in Google Cloud Platform’s data analytics services. These types of multi-cloud deployments allow developers to leverage the strengths in different cloud platforms, as well as the larger ecosystem of third-party integrations for AWS Lambda that have not yet made it to GCP Cloud Functions. By taking an active view of what is available—not just what features one public cloud platform offers—you retain more ownership of your platform.

Carl Brooks, 451 Research

“Enterprises should use a multi-cloud strategy because this approach offers a range of options for efficiently leveraging the benefits of different platforms. For instance, a legacy application trapped in a traditional LAMP stack is a wasted opportunity if the outcome could be achieved more efficiently in a public cloud. Likewise, private clouds offer their own set of advantages (dedicated resources, customization), combined with flexibility and agility of Infrastructure-as-a-Service. Mixing and matching different platforms and services is readily achievable thanks to APIs and widespread virtualization, and will only get more so. Multi-cloud is also a rapidly normalizing part of IT.  Today, about two-thirds of enterprises have more than one IaaS provider or platform, and the same number consider that a unified hybrid IT strategy is the best path forward for both legacy and cloud-native workloads. Enterprises that fail to modernize their IT within a multi-cloud/hybrid framework will get left behind, plain and simple.”

Dale Vile, Freeform Dynamics

“Use of multiple cloud platforms and services is a fact of life for most mainstream businesses, but accumulating clouds in an ad-hoc manner leads to costly and risky fragmentation. It becomes harder and harder to control overheads, manage security and more generally meet business and operational requirements. This is why a more coordinated multi-cloud approach is so important. With the right combination of strategy, process, discipline and supporting technology, you can deal with many of the challenges and at the same time enjoy the kind of choice and flexibility needed in today’s ever-changing digital environment.”

Devan Adams, IHS Markit

“The use of multiple clouds is critical for organizations that want to avoid cloud service provider (CSP) lock-in. A multi-cloud strategy enables companies to use best-of-breed cloud services from the plethora of service providers in the market, e.g. compute from AWS, container services from Google Cloud, and SaaS from SAP.

Multi-cloud use is a fast growing trend we highlighted in our recently conducted survey – Cloud Service Strategies & Leadership North American Enterprise Survey – 2018, where respondents reported that they were using 10 different CSPs for SaaS (growing to 14 by 2020) and 10 for infrastructure (growing to 13 by 2020). CSPs are helping to ease the use of multi-clouds by releasing multi-cloud management tools for organizations to use for viewing and managing several clouds from one dashboard tool.

Not using a multi-cloud approach makes enterprises totally dependent on one CSP for all their cloud-related services, for some of which it may not offer the best value or service. Also, organizations not using cloud services from more than one provider are limited in the advancement of their services; as new technologies and trends emerge within the market, companies may be delayed in their use of new tools if the CSP they use is a slow adopter.”

Jean Bozman, Hurwitz & Associates

“There are many paths to multi-cloud – not just one bold move to forge a new multi-cloud strategy. Most large enterprises have on-premises computer infrastructure that is the result of many waves of investments and deployments over decades of spending. The journey to multi-cloud is evolving, as enterprises are adopting a range of cloud services to meet specific business challenges, while reducing their IT costs for on-premises data centers. In large enterprises, some business units may have chosen to re-host large numbers of systems on Amazon’s AWS or Microsoft Azure. Others may have chosen Google Cloud Platform (GCP) for its expertise in AI/ML via TensorFlow-based analytics. By adopting a multi-cloud approach, enterprise customers gain choice, operational flexibility — and the freedom to pick CSP services for the optimal mix of business, security and cost considerations.”

Based on what we heard from these experts, multi-cloud is here to stay, and deservedly so. From both a business agility and cost-efficiency standpoint, multi-cloud’s advantages are undeniable. The key is how to properly implement multi-cloud. Enterprises must leverage modern technologies to avoid risky, expensive silos. If they get that right, companies can make sure they’re part of the future, and increasingly the present, of cloud computing.

This article was originally posted on CloudPost.

The post Why multi-cloud has become a must-have for enterprises: six experts weigh in appeared first on Ubuntu Blog.

15 August, 2019 08:57AM

August 14, 2019

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Entity extraction for threat intelligence collection

Introduction This research project is part of my Master’s program at the University of San Francisco, where I collaborated with the AT&T Alien Labs team. I would like to share a new approach to automate the extraction of key details from cybersecurity documents. The goal is to extract entities such as country of origin, industry targeted, and malware name. The data is obtained from the AlienVault Open Threat Exchange (OTX) platform (Figure 1: the website otx.alienvault.com). The Open Threat Exchange is a crowd-sourced platform where users upload “pulses” which contain information about a recent cybersecurity threat. A pulse consists of indicators of compromise and links to blog posts, whitepapers, reports, etc. with details of the attack. The pulse normally contains a link to the full content (a blog post), together with key meta-data manually extracted from the full content (the malware...

Posted by: Sankeerti Haniyur

Read full post

       

14 August, 2019 01:00PM

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

On August 12th 2019, Emma DE2 1.05 in the footsteps of Emma DE3!

On August 12th 2019, the Emmabuntüs Collective is happy to announce the release of the new Emmabuntüs Debian Edition 2 1.05 (32 and 64 bits), based on Debian 9.9 stretch distribution and featuring the XFCE desktop environment.

This distribution was originally designed to facilitate the reconditioning of computers donated to humanitarian organizations, starting with the Emmaüs communities (which is where the distribution’s name obviously comes from), to promote the discovery of GNU/Linux by beginners, as well as to extend the lifespan of computer hardware in order to reduce the waste induced by the over-consumption of raw materials.

This update of our distribution carries over the improvements we implemented in our recent Emmabuntüs DE 3 RC release, based on Debian 10 Buster, and reduces the size of the ISO by rationalizing the included software and removing unsupported languages. This version also brings some new fixes, such as improved dark/light theme handling, support for language selection and localization in live mode, etc.

Post-installation screenshot of EmmaDE2 1.05 with the dark/light theme selection window

This Debian Edition 2-1.05 version includes the following fixes and enhancements:

  • Based on Debian 9.9 Stretch
  • Added Redshift
  • Added the ClipIt utility
  • Added the management of a dark or light theme
  • Added support for languages and localization in the GRUB live mode
  • Fixed automatic partitioning issue during installation
  • Fixed the disconnection by user change, within the XFCE action buttons
  • Fixed boot UEFI mode under VMWare Workstation
  • Updated: HPLip 3.19.6, Multisystem 1.0432, TurboPrint 2.48-2, Firefox ESR 60.8.0

See more information about this release on our Wiki.

14 August, 2019 11:26AM by Patrick Emmabuntüs

On August 12th 2019, Emma DE2 1.05 in the footsteps of Emma DE3!

The Emmabuntüs Collective is happy to announce the release, on August 12th 2019, of the new Emmabuntüs Debian Edition 2 1.05 (32 and 64 bit), based on Debian 9.9 Stretch and XFCE.

This distribution was designed to facilitate the reconditioning of computers donated to humanitarian organizations, originally in particular the Emmaüs communities (hence its name). The goal is to help beginners discover GNU/Linux, and to extend the lifespan of hardware in order to limit the waste caused by the over-consumption of raw materials.

This update of our distribution builds on the improvements we made to our Emmabuntüs DE 3 release based on Debian 10 Buster. It noticeably reduces the size of the ISO by rationalizing part of the included software and removing unsupported languages. This version brings a few fixes, such as improved handling of the dark/light theme, support for languages and localization in live mode, etc.

Presentation of Emmabuntüs by Elias of the Espace Ubuntu/RAP2S to local associations in the town of Tori-Bossito, Benin

The following fixes and improvements have been made in this 1.05 release:

  • Based on Debian 9.9 Stretch
  • Added the management of a dark or light theme
  • Added support for languages and localization in the GRUB live mode
  • Added Redshift
  • Added the ClipIt utility
  • Fixed the automatic partitioning issue during installation
  • Fixed the logout via user switching in the XFCE action buttons
  • Fixed booting in UEFI mode under VMWare Workstation
  • Updated: HPLip 3.19.6, Multisystem 1.0432, TurboPrint 2.48-2, Firefox ESR 60.8.0

See the original announcement on our Wiki.

14 August, 2019 11:20AM by Patrick Emmabuntüs

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: OpenStack Charms 19.07 – Percona Cluster Cold Start, DVR SNAT and more

Canonical is proud to announce the availability of OpenStack Charms 19.07. This new release introduces a range of exciting features and several improvements which enhance Charmed OpenStack across various areas. Let’s talk about a few notable ones.

Percona cluster cold start

The percona-cluster charm now contains new logic and actions to assist with operational tasks surrounding a database shutdown scenario. However, user interaction is still required.

In the event of an unexpected power outage and cold boot, the cluster will be unable to re-establish itself without manual intervention. In such situations, users should determine the node with the highest sequence number, and bootstrap the node by running the following action:

juju run-action --wait percona-cluster/<unit-number> bootstrap-pxc

In order to notify the cluster of the new bootstrap UUID, run the following action:

juju run-action --wait percona-cluster/<unit-number> notify-bootstrapped

The percona-cluster application will then return back to a clustered and healthy state.

DVR SNAT

The neutron-openvswitch charm now supports deployment of DVR (Distributed Virtual Routing) based routers with combined SNAT (Source Network Address Translation) functionality, removing the need to use the neutron-gateway charm in some types of deployment.

This implicitly requires that ‘external’ networks are routable to all hypervisors within the deployment to allow effective load balancing of SNAT routers and DVR services across the deployment.

In order to turn the feature on, run:

juju config neutron-openvswitch use-dvr-snat=True

Octavia image lifecycle management

This release introduces the octavia-diskimage-retrofit charm which provides a tool for retrofitting cloud images for use as Octavia Amphora.

One of the problems with Octavia is that it needs a method for generating base images to be deployed as load balancing entities. The octavia-diskimage-retrofit charm solves this problem by providing an action which, upon request, downloads the most recent Ubuntu Server or Minimal Cloud image from Glance, applies OpenStack Diskimage-builder elements from OpenStack Octavia and turns it into an image suitable for use by Octavia.

The charm can be deployed as a subordinate application as follows:

juju deploy glance-simplestreams-sync \
  --config source=ppa:simplestreams-dev/trunk

juju deploy octavia-diskimage-retrofit \
  --config amp-image-tag=octavia-amphora

juju add-relation glance-simplestreams-sync keystone
juju add-relation glance-simplestreams-sync rabbitmq-server 
juju add-relation octavia-diskimage-retrofit glance-simplestreams-sync 
juju add-relation octavia-diskimage-retrofit keystone

Once deployed the retrofitting process can be triggered as follows:

juju run-action octavia-diskimage-retrofit/leader retrofit-image

Nova live migration: Streamline SSH host key handling

This release of the nova-cloud-controller charm has improved the host key discovery and distribution algorithm. The net effect is that adding a nova-compute unit will be faster than before, and the nova-cloud-controller upgrade-charm hook will be significantly improved for large deployments.

The Nova compute service uses direct (machine-to-machine) SSH connections to perform instance migrations. Each compute host must therefore be in possession of every other compute host’s SSH host key via the known hosts file. This release introduces a new boolean configuration option – cache-known-hosts – which allows any given host lookup to be performed just once.

In order to turn the feature on, run:

juju config nova-cloud-controller cache-known-hosts=True

In order to clear the cache, run:

juju run-action nova-cloud-controller clear-unit-knownhost-cache

For more information about OpenStack Charms 19.07, please refer to the official release notes.

The post OpenStack Charms 19.07 – Percona Cluster Cold Start, DVR SNAT and more appeared first on Ubuntu Blog.

14 August, 2019 09:44AM

hackergotchi for Tails

Tails

Tails report for July, 2019

Releases

Documentation and website

  • We updated most of our documentation to Tails based on Debian 10 (Buster). (#16282)

  • We simplified and updated our description of the system requirements. (#11663 and #16810).

  • We fixed the display of the "Tor check" button on the homepage of Tor Browser. (#15312)

    This "Tor check" button is used by around 10% of users.

  • We removed the "% translated" indication from our website because it was misleading. (#16867)

User experience

Hot topics on our help desk

  1. Many people are still having graphic card problems, especially #16815 Error starting GDM with [AMD/ATI] Carrizo.

  2. We got a lot of support requests about 'Tails not being able to delete images'. After a while we realised it was because of a confusing part of our documentation. We will try to fix that soon: #16975 Users get confused at our documentation and think Tails does not delete images at all.

  3. Users keep trying to use Electrum even when, at the moment, it is not easy in Tails.

Infrastructure

  • We finished fixing the description of the mechanism for the revocation of the Tails signing key after an external review. (#15604)

  • We discussed additions of new people to the Tails signing key revocation mechanism. (#16665)

  • The new backups system for our entire infrastructure is live. (#15071)

  • We upgraded our Puppet master (sic) to Debian 10 (Buster), which supports PuppetDB out of the box. This allowed us to drop a bunch of hackish workarounds and it was a great way to fast-track the onboarding of zen, our new sysadmin. (#16460)

  • We made great progress on our web translation platform:

    • We fixed a number of bugs identified since we submitted the platform to a production workload.
    • We modified in depth the permissions model to address issues identified by a security review.
    • We sent a public call for testing.
    • We kept working on documentation for translators.
    • We adjusted the resources allocated to the VM that runs this platform and deployed Apache mod_security to make it a bit less scary.

Funding

Outreach

Past events

  • A few Tails contributors attended DebConf19, the annual Debian Developers and Contributors Conference.

    intrigeri and nodens ran a skill-sharing session about AppArmor.

  • Ulrike, anonym, and sajolida attended Tor Meeting in Stockholm, Sweden.

Upcoming events

On-going discussions

Translations

All the website

  • de: 40% (2292) strings translated, 9% strings fuzzy, 37% words translated
  • es: 53% (3002) strings translated, 5% strings fuzzy, 45% words translated
  • fa: 32% (1803) strings translated, 11% strings fuzzy, 34% words translated
  • fr: 89% (5025) strings translated, 2% strings fuzzy, 88% words translated
  • it: 34% (1947) strings translated, 7% strings fuzzy, 30% words translated
  • pt: 26% (1465) strings translated, 9% strings fuzzy, 22% words translated

Total original words: 59619

Core pages of the website

  • de: 69% (1216) strings translated, 14% strings fuzzy, 71% words translated
  • es: 83% (1453) strings translated, 8% strings fuzzy, 84% words translated
  • fa: 35% (624) strings translated, 13% strings fuzzy, 32% words translated
  • fr: 96% (1680) strings translated, 2% strings fuzzy, 96% words translated
  • it: 65% (1150) strings translated, 16% strings fuzzy, 66% words translated
  • pt: 47% (823) strings translated, 14% strings fuzzy, 48% words translated

Total original words: 16505

Metrics

  • Tails has been started more than 759 660 times this month. This makes 24 505 boots a day on average.

How do we know this?

14 August, 2019 07:09AM

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: Splash Two

Well, I just finished up closing out the remaining account that I had on Tumblr. I hadn't touched it for a while. The property just got sold again and is being treated like nuclear waste. I did export my data and somehow had a two gigabyte export. I didn't realize I used it that much.

My profile on Instagram was nuked as well. As things keep sprouting the suffix of "--by Facebook" I can merrily shut down those profiles and accounts. That misbehaving batch of algorithms mischaracterizes me 85% of the time and I get tired of dealing with such messes. The accretions of outright non-sensical weirdness in Facebook's "Ad Interests" for me get frankly quite disturbing.

Remember, you should take the time to close out logins and accounts you don't use. Zombie accounts help nobody.

14 August, 2019 02:22AM

August 13, 2019

hackergotchi for Purism PureOS

Purism PureOS

The Librem 5 Smartphone in Forbes

Todd Weaver helps Moira Vetter answer the question “Is America Finally Ready For A Surveillance-Free Smartphone?” in a recent article in Forbes.

The article begins by pointing out that several companies have tried to release private, secure smartphones – and most have failed. Does that mean privacy and security are impossible to achieve? Well, not really, because:

One company wants to change the privacy-focused technology landscape

And that company is Purism. Not depending on the traditional Silicon Valley Venture Capital marketplace, and being a Social Purpose Company, Purism will never compromise its users' security, or their privacy, for profit.

Purism's crowdfunding campaigns on the Crowd Supply platform consistently achieved more than their funding goal. The latest, concerning the Librem 5 smartphone, raised over $2 million. And what makes the Librem 5 smartphone different from other phones? Several factors, such as the business model, an engaged community, and the fact that privacy and security are starting to be a great concern – and not just for everyday smartphone users, but for the government as well.

While the world continues to “opt-in” and share their every move, thought, comment, viewing whim, personal home climate preference, and family behavioral profile with the 2 or 3 companies running the world, there are people that find this repugnant.

Ultimately, desiring privacy does not mean having to go off the grid: a privacy-enhancing smartphone both empowers and enables its user.

 

Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the people—stand up for our digital rights, where you place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post The Librem 5 Smartphone in Forbes appeared first on Purism.

13 August, 2019 05:40PM by Purism

hackergotchi for Cumulus Linux

Cumulus Linux

Exploring Batfish with Cumulus – Part 2

In Part 1 of our look into navigating Batfish with Cumulus, we explored how to get started with communicating with the pybatfish SDK, as well as getting some basic actionable topology information back. With the introduction out of the way, we’re going to take a look at some of the more advanced use cases when it comes to parsing the information we get back in response to our queries. Finally, we’re going to reference an existing CI/CD pipeline, where templates are used to dynamically generate switch configuration files, and see exactly where and how Batfish can fit in and aid in our efforts to dynamically test changes.

For a look under the covers, the examples mentioned in this series of posts are tracked in "https://gitlab.com/permitanyany/cldemo2".

Enforcing Policy

As you may remember, in Part 1 we gathered the expected BGP status of all our sessions via the bgpSessionStatus query and added some simple logic to tell us when any of those sessions would report back as anything but “Established”. Building on that type of policy expectation, we’re going to add a few more rules that we want to enforce in our topology.

For example:

  • “A leaf switch should only peer with a spine switch”
  • “All spine switches should use the same BGP AS”
  • “Leaf switches that are part of an MLAG pair should use the same BGP AS”

This list can easily grow or change based on the topology design and how granular you want to get, but it drives home the point that we can make sure that any change to the environment will not violate these expectations.

As a refresher, here is what our BGP information looks like and the data we’ll be parsing from it.

Looking at the first requirement, we want to start with the Node column and view what the corresponding Remote_Node is, when querying for bgpSessionStatus.

After importing the pybatfish libraries and initializing the Batfish snapshot, we iterate through the Node column. We’re looking for values that start with “leaf” and checking to see if the corresponding Remote_Node value contains “spine” in the name. If there’s no matching spine neighbor, we raise an exception. The reason we choose to go the exception route (instead of just printing the message) is because it later helps us properly identify the script’s exit code in our pipeline, and whether the result is a success or failure based on the data we were looking for.

Moving towards our next requirement, we want to make sure all of our spine switches are configured with the same AS number (65020 in this case).

Taking a similar approach, we focus on the nodes that have “spine” in the name. Once those are identified, we iterate through them and make sure their corresponding Local_AS values match 65020.

Looking at the third requirement, we want to be able to make sure that leaf nodes in the same rack (aka MLAG peers) have the same AS number. To do that, we need to jump through a couple of hoops to figure out how to identify that two leaf switches are a pair. Our logic here while parsing the nodes is: if a switch name ends with an even number, we assume its peer will be the same switch number minus one. Likewise, for switches ending in odd numbers, the peer is assumed to be the switch number plus one.

As seen in the below output, we’re able to confirm that the leaf AS numbers match our expectations.

Bringing all 3 of these tests together, we can now lay the foundation of what we’d like to run as a test with every change to our pipeline.

Continuous Integration Pipeline

Now that we're ready to handle the testing aspect of our network changes, before we jam our script into a pipeline, it might be worthwhile to review what a typical continuous integration workflow looks like. Keep in mind, however, that this is a broad topic which would take numerous separate articles to fully set up and walk through, so I'll stick to a high-level explanation here and defer to existing online resources for the rest. If the Gitlab repository link at the top of the post doesn't make much sense to you and you need help getting started, leave a comment at the bottom of this post.

Using Gitlab as the CI tool of choice and Ansible to push out configurations, the overall goal of this pipeline is to define variable files per device or group and have a template that will read those variables in and render out switch configurations, which are finally pushed to the devices themselves. Below is a snippet of the leaf01 specific variables.

These variables are then fed into a switch configuration template that will generate a config specific to a device it is reading variables in from.

An Ansible playbook ultimately sends these rendered configurations to the switches and configuration changes are pushed.

All of this code is version controlled, and Gitlab reacts any time someone pushes a change to the repository, so that the Ansible playbook runs and pushes out the necessary changes automatically. This behavior is controlled by a Gitlab-specific configuration file, “.gitlab-ci.yml”, which defines one stage so far, “deploy”.

When one of these deploys occurs, they can be tracked in Gitlab to find out whether the latest commit and code push succeeded or failed.

Integrating Batfish

Now that you have an idea of what a pipeline workflow looks like, let’s add the Batfish testing component. Overall, the “gitlab-ci” file will dictate what scripts we’ll be running to test our changes and in what order they will be executed. The important aspect of this order is that if a certain stage fails, the workflow will be interrupted and subsequent stages will not be executed.

Our proposed workflow should look like the following:

  1. Change is made to a device variable and configuration is generated.
  2. Batfish looks at the “new” configuration and runs its set of tests against it
  3. Assuming those tests pass, the configuration is then pushed to the switches via Ansible

As you may remember from Part 1, Batfish currently only supports looking at Cumulus configurations in an NCLU format, which poses a problem for us, since the templatized configurations we’re generating are using Linux flat files. While Batfish will likely support parsing flat files soon, in the interim we have to think of a way to convert these configurations from flat files to NCLU on the fly. To do this, we’re going to create a Cumulus VX test switch in this topology that we’ll use for the sole purpose of converting configs from flat files to NCLU and sending them back to Batfish for analysis.

Our updated workflow will now be the following:

  1. Change is made to a device variable and configuration is generated.
  2. Configuration is sent to test switch and applied (one switch at a time).
  3. Test switch sends NCLU generated configuration back to Batfish host.
  4. Batfish looks at the “new” configuration and runs its set of tests against it.
  5. Assuming those tests pass, the configuration is then pushed to the switches via Ansible.

As you can see in the updated “gitlab-ci” file, we introduced 2 new stages in our pipeline. We have also split the configuration generation portion of the playbook from the configuration deployment portion and inserted the testing phase in the middle. Our testing phase contains 2 Python scripts: the one we came up with at the beginning of this post and the one from the end of Part 1.

Let’s now go ahead and make a sample config change that we do not expect to violate any of our testing policies. We’ll change the description of swp1 on leaf01 and commit it to the Gitlab repository.
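From the operator's side that change is just an ordinary commit and push (the variable file path here is hypothetical and depends on how the repository lays out its host variables):

git add host_vars/leaf01.yml
git commit -m "leaf01: update swp1 description"
git push origin master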

We can see below that our stages are in the process of running.

Looking at the specific job in the Gitlab pipeline, we see that Ansible detected a change on leaf01 when generating the interfaces config file.

Let’s now introduce a failure that should violate our testing policies. We’re going to change the BGP AS number of leaf02 to 65050 (leaf01 is 65011).

Looking at the pipeline run status, we see a failure.

Digging further into the reason, the failure occurred at the testing phase (as we’d expect). The logs of the run point us exactly to where we expect.

Our script threw an exception, and the config push stage of the pipeline was not reached, due to the failure in the test stage.

Conclusion

Hopefully, seeing Batfish’s place in a real-life workflow helped connect some dots about how a testing methodology fits into a modern datacenter network. You can now envision how this type of pre-change testing can add a level of repeatability and confidence to the practice of treating your network infrastructure as code.

13 August, 2019 04:26PM by Anthony Miloslavsky

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: KDE.org Applications Site

I’ve updated the kde.org/applications site so KDE now has web pages and lists the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app but has no menu entry) and KDialog (another GUI app but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great place to be a home for your project and this is an important part of that.

 

13 August, 2019 02:00PM

Colin King: Monitoring page faults with faultstat

Whenever a process accesses a virtual address where there isn't currently a physical page mapped into its process space then a page fault occurs.  This causes an interrupt so that the kernel can handle the page fault.  

A minor page fault occurs when the kernel can successfully map a physically resident page for the faulted user-space virtual address (for example, accessing a memory resident page that is already shared by other processes).   Major page faults occur when accessing a page that has been swapped out or accessing a file backed memory mapped page that is not resident in memory.

Page faults incur latency in the running of a program, major faults especially so because of the delay of loading pages in from a storage device.

The faultstat tool makes it easy to monitor page fault activity and find the most active page-faulting processes. Running faultstat with no options will dump the page fault statistics of all processes, sorted in major+minor page fault order.

Faultstat also has a "top"-like mode: invoking it with the -T option will display the top page-faulting processes, again in major+minor page fault order.


The Major and Minor  columns show the respective major and minor page faults. The +Major and +Minor columns show the recent increase of page faults. The Swap column shows the swap size of the process in pages.

Pressing the 's' key will switch through the sort order. Pressing the 'a' key will add an arrow annotation showing page fault growth change. The 't' key will toggle between cumulative major/minor page total to current change in major/minor faults.

The faultstat tool has just landed in Ubuntu Eoan and can also be installed as a snap. The source is available on GitHub.
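A quick way to try it, assuming the snap is published under the same name as the tool:

sudo snap install faultstat
sudo faultstat -T     # "top" mode; running as root helps it see every process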

13 August, 2019 11:14AM by Colin Ian King (noreply@blogger.com)

August 12, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 591

Welcome to the Ubuntu Weekly Newsletter, Issue 591 for the week of August 4 – 10, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

12 August, 2019 11:07PM

Ubuntu Blog: Provisioning ESXi with MAAS: An overview

MAAS has supported provisioning ESXi since MAAS 2.5. However, MAAS 2.6 has expanded this support and provides new features that significantly improve the provisioning experience.

What is supported?

The support MAAS provides for provisioning an operating system varies depending on the operating system in use. Even though MAAS tries to support all its features on all operating systems, it is sometimes difficult for various technical reasons. In the case of ESXi we can only support what’s detailed below.

Networking

For networking, MAAS supports configuring the following:

  • Interfaces are named with the same names used in MAAS.
  • Interfaces will be configured based on the IP assignment mode in MAAS, namely:
    • DHCP
    • Auto assign
    • Static
  • MAAS also supports creating:
    • Aliases
    • VLAN interfaces
    • Bonds – MAAS maps bonds to NIC teaming as follows:
      • balance-rr – portid
      • active-backup – explicit
      • 802.3ad – iphash (MAAS ignores the LACP rate and XMIT hash policy settings).
    • MTU – Allows modifying the MTU which is helpful when setting up jumbo frames.
  • Static routes

Storage

Storage support for ESXi allows for the selection of the root disk and the creation of datastores on physical disks. Other than this, MAAS cannot partition the disk as it does with other Linux-based operating systems.

Growing the default datastore

The MAAS ESXi image (as created by the Packer image generation scripts) is a DD image that comes with a partition table. During the deployment phase, MAAS will copy this image onto the selected boot disk.

Once it's copied, the only thing MAAS will do is grow the size of the default datastore to occupy the whole disk. This has been available since MAAS 2.5.

Creating new datastores

In MAAS 2.6, we expanded the storage support to allow the creation and modification of datastores. MAAS now allows deleting the default datastore, and/or creating and modifying new ones on other disks of the physical system.

This allows MAAS to better integrate with and deploy VMWare's ESXi, reducing the need to perform post-installation configuration or customisation of the storage capabilities.

Post-installation customisation

As with any other supported OS, MAAS allows post-installation customisation of the operating system. Unlike the other supported OSes, which rely on cloud-init (cloudbase-init for Windows) for post-installation customisation, ESXi customisation can only be done with Shell, Python or Perl scripts.

The customisation will continue to be available over ‘user_data’ when deploying ESXi over the API.
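As a minimal illustration (the exact commands are an assumption and may vary between ESXi releases), a shell script supplied as user_data could enable SSH on the freshly deployed host:

#!/bin/sh
# Hypothetical ESXi post-installation script passed via user_data:
# enable and start the SSH service so the host is reachable after deployment.
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh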

ESXi vCenter registration

MAAS now adds the ability to automatically register ESXi deployments into a vCenter. This is done in two ways: by baking the credentials into the images, or by providing the credentials to the machine via MAAS.

By baking the credentials inside the image, administrators can simply generate images that contain a special set of credentials that will be used to register with the provided vCenter host.

On the other hand, MAAS can now store vCenter credentials (which should be scoped down to only have permission to register new ESXi hosts). MAAS provides the machine with the credentials via the meta-data during the deployment process, and the hosts will use these to register themselves with the provided vCenter endpoint.

The post Provisioning ESXi with MAAS: An overview appeared first on Ubuntu Blog.

12 August, 2019 09:28PM

Ubuntu Blog: Julia and Jeff discover the ease of snaps at the Snapcraft Summit

Julia is an open source, high-level, general-purpose, dynamic programming language designed for numerical analysis and computational science, launched in 2012. It solves the “two language” problem: developers can use Julia for both computational and interactive work, instead of needing to work with two different languages which can often slow down development times. Use cases include machine learning and other branches of artificial intelligence. Julia’s Jeff Bezanson was at the 2019 Snapcraft Summit in Montreal and told us about Julia’s involvement with snaps and other package managers. 

Packages are an important part of the integrated environment that Julia offers, with ease of integration and performance optimisation being key features. Jeff discovered snaps through an invitation to the Snapcraft Summit, and they matched a key goal for Julia: using standard distribution channels across multiple Linux distributions. Snaps offered a solution to the problems that arose when using the package managers of different distributions, because of Julia's numerous dependencies on specific versions of other software. "Snaps seemed like exactly the answer as it lets us use whatever dependencies we need. It's a perfect distribution mechanism for us," Jeff states.

Naturally, before meeting snaps, Julia already had packaging solutions in place. Binary tarballs, git clones, and source tarballs are still packaging options that Julia continues to use. Binary tarballs work well, although it has taken the Julia team several years to make them reliable. As Jeff puts it, "making the snap was incredibly easy since we had already done a lot of work to make our builds relocatable. For any project that does that, adding a snap is no problem." Integration of snaps into the Julia build process has yet to be done. However, Jeff feels confident that this should be straightforward, thanks to the simple YAML file format of snaps.

Jeff also thinks that snaps can help increase Julia’s reach and discoverability, bringing Julia to people looking for technical or machine learning related software via the Snap Store. Initially, all Julia snaps were built in the edge channel as part of a conservative approach but have since moved to the stable channel.

We asked Jeff what improvements he would make to the snap system, based on his experience at the Summit. His answer was greater facility in snaps for handling multiple versions of applications for simultaneous installation. Julia users will often want multiple versions of the software installed, for example, because they want to test their libraries against each version. However to summarise the ease of starting with snaps, Jeff explains, “the build.snapcraft and Snapcraft website are so easy to use – you just click a button – and snaps are generated for multiple architectures. There’s really no reason not to try it.” 

While Jeff already appreciates the advantages of snaps, he pointed out that developers still need to be convinced that a snap is the best way to make their software easy to use and to increase its reach. The Snapcraft developer community has an important role in this, he thinks. This strong community support complements the inherent advantages of snaps of simplicity and flexibility. “Snaps are a simple system, it’s very flexible and understands the needs of different projects which helps accommodate and attract a wide variety of applications,” comments Jeff. 

Interacting with the developer community was also a high point of the Summit for Jeff – along with the “impressive supply of food and coffee.” He found friendly and helpful support from other developers to be available “around the clock”, and that they gave their time unstintingly to answer all of Julia’s questions. 

Install Julia as a snap here.
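At the time of writing the Julia snap appears to use classic confinement, so installing it from the command line would look something like this:

sudo snap install julia --classic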

The post Julia and Jeff discover the ease of snaps at the Snapcraft Summit appeared first on Ubuntu Blog.

12 August, 2019 10:52AM

Ubuntu Blog: Issue #2019.08.12 – The Kubeflow Machine Learning Toolkit

  • Kubeflow — a machine learning toolkit for Kubernetes – An introduction to Kubeflow from the perspective of a data scientist. This article quickly runs through some key components – Notebooks, Model Training, Fairing, Hyperparameter Tuning (Katib), Pipelines, Experiments, and Model Serving. If you are looking for a quick overview, give this article a go. The article also includes a key diagram showing how these components fit together.
  • Why is it So Hard to Integrate Machine Learning into Real Business Applications? – For teams just getting started, getting a trained model with sufficient accuracy is success. But that is just the starting point. There are many engineering and operational considerations that remain to be done. There are components that need to be built, tested and deployed. This post presents a real customer AI-based application, explaining some of the challenges, and suggests ways to simplify the development and deployment.
  • Further afield – Techniques to improve the accuracy of your Predictive Models – a look at few techniques to improve the accuracy of your predictive models. The code base is in R, but the principles are applicable to a variety of code bases and algorithms.
  • Use case spotlight – https://www.technologyreview.com/s/614043/instead-of-practicing-this-ai-mastered-chess-by-reading-about-it/. Instead of practicing, this AI mastered chess by reading about it. The chess algorithm, called SentiMATE, was developed by researchers at University College London. It evaluates the quality of chess moves by analysing the reaction of expert commentators. These learning techniques could have many other applications beyond chess – for instance, analysing sports, predicting financial activity, and making better recommendations.

The post Issue #2019.08.12 – The Kubeflow Machine Learning Toolkit appeared first on Ubuntu Blog.

12 August, 2019 08:00AM

August 11, 2019

Stephen Michael Kellat: Waiting On Race Judging

Previously I produced podcasts for almost six years in the early days of podcasting. I've had to step away from that for almost six years by dint of being a working fed. With how crazy things have gotten in the civil service, I have been having to assess making changes in life. One way to go would be to pick back up things I have had to set aside, such as media production like what was done under the aegis of Erie Looking Productions.

This weekend has been KCRW's Radio Race. Soundcloud has an entire playlist of 2018's participant tracks posted that can be listened to. The submission from Erie Looking Productions is posted to Soundcloud now. We were supposed to use Otter.ai as part of the competition as they happened to be a sponsor using machine learning for transcription services. I can't easily link to that and frankly was not amused with what it spit out in terms of machine recognition of my voice. How many different ways do you think the place name of Ashtabula could be mis-transcribed?

What are the next steps? The judges in California will be listening to three hundred some-odd entries this week. Finalists will be announced next week. In two weeks we'll know who the winners are. Although placing would be great, I'm just glad we were able to show that we could do what was essentially a cold restart after way too long in mothballs.

Between now and the end of September we have two short film projects to finish up. One will be going to the Dam Short Film Festival while the other will go to MidWest WeirdFest. These are cold restart efforts as well. A documentary short is in the works for the WeirdFest call, while what is essentially an experimental piece is being finished up for the Dam Short Film Festival in Boulder City. It is not as if we'll be shooting for a showing at the Ely Central Theatre on its single screen, but Boulder City is a suburb of Las Vegas with a wee bit more population than Ely.

We've also done some minor support work to back up a vendor presenting at the Music Along The River 2019 festival by helping them create nice marketing collateral.

A former Secretary of State and former Chief Justice of the United States, John Marshall, is quoted as saying that the power to tax is the power to destroy. That's still very true in the USA today. Slowly but surely I am trying to transition out of a job rooted in Marshall's view of destruction to something a bit more constructive.

Xubuntu and Ubuntu MATE have been there to make these recent efforts happen far more easily than I otherwise thought possible. I need to give more back to the team. There are just a few more barriers that have to be knocked down first.

11 August, 2019 06:43PM

August 10, 2019

hackergotchi for Xanadu developers

Xanadu developers

Fixing error 1146 and the controluser warning in phpMyAdmin

When phpMyAdmin shows error 1146 and/or a warning related to controluser, just open the file /etc/phpmyadmin/config.inc.php with your favourite editor and change the following:

$cfg['Servers'][$i]['controluser'] = $dbuser;
$cfg['Servers'][$i]['controlpass'] = $dbpass;

So that it reads as follows:

$cfg['Servers'][$i]['controluser'] = ''; //$dbuser;
$cfg['Servers'][$i]['controlpass'] = ''; //$dbpass;

Save the file, restart the web server, and that's it.
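
For example, on a Debian/Ubuntu-style setup where phpMyAdmin is served by Apache (an assumption here; adjust the service name if you use nginx or another web server), the restart could look like this:

# restart the web server so phpMyAdmin picks up the edited config.inc.php
sudo systemctl restart apache2

After the restart, reloading the phpMyAdmin page should no longer show the controluser warning.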

I hope this information is useful to you. Cheers…

10 August, 2019 03:13PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: On The Eve of Radio Race 2019

Coming up this weekend is KCRW's 7th Annual 24-Hour Radio Race. The last communication I have had from the organizers is that nearly 300 teams have registered. The event is not limited to the United States; historically, teams from Australia, New Zealand, Canada, and the United Kingdom have also participated. Teams will be judged on their creativity, storytelling skills, technical skills, and incorporation of the theme in producing a radio feature that does not exceed four minutes in length. What makes this a race is that the theme will not be announced to participants until 1700 UTC on Saturday, August 10, 2019. Participants then have 24 hours to produce their pieces and submit them through the designated means for consideration. KCRW is a National Public Radio affiliate based in Santa Monica.

I have put together a squad from Erie Looking Productions to participate in the event. We are as ready as we can be. I will be preaching on Sunday morning as part of the church's domestic mission outreach, but that shouldn't cut into production. I am used to having to prepare various production pieces, such as sermons, on short notice, so I have been operating for some time in the paradigm this sort of event requires, I hope.

What tools will be in play? More than likely a mix of LaTeX, Mousepad, Audacity, ffmpeg, Firefox, and countless others from the repositories, while we may end up trying out the imaginary-teleprompter snap this time around. Also in play will be more traditional microphones, mixers, and quite possibly honest-to-goodness tape recorders, depending upon how we structure the production plan.

For now, I get to wait. I originally started writing this blog post Friday night and apparently fell asleep at the keyboard. I won't be nodding off during this event.

10 August, 2019 01:37PM

August 09, 2019

hackergotchi for Grml developers

Grml developers

Michael Prokop: Bookdump 01/2019

Photo of the bookshelf

My goal for 2019 was to read at least 24 books, i.e. an average of two books per month. I reached that goal earlier than expected, in early/mid August. Unfortunately I lack a bit of time/motivation and practice for more detailed book reviews, and if I don't write things down right away, I also lose track of the books I have already read. Hence this attempt in the form of a bookdump, modelled on those I know and appreciate from, for example, Julius Plenz.

So here is an interim summary of the books I have read so far this year:

Der Preis der Macht, by Lou Lorenz-Dittlbacher. The author is known, among other things, from ZiB2/ORF; her book gives an interesting insight into the lives of top female politicians. Exciting interviews, well and fluently written, worth reading.

Neujahr, by Juli Zeh. The story of Henning and his panic attacks was my entry into Juli Zeh's world, and it prompted me to get another book of hers right away. Absolutely worth reading.

Die 5 Dysfunktionen eines Teams, by Patrick Lencioni. A loan from a neighbour (the only book missing from the photo above). It is about trust, conflict, commitment, accountability, focus on results,… A lot of stating the obvious and an American style; it is written somewhat like a novel, but it cannot keep up with, say, “The Phoenix Project” by Gene Kim. Still, it reads quickly in an evening, and I did take away a few thoughts to reflect on.

Schluss mit Schuld (Unsere Reise zum Holocaust und zurück), by Lisa Gadenstätter and Elisabeth Gollackner. Lisa Gadenstätter is also known from ORF, and there is a DOKeins film accompanying the book, which I watched after reading it. Interesting conversations with contemporary witnesses, and worth reading.

Feministin sagt man nicht, by Hanna Herbst. A clever feminist book.

weg, by Doris Knecht. The story of a missing daughter, her separated parents and different ways of life. The at least three spelling mistakes will hopefully be a thing of the past from the second edition onwards, which hopefully will come.

Jesolo, by Tanja Raich. A depressive underlying mood and definitely worth reading, but you should probably not read it, as I did, while on holiday in Jesolo.

Oma lässt grüßen und sagt, es tut ihr leid, by Fredrik Backman. A lovely book showing that you don't always have to be like everyone else, and that perhaps not everyone is who they appear to be at first glance.

Alte weisse Männer, by Sophie Passmann. Interviews with Sascha Lobo, Christoph Amend, Robert Habeck, Kai Diekmann, Micky Beisenherz and others. A quick read that is thought-provoking in places.

Plattform and Elementarteilchen, by Michel Houellebecq. I finally wanted to venture into Houellebecq's world, and even though his writing is exhaustingly obscene in places, I found both books very worth reading (thanks to Bernd Haug for advising me on the right order of the books :)).

Alle Toten fliegen nach Amerika, by Joachim Meyerhoff. A wonderful book; I definitely want to read the other parts as well.

Die Liebe zur Zeit des Mahlstädter Kindes, by Clemens J. Setz. Clemens Setz, with his “Die Stunde zwischen Frau und Gitarre” (a fantastic book!), got me back into reading books regularly, and I simply had to get more of his work. The things I appreciate so much about Clemens Setz keep shining through, but for me it cannot match “Die Stunde zwischen Frau und Gitarre”, mainly because I could not really connect with the format of the individual short stories. Nevertheless worth reading.

Gruber geht, by Doris Knecht. The book was my introduction to Doris Knecht's writing style; no really lasting impression, but I enjoyed reading it.

Warum französische Kinder keine Nervensägen sind, by Pamela Druckerman. I discovered this book through an interview with the author about her new book (“Vierzig werden à la parisienne: Hommage ans Erwachsensein”). Even if you cannot identify with all the approaches and suggestions, I found the book worth reading and can recommend it to any parent.

Quasikristalle, by Eva Menasse. I already knew and appreciated Eva Menasse's style from her book “Tiere für Fortgeschrittene”, and this book did not disappoint me either. The life of (Ro)Xane Molin, a bad loser from a middle-class family, is told from different perspectives in 13 chapters. It was not always easy for me to keep track of the names (who with whom, who is relevant,…) and there were a few weaker chapters, but some of the others were all the more stunning.

Meine wundervolle Buchhandlung, by Petra Hartlieb. I became aware of her and her bookshop through an interview with Mrs Hartlieb in Der Standard. The small-format book is a lovely, quick read and gives a small insight into the hard life of a small business owner running a bookshop.

Kurz & Kickl, ihr Spiel mit der Macht, by Helmut Brandstätter. I actually did not want to buy the book after all the hype it got in the media, but a spontaneous purchase on holiday in one of my favourite bookshops was stronger than me. Not particularly enlightening for me, even if it was historically interesting in places, and the book apparently had to get to the printing press extremely urgently: none of my 2019 books had as many spelling mistakes as this one.

Supergute Tage oder Die sonderbare Welt des Christopher Boone, by Mark Haddon. The book was mentioned in a literature programme by Dirk Stermann as his and his daughter's favourite book. A beautifully written insight into the inner life of an autistic boy; I will certainly put it on my children's bedside table some day.

Die Daten, die ich rief, by Katharina Nocun. I watched the video of her 35C3 talk “Archäologische Studien im Datenmüll” and inherited the book from a brother. It reads easily and fluently and offers a good introduction to the topic, so you can happily hand it to non-techies as well.

Liebe Mama, ich lebe noch!, by Ernst Gelegs. Discovered via Erlesen on 28 May 2019. The roughly 100 letters of a soldier to his mother and wife give an insight into the events of the war. Touching and worth reading.

Gebrauchsanweisung für Israel und Palästina, by Martin Schäuble. This book was intended as an introduction in case I travel to the region in the course of DebConf20. It covers housing, everyday life, food and drink as well as culture and the Israeli-Palestinian conflict, without the author trying to take sides too strongly. Well written, with a few tips that I noted down.

Ich und die Anderen, by Isolde Charim. A reading recommendation from Gregor Herrmann that I can only pass on. The philosopher Isolde Charim is known, among other things, from her columns in the Falter, and she writes cleverly about the pluralisation of our society. Not easy fare, but absolutely worth reading.

Vater unser, by Angela Lehner. Discovered in Der Standard's Album section on 8 June 2019. The book is about the first-person narrator Eva, who is admitted to the psychiatric centre of the Otto Wagner Hospital. Sensational phrasing and wonderful observations of everyday life; I felt as much at home as I do with Clemens Setz. A fantastic book. I hope there is more to come from Angela Lehner.

PS: My bookshelf is still well stocked with unread books, but if you have reading recommendations for me, please feel free to send them to e.g. bookdump (at) michael-prokop.at

09 August, 2019 09:32PM

hackergotchi for Maemo developers

Maemo developers

calibDB: easy camera calibration as a web-service

Camera calibration just got even easier. The pose calibration algorithm mentioned here is now available as a web-service.

This means that calibration is no longer restricted to a Linux PC – you can also calibrate cameras attached to Windows/OSX machines and even mobile phones.
Furthermore, you will not have to calibrate at all if your device is already known to the service.
The underlying algorithm ensures that the obtained calibrations are reliable and can therefore be shared between devices of the same series.

Aggregating calibrations while providing on-the-fly calibrations for unknown devices is what forms the calibDB web-service.

In the future we will make our REST API public so you can transparently retrieve calibrations for use with your computer vision algorithms.
This will make them accessible to a variety of devices, without you having to worry about the calibration data.


09 August, 2019 04:13PM by Pavel Rojtberg (pavel@rojtberg.net)

Beyond the Raspberry Pi for Nextcloud hosting

When using Nextcloud it makes some sense to host it yourself at home to get the maximum benefit of having your own cloud.

If you would use a virtual private server or shared hosting, your data would still be exposed to a third party and the storage would be limited as you would have to rent it.

When setting up a server at home, one is tempted to use a Raspberry Pi or a similar ARM based device. Those are quite cheap and consume very little power. Especially the latter property is important, as the machine will run 24/7.

I was tempted as well and started my self-hosting experience with ARM based boards, so here are my experiences.

Do not use a Raspberry Pi for hosting

Actually this is true for any ARM based board. As for the Pi itself, only the most recent Pi 4B has a decent enough CPU and enough RAM to handle multiple PHP requests (WebCAL, Contacts, WebDAV) from different clients without slowdown.
Also, only with the Pi 4B can you properly attach storage over USB 3.0 – previously, transfer rates were limited by the USB 2.0 bus.

One might argue that other ARM based computers are better suited. Indeed, you could get the decently equipped Odroid U3 long before the Pi 4B was available.
However, non-Pi boards have their own set of problems. Typically, they are based on a smartphone design (e.g. the Odroid U3 is essentially a Galaxy Note 2).

This makes them plagued by the same update issues as Android: these boards require a custom kernel that includes board specific patches, which means you cannot just grab an Ubuntu ARM build.
Instead you have to wait for a special image from the vendor – and just as with Android, at some point there will be no more updates.

Furthermore, ARM boards are actually not that cheap. While the Pi board itself is indeed inexpensive at ~60€, you have to add a power supply, housing and storage.

Intel NUC devices are a great choice

While everyone was looking at cheap and efficient ARM based boards, Intel has released some great NUC competitors.
Those went largely unnoticed, as typically only the high-end NUCs get news coverage – it is more impressive to report how much power one can cram into a small form factor.

However, one can obviously also put only a little power in there. More precisely, Intel's tablet Celeron chips range around 4-6 W TDP and thus compete with ARM boards power-wise (and are still an order of magnitude faster than a Raspberry Pi).

Device           Power (idle)    Power (load)
Odroid U3        3.7 W           9 W
GB-BPCE-3350C    4.5 W           9.6 W

Here, you get the advantages of the mature x86 platform, namely interchangeable RAM, interchangeable WiFi modules, SATA & M.2 SSD ports and, notably, upstream Linux compatibility (and Windows, for that matter).

As you might have guessed from the hardware choice above, I made the switch some time ago. On the one hand you only get reports for the by now outdated N3350 CPU – but on the other hand, that makes this a long-term evaluation.

Regarding the specific NUC model, I went with the Gigabyte GB-BPCE-3350C, which is less expensive (currently priced around 90€) than the Intel models.

Consequently, the C probably stands for “cheap”, as it lacks a second SO-DIMM slot and an SD-card reader. However, it is fanless and thus perfectly fine for hosting.

So after 2 Years of usage and a successful upgrade between two Ubuntu LTS releases, I can report that switching to the x86 platform was worth it.

If anything, I would choose a NUC model that also supports M.2/M-Key in addition to SATA, in order to build a software RAID-1.
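
As a rough sketch only, not something taken from this exact setup: with one SATA SSD and one M.2 SSD, such a mirror could be created with mdadm along these lines. The device names below are placeholders and the commands are destructive, so double-check them against your own system first.

# create a RAID-1 mirror from a SATA partition and an NVMe partition (hypothetical device names)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/nvme0n1p1
# put a filesystem on the new array, e.g. for the Nextcloud data directory
sudo mkfs.ext4 /dev/md0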


09 August, 2019 03:45PM by Pavel Rojtberg (pavel@rojtberg.net)

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Fixes for recent KDE desktop vulnerability

As you may have been made aware by some news articles, blogs, and social media posts, a vulnerability in the KDE Plasma desktop was recently disclosed publicly. This occurred without the KDE developers/security team or distributions being informed of the discovered vulnerability, or being given any advance notice of the disclosure.

KDE have responded quickly and responsibly and have now issued an advisory with a ‘fix’ [1].

Kubuntu is now working on applying this fix to our packages.

Packages in the Ubuntu main archive are having updates prepared [2], which will require a period of review before being released.

Consequently, if users wish to get fixed packages sooner, packages with the patches applied have been made available in our PPAs.

Users of Xenial (out of support, but we have provided a patched package anyway), Bionic and Disco can get the updates as follows:

If you have our backports PPA [3] enabled:

The fixed packages are now in that PPA, so all that is required is to update your system by your normal preferred method.
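
For example, if apt from the command line is your preferred method, that would typically be:

sudo apt update
sudo apt full-upgrade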

If you do NOT have our backports PPA enabled:

The fixed packages are provided in our UPDATES PPA [4].

sudo add-apt-repository ppa:kubuntu-ppa/ppa
sudo apt update
sudo apt full-upgrade

As a precaution, to ensure that the update is picked up by all KDE processes, users should at the very least log out and back in again after updating their system, restarting their entire desktop session.

Regards

Kubuntu Team

[1] – https://kde.org/info/security/advisory-20190807-1.txt
[2] – https://bugs.launchpad.net/ubuntu/+source/kconfig/+bug/1839432
[3] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports
[4] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/ppa

09 August, 2019 03:29PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Discovering Your Very Favourite Apps – The New App Suggestion System in Our App Center

At Univention, we are constantly thinking about how we can add benefit and value to our Univention Corporate Server (UCS) and App Center. One idea born from this is the app suggestion system, which I would like to introduce to you in this article. I would also like to give you some insight into how we work with hypotheses & tests in such projects at Univention. Plus, you will learn how, contrary to many other systems, we at Univention have given top priority to the protection of personal data.

The App Center – Colorful Variety and Growing Selection

The Univention App Center is the place to go in a UCS environment to install UCS extensions or third-party solutions for UCS. About 90 apps are currently in the App Center, a good 50 of which are from other software providers. For any system administrator, our App Center reduces the complexity of managing the IT environment, as installation, integration and updates of apps can be done easily and centrally there. With the constantly growing number of apps and the multitude of possible combinations, however, it is not that easy for administrators to keep track.
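
As a small illustration, apps can also be installed from the command line of a UCS system using the univention-app tool; the app ID used here (nextcloud) is just an example:

univention-app install nextcloud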

Inspiration & Idea – Finding the Right Apps for My Environment

Inspired by numerous webshops, which guide their users and buyers through the flood of offerings with suggestions under headings such as “Customers who have looked at this article have also looked at the following”, we wanted to bring a comparable suggestion system to the apps used in UCS environments.
To this end, we first identified the most common combinations of apps currently in use in UCS environments. From the installation of thousands of systems, we know that, for example, the Let’s Encrypt app is very widely used together with the Kopano Core, Nextcloud or ownCloud apps. Based on the apps that are already installed in the environment, the new mechanism will in future suggest further suitable apps to the system administrator.
Most suggestion systems of other services (e.g. for music on Spotify, for videos and films on YouTube or Netflix, for contacts in business networks or for friends in social media networks) include the behaviour of users in their selection process. For the App Center, we have deliberately set a different focus. Personal user behaviour is not recorded and analysed. Instead, the evaluation for the suggestions takes place anonymously and locally on your own UCS system, based on the suggestion list stored in UCS.

How did we Proceed? – Hypothesis & Test

In order to ascertain whether our suggestion system actually meets the needs of administrators and users, we began formulating a hypothesis and testing it using the “Build – Measure – Learn” cycle by Eric Ries from “The Lean Startup”.
The hypothesis: If a user indeed considers a proposed app interesting, they are more likely to install it than if they had come across it accidentally in the App Center. Our assumption was that the installation rate between the “view app” and “install app” actions is on average at least 20 percentage points higher for a proposed app than for the same app if it had not been proposed.

The test: with Errata Update 110 in May, the App Center frontend code was extended so that the App Center overview in UCS displayed a further “line” with app suggestions below the already installed apps. The test used a static list of app suggestions which, depending on the apps already installed, suggested among others Collabora, Kopano Core, Nextcloud, ownCloud, ONLYOFFICE, Let’s Encrypt, CUPS, DHCP Service, Fetchmail, the software installation monitor, Samba 4 and Self Service.

UCS 4.4-1: The new app suggestion

The basis for this list was an analysis of the top apps in the App Center, looking at how often other apps are installed alongside each of them.

The Result – Installation Rates Increased by Over 30 Percentage Points

Three weeks after the errata update was released, we evaluated the results. At that time about 30% of the active UCS systems had installed the errata update.
The installation rates of the proposed apps were on average 32 percentage points higher compared to the same apps when not proposed. Evidently, the administrators found the proposed apps so useful that they decided to install them as well.

Next Steps & Outlook – Suggestion System to be Expanded

At first, the suggestion system was deliberately kept simple and short. We will now continue to expand the original list of suggestions. In the medium term, our vision is for an automated suggestion system to support the creation of the list. We have a large database which, in its abundance, is laborious to review manually. Whether automation will help here is something we still have to scrutinize.
The suggestion system also makes it possible to focus more on apps with matching themes, such as groupware and messaging or file, sync & share and online office solutions.

I am interested in your thoughts on our App suggestion system.

  • What comes to your mind first regarding this topic?
  • What expectations or worries do you associate with it?
  • Have you already encountered the app suggestions?

I look forward to your comments under this article or via our feedback channel.

 

Der Beitrag Discovering Your Very Favourite Apps – The New App Suggestion System in Our App Center erschien zuerst auf Univention.

09 August, 2019 12:26PM by Nico Gulden

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Enhanced Livepatch desktop integration available with Ubuntu 18.04.3 LTS

Ubuntu 18.04.3 LTS has just been released. For the Desktop, newer stable versions of GNOME components have been included, as well as a new feature – Livepatch desktop integration.

As usual with LTS point releases, the main changes are a refreshed hardware enablement stack (newer versions of the kernel, xorg & drivers) and a number of bug and security fixes.

For those who aren’t familiar, Livepatch is a service which applies critical kernel patches without rebooting. The service is available as part of an Ubuntu Advantage subscription but also made available for free to Ubuntu users (up to 3 machines).  Fixes are downloaded and applied to your machine automatically to help reduce downtime and keep your Ubuntu LTS systems secure and compliant.  Livepatch is available for servers and desktop.

To enable Livepatch you just need an Ubuntu One account. The setup is part of the first login, or it can be done later from the corresponding software-properties tab.

Here is a simple walkthrough showing the steps and the result:

The wizard displayed during the first login will help you get signed in to Ubuntu One and enable Livepatch:


Clicking the ‘Set Up’ button invites you to enter your Ubuntu One information (or to create an account), and that’s all that is needed.

The new desktop integration includes an indicator showing the current status and notifications telling when fixes have been applied.


You can also get more details on the corresponding CVEs from the Livepatch configuration UI


You can always hide the indicator using the checkbox if you prefer to keep your top panel clean and simple.
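
If you prefer the terminal, the same information is also available from the Livepatch client itself (assuming the canonical-livepatch client is installed, which the desktop setup takes care of):

canonical-livepatch status --verbose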

Enjoy the increased security in between reboots!


The post Enhanced Livepatch desktop integration available with Ubuntu 18.04.3 LTS appeared first on Ubuntu Blog.

09 August, 2019 10:56AM

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: Lubuntu 18.04.3 Released!

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.3 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]

09 August, 2019 12:20AM

August 08, 2019

hackergotchi for Ubuntu

Ubuntu

Ubuntu 18.04.3 LTS released

The Ubuntu team is pleased to announce the release of Ubuntu 18.04.3 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

Like previous LTS series, 18.04.3 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures and is installed by default when using one of the desktop images.

Ubuntu Server defaults to installing the GA kernel; however you may select the HWE kernel from the installer bootloader.
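
On an already installed system, the HWE stack can also be pulled in later with apt; the metapackage name below is the one commonly used for 18.04, but please check the release notes for your flavour and architecture:

sudo apt update
sudo apt install --install-recommends linux-generic-hwe-18.04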

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS.

Kubuntu 18.04.3 LTS, Ubuntu Budgie 18.04.3 LTS, Ubuntu MATE 18.04.3 LTS, Lubuntu 18.04.3 LTS, Ubuntu Kylin 18.04.3 LTS, and Xubuntu 18.04.3 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Base. All the remaining flavours will be supported for 3 years.

To get Ubuntu 18.04.3

In order to download Ubuntu 18.04.3, visit:

http://www.ubuntu.com/download

Users of Ubuntu 16.04 will be offered an automatic upgrade to 18.04.3 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/BionicUpgrades
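
On servers, or if you prefer a terminal, the same upgrade is typically started with the release upgrader (this assumes update-manager-core is installed and Prompt=lts is set in /etc/update-manager/release-upgrades, which is the default on LTS):

sudo apt update && sudo apt upgrade
sudo do-release-upgrade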

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 18.04.3 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net
http://lists.ubuntu.com/mailman/listinfo/ubuntu-users
http://www.ubuntuforums.org
http://askubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Aug 8 14:01:59 UTC 2019 by Adam Conrad, on behalf of the Ubuntu Release Team

08 August, 2019 11:47PM by guiverc

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastien Bacher: Ubuntu 18.04.3 LTS is out, including GNOME stable updates and Livepatch desktop integration

Ubuntu 18.04.3 LTS has just been released. As usual with LTS point releases, the main changes are a refreshed hardware enablement stack (newer versions of the kernel, xorg & drivers) and a number of bug and security fixes.

For the Desktop, newer stable versions of GNOME components have been included as well as a new feature: Livepatch desktop integration.

For those who aren’t familiar, Livepatch is a service which applies critical kernel patches without rebooting. The service is available as part of an Ubuntu Advantage subscription but is also made available for free to Ubuntu users (up to 3 machines). Fixes are downloaded and applied to your machine automatically to help reduce downtime and keep your Ubuntu LTS systems secure and compliant. Livepatch is available for your servers and your desktops.

Andrea Azzarone worked on desktop integration for the service and his work finally landed in the 18.04 LTS.

To enable Livepatch you just need an Ubuntu One account. The setup is part of the first login, or it can be done later from the corresponding software-properties tab.

Here is a simple walkthrough showing the steps and the result:

The wizard displayed during the first login includes a Livepatch step that will help you get signed in to Ubuntu One and enable Livepatch:

Clicking the ‘Set Up’ button invites you to enter your Ubuntu One information (or to create an account), and that’s all that is needed.

The new desktop integration includes an indicator showing the current status and notifications telling when fixes have been applied.

You can also get more details on the corresponding CVEs from the Livepatch configuration UI

You can always hide the indicator using the toggle if you prefer to keep your top panel clean and simple.

Enjoy the increased security in between reboots!

 

 

 

08 August, 2019 07:32PM

hackergotchi for Cumulus Linux

Cumulus Linux

Kernel of Truth season 2 episode 12: Innovation in the data center

Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Cast Box and Stitcher!

Click here for our previous episode.

In this podcast we have an in-depth conversation about the different types and levels of innovation in the data center and where we see it going. Spiderman, aka Rama Darbha, and host Brian O’Sullivan are joined by a new guest to the podcast, VP of Marketing Ami Badani. They share that while innovation in the data center may not look sexy to anyone outside of network engineers, in reality there has been a huge paradigm shift in the way data centers have been built and operated over the last 3 years. So what does that mean? How is automation involved in this conversation? Listen here to find out.

Guest Bios

Brian O’Sullivan: Brian currently heads Product Management for Cumulus Linux. For 15 or so years he’s held software Product Management positions at Juniper Networks as well as other smaller companies. Once he saw the change that was happening in the networking space, he decided to join Cumulus Networks to be a part of the open networking innovation. When not working, Brian is a voracious reader and has held a variety of jobs, including bartending in three countries and working as an extra in a German soap opera. You can find him on Twitter at @bosullivan00.

Rama Darbha: Rama is a Senior Consulting Engineer at Cumulus Networks, helping customers and partners optimize their open networking strategy. Rama has an active CCIE #22804 and a Masters in Engineering and Management from Duke University. You can find him on LinkedIn here.

Ami Badani: Ami is VP of Marketing at Cumulus Networks responsible for all aspects of marketing from messaging and positioning, demand generation, partner marketing, and amplification of the Cumulus Networks brand. She has a decade’s worth of experience at various Silicon Valley technology companies. Most recently, Ami served as a key marketing leader at Instart Logic, helping to triple its sales growth and lead the disruption of the application delivery market. Prior to Instart Logic, she was Head of Strategic Marketing at Cisco where she co-founded and incubated the Internet of Things Platform as a Service business and launched the platform out of stealth mode. She began her career in various facets of investment banking and asset management at Goldman Sachs and JPMorgan. Ami has an MBA from University of Chicago, Booth School of Business and a BS from University of Southern California.

08 August, 2019 06:55PM by Katie Weaver

hackergotchi for SparkyLinux

SparkyLinux

Sparky 2019.08 Special Editions

There are new live/install ISO images of the SparkyLinux 2019.08 “Po Tolo” special editions available to download: GameOver, Multimedia & Rescue. Sparky “Po Tolo” follows the rolling release model and is based on Debian testing “Bullseye”.

GameOver Edition features a very large number of preinstalled games, useful tools and scripts. It’s targeted to gamers.

Multimedia Edition features a large set of tools for creating and editing graphics, audio, video and HTML pages.

The live system of Rescue Edition contains a large set of tools for scanning and fixing files, partitions and operating systems installed on hard drives.

Changes:
– system upgraded from Debian testing repos as of August 7, 2019
– Linux kernel 4.19.37
– Calamares installer 3.2.11
– new Sparky6 theme
– new Tela icon set
– refreshed desktop look

The Sparky 2019.08 Special Editions are available as follows:
– GameOver 64bit Xfce
– Multimedia 64bit Xfce
– Rescue 32 & 64bit Openbox

System reinstallation is not required.
If you have Sparky rolling installed:
• make sure your OS uses Debian ‘testing’ repos
• change Sparky repos to ‘testing’ if your OS uses ‘stable’ repo
• make full system upgrade:
sudo apt update
sudo apt install sparky6-apt
sudo apt update
sudo apt full-upgrade

New rolling iso images of Sparky “Po Tolo” can be downloaded from the download/rolling page.

Sparky Multimedia Edition

 

08 August, 2019 04:07PM by pavroo

hackergotchi for Purism PureOS

Purism PureOS

Curbing Harassment with User Empowerment

User empowerment is the best tool to curb online harassment

Online harassment is both a privacy and a security concern. We all know the story of how someone (typically a woman, studies say) states their opinion online and is then harassed to the point of leaving the service (or worse). Using the infamous “with an opinion” hook, we can frame a user story that affects more than 50% of the population:

User story: I am a marginalized person with an opinion. I want to intercept online harassment, so that I can communicate safely with friends and strangers.

The truth is that a motivated mob can target anyone, marginalized or not. We would all benefit from effective anti-harassment tools.

Don’t rely on the operator

Many current and proposed solutions to stop or curb harassment rely on one or more of these methods:

  • Human content moderation. Typically volunteer or low-paid, and subject to burnout. A moderation team simply does not scale, and cannot moderate private messages (we define “private” as “end-to-end encrypted”).
  • Server-side tracking. Error-prone “algorithms”, with little or no transparency, regularly make mistakes. And once more, they cannot apply to private messages.
  • Shoot-first takedown laws that skip the deliberative process and are frequently abused.
  • Corporate censorship, or any of the above distorted by bottom line.

It is tempting to rely on a server-side solution, whether that means the machine itself or humans working on your behalf. This can work on tiny scales if you have a trusted friend with both technical and legal know-how, but in all other cases the issues are compounded. To mashup two misunderstood quotes:

You solved a harassment problem by ceding control to the service? Now you have two problems.

Empower the user

We suggest that user empowerment via client-side features is a more robust and safer approach. Potential design patterns include:

1. Client-side heuristics

Server-side solutions necessarily put power in the hands of a developer or sysadmin. By contrast, client-side heuristics put power in the hands of the user, including the power to turn them off. Privacy Badger is a great example of this in practice:

  • Fresh installations use rules generated by offline training.
  • Additional rules based on behavior-based heuristics.
  • Additional customization for experienced users.
  • No ads, no calling home, no tracking.
  • Turn it off, for example if you are researching trackers.

Moving forward we aim to enhance all Librem One clients with badger-like functionality. We believe that the majority of cases won’t require machine learning, and could be handled with simple heuristics:

Illustration of Librem One clients with privacy badger-like functionality

2. Safety mode

We can classify online correspondents into three groups:

  • Trusted contacts. People we talk to regularly, and trust.
  • Strangers. People we don’t know well, or don’t know at all.
  • Bad actors. People we don’t want to interact with, possibly based on the advice of a trusted contact.

Typically, we want to communicate with strangers online, so this should be possible by default. But if we are being actively harassed, we can assume that further messages from strangers are unsafe, and switch our account to “safety mode”–rejecting messages, invites and other interactions from strangers. We can rely on our trusted contacts for help and support, including passing on well-wishes from strangers.

At-risk individuals might choose to start their account in safety mode.

Trusted caretakers might maintain lists of bad actors, but trusting a caretaker should require very careful consideration: What is their governance model? What is their appeals process? Do they leak information about list recipients?

3. Crowd-sourced tagging for public content

In the specific case of public posts, we believe that public crowd-sourced tagging (aka, folksonomy) is a sustainable and fair replacement for human moderation, caretaker-lists and takedowns.

This approach takes moderation power out of the hands of a few sysadmins and corporate moderation teams, and grants it to all users equally. Users are free to decide which user-moderator they trust, and filter based on their tags–or skip moderation entirely.

😡: I pity the fool who can't butter their
    #toast! #onlydirectionisup

        😷: #hatespeech
        😒: #butterpolitics

👿: @😡 Shut up! My grandparents fought to
    butter side #down!

        🤩: #thoughtleader
        😒: #butterpolitics

😤: @😡 @👿 Well actually, you're ignoring
    the #margarine argument. You're such
    #lipidariantoastbros

        😒: #butterpolitics

😢: @😤 @😡 @👿 Why can't we all just get along?

        😡: #butterdowner
        👿: #butterupper
        😤: #lipidariantoastbros
        😒: #butterpolitics
        🤩: #thoughtleader

Where to, from here?

These are only a few of the high-level patterns we are considering as enhancements to all Librem One clients. Your Librem One subscription supports our team as we turn these patterns into a reality.

They build on the philosophy we’ve already outlined on our blog, under the “user empowerment” tag.

We look forward to reading more proposals from our friends and colleagues in the free software and anti-harassment communities. We are particularly interested in design patterns that honor our “no tracking” policy, and reliable (peer-reviewed) statistics that help prioritize use-cases. We are already looking at:

In the meantime, and whether you are a Librem One user or not, please refer to our stay safe guide. It’s quick and easy to read, just like our policy, and we keep it up-to-date with links to high-quality, world-audience resources.

Thanks for stopping by, stay safe, and stay tuned for more user empowerment news.

The post Curbing Harassment with User Empowerment appeared first on Purism.

08 August, 2019 02:25PM by David Seaward

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S12E18 – Pilotwings

This week we’ve been running Steam in the cloud via an NVIDIA SHIELD TV. We discuss whether we even need new distros or whether it’s more Linux apps we need. Plus we bring you some GUI love and go over all your feedback.

It’s Season 12 Episode 18 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Mattias Wernér are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Martin has been setting up Steam with Family view and library sharing in the “nvidia cloud” using the NVIDIA SHIELD TV
    • Mattias has been snapping strife.
  • We discuss creating new distros vs. creating new Linux apps and how do we advocate for more app development.

  • We share a GUI Lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image taken from Pilotwings published in 1989 for the Super Nintendo Entertainment System by Nintendo.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

08 August, 2019 02:00PM

Ubuntu Blog: Slow snap? Trace-exec to the rescue!

Slow applications are never fun. But not knowing why an application is not behaving correctly can be even more frustrating. A well-designed system that can diagnose performance or startup issues and inform the user about the problem goes a long way toward mitigating the frustration, and may even help resolve the root cause altogether.

Back in January, we reviewed several tools used to troubleshoot snap usage, including the strace functionality built into the snap binary. However, strace requires some knowledge of system internals, so it may not be the first choice for users or developers looking to quickly diagnose and profile the behavior of their snaps. From version 2.36 onwards, snapd ships with the --trace-exec run option. This convenient and friendly feature lists the slowest processes in the snap startup and runtime chain, which can help you pinpoint the source of the issue.

Trace-exec in action

Technical troubleshooting is like an onion – layered and complex. Which is why you always start with simple things first. Going back to strace, if you want to profile the execution of an application, you can run strace with the -c (summary) flag. This will give you the list of all the system calls, the percentage of CPU time spent in each, average time per system call, errors, and several other important details. At first glance, this can be a useful indicator, but it requires a keen eye, and the data is not shown on a per-process level.

Trace-exec offers a terser output, listing up to 10 slowest processes that ran and finished during the run of a snap. Typically, this will include the actual snap startup and some portion of its runtime. While not as comprehensive as strace, it can give you a good overview of all the commands, command wrappers and helper scripts used to bootstrap the snap.

Let’s look at a practical example with the VLC media player.

snap run --trace-exec vlc
Slowest 10 exec calls during snap run:
  0.007s snap-update-ns
  0.088s /snap/core/7270/usr/lib/snapd/snap-confine
  0.006s /usr/lib/snapd/snap-exec
  0.007s /usr/bin/getent
  0.006s /bin/mkdir
  0.006s /bin/mkdir
  0.007s /bin/mkdir
  0.160s /snap/vlc/1049/bin/desktop-launch
  0.035s /snap/vlc/1049/usr/bin/glxinfo
  0.042s /snap/vlc/1049/bin/vlc-snap-wrapper.sh

Total time: 2.295s

From this run, we can see that a portion of time is dedicated to the snap setup, like the confinement. You also get the total time – but part of this number may also be the actual time the application runs, so it is not strictly the startup portion of the entire sequence. Nevertheless, it is a useful indicator, and we’ve discussed this in much greater detail in the I have a need, a need for snap article some time ago.

On its own, of course, the information will not resolve the problem – but it can be used by snap developers and application developers to optimize their tools. For instance, an enhancement or a bug fix could be implemented in snapd, or perhaps a developer may realize that certain libraries staged into the snap contribute the highest penalty to the startup and behavior of their software.

In comparison, with strace, you would see the following:

snap run --strace='-c' vlc
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ------------
53.40    4.041220        3419      1182         1 poll
34.21    2.589384        2281      1135        30 futex
  4.87    0.368380         251      1466        81 read
  4.61    0.348855        2474       141        56 wait4
  1.09    0.082512          15      5607      4457 open
0.55    0.041282       41282         1           rt_sigtimedw
  0.36    0.027104           4      6200      2616 stat
  0.09    0.006461           3      2536           mprotect
  0.08    0.006255          20       316           getdents
  0.08    0.006148           8       740       490 access
  0.07    0.005506           3      1853         6 close

  0.00    0.000000           0         2           capset
  0.00    0.000000           0         1           prlimit64
------ ----------- ----------- --------- --------- ------------
100.00    7.568409                 31057      7960 total

This output is not for the faint-hearted. You have no instant visibility into which processes contributed to which system call, and you also need some understanding of what each system call means. If you want to know more, you can always use the man 2 pages to learn more about specific calls, e.g.:

man 2 mprotect

Summary

Trace-exec complements the other troubleshooting features in snapd quite nicely. Alongside other assets in the snap ecosystem, it provides developers as well as users with the means to better understand why their applications sometimes do not perform quite as they should.

This valuable information, when available, also allows the snap teams to implement fixes and changes that will make the overall snap experience smoother and more enjoyable. If there are other features or capabilities you’d like to see introduced in snapd, please let us know by joining our forum for a discussion.

Photo by Marten Bjork on Unsplash.

The post Slow snap? Trace-exec to the rescue! appeared first on Ubuntu Blog.

08 August, 2019 01:39PM

August 07, 2019

Ubuntu Blog: Creating a ROS 2 CLI command and verb

Following our previous post on ROS 2 CLI (Command Line Interface), we will see here how one can extend the set of existing CLI tools by introducing a new command and its related verb(s).

As support for this tutorial, we will create a ‘Hello World’ example so that the new command will be hello and the new verb will be world. The material is readily available on github.

Compared to ROS 1, the ROS 2 CLI has been entirely re-designed in Python offering a clean API, a single entry-point (the keyword ros2) and more importantly for the topic at hand, a plugin-like interface using Python entry points.

This new interface allows one to easily extend the existing set of commands and verbs using a few boiler-plate classes and the actual implementation of our new tools.

Let’s get to it!

Setting up the package

First we will create a new ROS 2 python package and the necessary sub-folders:

$ cd ~/ros2_ws/src
$ mkdir ros2hellocli && cd ros2hellocli
$ mkdir command verb

While the ros2hellocli will be the root folder for this project, the command folder will contain a command extension point that allows ROS 2 CLI to discover the new command. Similarly, the verb folder will contain the verb extension point(s) which will hold the actual implementation of our new functionality.

But first, let us not forget to turn those sub-folders into Python packages:

$ touch command/__init__.py verb/__init__.py

Now that we have our project structure ready, we will set up the boiler-plate code mentioned earlier, starting with the classical package manifest and setup.py files.

$ touch package.xml

And copy the following,

<?xml version="1.0"?>
<package format="2">
  <name>ros2hellocli</name>
  <version>0.0.0</version>
  <description>
    The ROS 2 command line tools example.
  </description>
  <maintainer email="jeremie.deray@example.org">Jeremie Deray</maintainer>
  <license>Apache License 2.0</license>

  <depend>ros2cli</depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>

Then,

$ touch setup.py

And copy the following,

from setuptools import find_packages
from setuptools import setup

setup(
  name='ros2hellocli',
  version='0.0.0',
  packages=find_packages(exclude=['test']),
  install_requires=['ros2cli'],
  zip_safe=True,
  author='Jeremie Deray',
  author_email='jeremie.deray@example.org',
  maintainer='Jeremie Deray',
  maintainer_email='jeremie.deray@example.org',
  url='https://github.com/artivis/ros2hellocli',
  download_url='https://github.com/artivis/ros2hellocli/releases',
  keywords=[],
  classifiers=[
      'Environment :: Console',
      'Intended Audience :: Developers',
      'License :: OSI Approved :: Apache Software License',
      'Programming Language :: Python',
  ],
  description='A minimal plugin example for ROS 2 command line tools.',
  long_description="""The package provides the hello command as a plugin example of ROS 2 command line tools.""",
  license='Apache License, Version 2.0',
)

As those two files are fairly common in the ROS world, we skip detailing them and refer the reader to ROS documentation for further explanations (package manifest on ROS wiki).

When creating a new CLI tool, remember however to edit the appropriate entries such as name/authors/maintainer etc.
Note also that the package depends upon ros2cli, since it is meant to extend it.

Creating a ROS 2 CLI command

Now we shall create the new command hello and its command entry-point. First we will create a hello.py file in the command folder,

$ touch command/hello.py

and populate it as follows,

from ros2cli.command import add_subparsers
from ros2cli.command import CommandExtension
from ros2cli.verb import get_verb_extensions


class HelloCommand(CommandExtension):
    """The 'hello' command extension."""

    def add_arguments(self, parser, cli_name):
        self._subparser = parser
        verb_extensions = get_verb_extensions('ros2hellocli.verb')
        add_subparsers(
            parser, cli_name, '_verb', verb_extensions, required=False)

    def main(self, *, parser, args):
        if not hasattr(args, '_verb'):
            self._subparser.print_help()
            return 0

        extension = getattr(args, '_verb')

        return extension.main(args=args)

The content of hello.py is fairly similar to any other command entry-point.

With the new command being defined, we will now edit the setup.py file to advertise this new entry-point so that the CLI framework can find it. Within the setup() function in setup.py, we append the following lines:

  ...
  tests_require=['pytest'],
  entry_points={
        'ros2cli.command': [
            'hello = ros2hellocli.command.hello:HelloCommand',
        ]
    }
)

From now on, the ROS 2 CLI framework should be able to find the hello command,

$ cd ~/ros2_ws
$ colcon build --symlink-install --packages-select ros2hellocli
$ source install/local_setup.bash
$ ros2 [tab][tab]

Hitting the [tab] key will trigger the CLI auto-completion which will display in the terminal the different options available. Among those options should appear hello,

$ ros2 [tab][tab]
action            extensions        msg               run             topic
bag      ------>  hello  <------    multicast         security
component         interface         node              service
daemon            launch            param             srv
extension_points  lifecycle         pkg               test

Since the CLI framework can find hello, we should also be able to call it,

$ ros2 hello
usage: ros2 hello [-h]
                  Call `ros2 hello <command> -h` for more detailed usage. ...

The 'hello' command extension

optional arguments:
  -h, --help            show this help message and exit

Commands:

  Call `ros2 hello <command> -h` for more detailed usage.

It works!

Fairly simple so far isn’t it? Notice that the output shown in the terminal is the same as calling ros2 hello --help.

Creating a ROS 2 CLI verb

Alright, now that we have successfully created the new command hello, we will create its associated verb world. Notice that the following is transposable to virtually any command.
Just like commands, verbs rely on the same ‘entry-point’ mechanism, so we create a world.py file in the verb folder,

$ cd ~/ros2_ws/src/ros2hellocli
$ touch verb/world.py

and populate it as follows,

from ros2cli.verb import VerbExtension


class WorldVerb(VerbExtension):
    """Prints Hello World on the terminal."""

    def main(self, *, args):
        print('Hello, ROS 2 World!')

As previously, we have to advertise this new entry-point in setup() as well by appending the following,

  ...
  tests_require=['pytest'],
  entry_points={
        'ros2cli.command': [
            'hello = ros2hellocli.command.hello:HelloCommand',
        ],
        'ros2hellocli.verb': [
            'world = ros2hellocli.verb.world:WorldVerb',
        ]
    }
)

The ROS 2 CLI framework should now be able to find the world verb,

$ cd ~/ros2_ws
$ colcon build --symlink-install --packages-select ros2hellocli
$ source install/local_setup.bash
$ ros2 hello [tab][tab]
world

and we should be able to call it,

$ ros2 hello world
Hello, ROS 2 World!

Et voilà! We successfully created a CLI command/verb duo.

Although this example is working fine, we will improve it a little in order to cover two more aspects of the CLI framework, the first one being handling user input arguments and the second being related to good practice.

We will start by creating an api Python package:

$ cd ~/ros2_ws/src/ros2hellocli
$ mkdir api
$ touch api/__init__.py

It will contain all of the factorized code: everything that one can turn into small, useful Python functions/classes for re-use, preventing code duplication. And that is precisely what we will do with our print call. In the api/__init__.py file, we will define the following functions,

def get_hello_world():
    return 'Hello, ROS 2 World!'

def get_hello_world_leet():
    return 'He110, R0S 2 W04ld!'

From there we will modify the WorldVerb class so that it calls one of the above functions by default and the other if the user passes a given flag to the CLI. To do so we modify the verb/world.py file as follows,

from ros2cli.verb import VerbExtension
from ros2hellocli.api import get_hello_world, get_hello_world_leet


class WorldVerb(VerbExtension):
    """Prints Hello World on the terminal."""

    def add_arguments(self, parser, cli_name):
        parser.add_argument(
            '--leet', '-l', action='store_true',
            help="Display the message in 'l33t' form.")

    def main(self, *, args):
        message = get_hello_world() if not args.leet else get_hello_world_leet()
        print(message)

Let us test this new option,

$ cd ~/ros2_ws
$ colcon build --symlink-install --packages-select ros2hellocli
$ ros2 hello world
Hello, ROS 2 World!
$ ros2 hello world --leet
He110, R0S 2 W04ld!

Isn’t it great?

So now that we have covered the basics of adding both a new ROS 2 CLI command and verb, how would you expand the hello command with a new universe verb?

Note that you may find many examples in the ros2cli GitHub repository to help you create powerful CLI tools. If you are not yet familiar with all the existing tools, have a look at the ROS 2 CLI cheat sheet we put together to help you get up to date.

Come join the discussion and tell us what new CLI tool you have developed!

The post Creating a ROS 2 CLI command and verb appeared first on Ubuntu Blog.

07 August, 2019 08:19PM