June 20, 2019

Ubuntu developers

Canonical Design Team: Vanilla Framework 2.0 upgrade guide

We have just released Vanilla Framework 2.0, Canonical’s SCSS styling framework, and – despite our best efforts to minimise the impact – the new features come with changes that will not be automatically backwards compatible with sites built using previous versions of the framework.

To make the transition to v2.0 easier, we have compiled a list of the major breaking changes and their solutions (when upgrading from v1.8+). This list is outlined below. We recommend that you treat this as a checklist while migrating your projects.

1. Spacing variable mapping

If you’ve used any of the spacing variables (they can be recognised as variables that start with $spv or $sph) in your Sass then you will need to update them before you can build CSS. The simplest way to update these variables is to find/replace them using the substitution values listed in the Vanilla spacing variables table.
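
If you have many Sass files, the substitutions can be scripted. A minimal sketch using sed ($spv-old and $spv-new are placeholder names; substitute each real pair from the spacing variables table):

```shell
# Placeholder names: $spv-old -> $spv-new; substitute the real pairs
# from the Vanilla spacing variables table.
echo 'padding: $spv-old 0;' | sed 's/\$spv-old/\$spv-new/g'

# To update a whole project in place, run one such command per pair:
# find . -name '*.scss' -exec sed -i 's/\$spv-old/\$spv-new/g' {} +
```

Run the find variant once per variable pair, then rebuild your CSS to confirm no unknown variables remain.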

2. Grid

2.1 Viewport-specific column names

Old class     New class
mobile-col-*  col-small-*
tablet-col-*  col-medium-*

2.2 Columns must be direct descendants of rows

Ensure .col-* are direct descendants of .row; this has always been the intended use of the pattern but there were instances where the rule could be ignored. This is no longer the case.

Additionally, any .col-*s that are not direct descendants will just span the full width of their container as a fallback.
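
A minimal sketch of the difference (the wrapper class name is illustrative):

```html
<!-- Valid in 2.0: columns directly inside the row -->
<div class="row">
  <div class="col-6">…</div>
  <div class="col-6">…</div>
</div>

<!-- Falls back to full width: a wrapper between the row and its columns -->
<div class="row">
  <div class="wrapper">
    <div class="col-6">…</div>
  </div>
</div>
```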

You can see an example of correcting improperly-nested column markup in this ubuntu.com pull request.

2.3 Remove any Shelves grid functionality

The framework no longer includes Shelves. Classes starting with prefix-, suffix-, push- and pull- are no longer supported; instead, arbitrary positioning on the new grid is achieved by stating an arbitrary starting column using the col-start- classes.

For example: if you want an eight-column container starting at the fourth column in the grid you would use the classes col-8 col-start-4.
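
The markup for that example might look like:

```html
<div class="row">
  <div class="col-8 col-start-4">
    Eight columns wide, starting at the fourth column
  </div>
</div>
```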

You can read full documentation and an example in the Empty Columns documentation.

2.4 Fixed-width containers with no columns

Previously, a .row with a single col-12, or a col-12 on its own, may have been used to display content in a fixed-width container. The nested solution adds unnecessary markup, and a lone col-12 will no longer work.

To make an element fixed width, use the .u-fixed-width utility, which does not need columns.
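
For example (the content inside is illustrative):

```html
<div class="u-fixed-width">
  <h2>Full-width heading</h2>
  <p>No .row or .col-* wrappers are needed.</p>
</div>
```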

2.5 Canonical global nav

If your website makes use of the Canonical global navigation module (if so, hello colleague or community member!), ensure that the global nav width matches the new fixed width (72rem by default). A typical implementation would look like the following HTML:

<script src="/js/global-nav.js"></script>
<script>canonicalGlobalNav.createNav({maxWidth: '72rem'});</script>

3. Renamed patterns

Some patterns have been renamed, and the number of classes required to use them has been reduced.

3.1 Stepped lists

We favour component names that sound natural in English to make the framework more intuitive. “List step” wasn’t a good name and didn’t explain its use very well, so we decided to rename it to the much more explicit “stepped list”.

To update the classes in your project, search and replace the following:

Old class name      New class name
.p-list-step        .p-stepped-list
.p-list-step__item  .p-stepped-list__item
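
As a sketch, a stepped list before and after the rename (the surrounding ol/li structure is illustrative; check the stepped list documentation for the full pattern):

```html
<!-- Before -->
<ol class="p-list-step">
  <li class="p-list-step__item">Install the package</li>
</ol>

<!-- After -->
<ol class="p-stepped-list">
  <li class="p-stepped-list__item">Install the package</li>
</ol>
```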

3.2 Code snippet

“Code snippet” was an ambiguous name so we have renamed it to “code copyable” to indicate the major difference between it and other code variants.

Change the classes in your code to the following:

Old class name   New class name
.p-code-snippet  .p-code-copyable

If you’ve extended the mixin then you’ll need to update the mixin name as follows:

Old mixin name     New mixin name
vf-p-code-snippet  vf-p-code-copyable

3.3 Tooltips

The tooltip class names remain the same, but you no longer need two classes to use the pattern, because the modified tooltips now include all required styling. Markup can be simplified as follows (the same applies to all tooltip variants):

Old markup: <button class="p-tooltip p-tooltip--btm-center" …>
New markup: <button class="p-tooltip--btm-center" …>

4. Breakpoints

Media queries have changed in line with a working group proposal. Ensure any local media queries are aligned with the new ones, and update any hard-coded media queries (e.g. in markup) to the new values, which can be found in the breakpoints documentation.

5. Deprecated components

5.1 Footer (p-footer)

This component has been removed entirely, with no direct replacement. Footers can be constructed with standard p-strip and row markup.

5.2 Button switch (button.p-switch)

Buttons shouldn’t be used with the p-switch component and are no longer supported. Use the much more semantic checkbox input instead.

5.3 Hidden, visible (u-hidden, u-visible)

These have been deprecated; the single visibility-controlling class is now u-hide (along with its more specific variants, described in the u-hide documentation).

5.4 Warning notification (p-notification--warning)

This name for the component has been deprecated; it is now only available as p-notification--caution.

5.5 p-link--no-underline

This was an obsolete modifier for a removed version of underlined links that used borders.

5.6 Strong link (p-link--strong)

This has been removed with no replacement as it was an under-utilised component with no clear usage guidelines.

5.7 Inline image variants

The variants p-inline-images img and p-inline-images__img have been removed and the generic implementation now supports all requirements.

5.8 Sidebar navigation (p-navigation--sidebar)

For navigation, the core p-navigation component is recommended. If sidebar-like functionality is still required, it can be constructed with the default grid components.

5.9 Float utilities (u-float-*)

To bring them in line with the naming conventions used elsewhere in the framework, u-float--right and u-float--left are now u-float-right and u-float-left. (One “-” is removed to make them first-level utilities rather than modifiers; this allows modifiers for screen sizes to be added later.)

6. (Optional) Do not wrap buttons or anchors in paragraphs

We have CSS rules in place to ensure that wrapped buttons behave as if they weren’t wrapped, and we would like to remove these rules as they are unnecessary bloat. To do that, we need to ensure buttons are no longer wrapped. This back-support is likely to be deprecated in future versions of the framework.

7. Update stacked forms to use the grid (p-form--stacked)

The hard-coded widths (25%/75%) on labels and inputs have been removed. This will break any layouts that use them, so please wrap the form elements in .row>.col-*.
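
A sketch of a stacked form row after the change (the 3/9 column split is illustrative, chosen to approximate the old 25%/75% widths):

```html
<form class="p-form p-form--stacked">
  <div class="row">
    <div class="col-3">
      <label for="full-name">Full name</label>
    </div>
    <div class="col-9">
      <input type="text" id="full-name" name="full-name">
    </div>
  </div>
</form>
```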

8. Replace references to $px variable

$px used to stand for 1px expressed as a rem value. This was used, for example, to calculate padding so that the padding plus the border equals a round rem value. This could no longer work once we introduced the font size increase at 1680px, because the value of 1rem changes from 16px to 18px.

Replace instances of this variable with calc({rem value} +/- 1px).

This means you need to replace e.g.:

Before: padding: $spv-nudge - $px $sph-intra--condensed * 1.5 - ($px * 2);

After: padding: calc(#{$spv-nudge} - 1px) calc(#{$sph-intra--condensed * 1.5} - 2px);

9. Replace references to $color-warning with $color-caution

This was a backwards compatibility affordance that had been deprecated for the last few releases.

10. (Optional) Try to keep text elements as siblings in the same container

Unless you really need to wrap text elements in different containers (e.g., in Emmet notation, div>h4+div>p), use div>h4+p. This way the page will benefit from the careful white-space adjustments applied by <element>+<element> CSS rules (p+p, p+h4, etc.). Ignoring this won’t break the page, but the spacing between text elements will not be ideal.

11. $font-base-size is now a map of sizes

To allow for multiple base font sizes for different screen sizes, we have turned the $font-base-size variable into a map.

A quick fix to continue using the deprecated variable locally would be:

$font-base-size: map-get($base-font-sizes, base);

But, for a more future-proof version, you should understand and use the new map.

By default, the font-size of the root element (<html>) increases on screens larger than the value of $breakpoint-x-large. This means it can no longer be represented as a single variable. Instead, we use a map to store the two font sizes ($base-font-sizes). If you are using $font-base-size in your code, replace as needed with a value from the $base-font-sizes map.
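
A sketch of how the map might be consumed directly (the key names here, base and large, are assumptions; check the framework source for the real keys):

```scss
// Assumes $base-font-sizes contains a "base" key and a key for large
// screens; verify the real key names in the framework source.
html {
  font-size: map-get($base-font-sizes, base);

  @media (min-width: $breakpoint-x-large) {
    font-size: map-get($base-font-sizes, large);
  }
}
```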

That’s it

Following the previous steps should bring your project up to the latest features of Vanilla 2.0. There may be more work updating your local code and removing any temporary local fixes for issues we have fixed since the last release, but this will vary from project to project.

If you still need help, please contact us on Twitter or refer to the full Vanilla Framework documentation.

If, in the process of using Vanilla, you find bugs, please report them as issues on GitHub, where we also welcome pull request submissions from the community if you want to suggest a fix.

The post Vanilla Framework 2.0 upgrade guide appeared first on Ubuntu Blog.

20 June, 2019 10:27AM

Canonical Design Team: Parallel installs – test and run multiple instances of snaps

In Linux, testing software is both easy and difficult at the same time. While the repository channels offer great access to software, you can typically only install a single instance of an application. If you want to test multiple instances, you will most likely need to configure the rest yourself. With snaps, this is a fairly simple task.

From version 2.36 onwards, snapd supports parallel install – a capability that lets you have multiple instances of the same snap available on your system, each isolated from the others, with its own configurations, interfaces, services, and more. Let’s see how this is done.

Experimental features & unique identifier

The first step is to turn on a special flag that lets snapd manage parallel installs:

snap set system experimental.parallel-instances=true

Once this step is done, you can proceed to installing software. The actual setup may appear slightly counter-intuitive, because you need to append a unique identifier to each snap instance name to distinguish it from the others. The identifier is an alphanumeric string, up to 10 characters in length, added as a suffix to the snap name. This is a manual step, and you can choose anything you like for the identifier. For example, if you want to install GIMP with your own identifier, you can do something like:

snap install gimp_first

Technically, gimp_first does not exist as a snap, but snapd will be able to interpret the format of “snap name” “underscore” “unique identifier” and install the right software as a separate instance.

You have quite a bit of freedom choosing how you use this feature. You can install them each individually or indeed in parallel, e.g. snap install gimp_1 gimp_2 gimp_3. You can install a production version (e.g. snap install vlc) and then use unique identifiers for your test installs only. In fact, this may be the preferred way, as you will be able to clearly tell your different instances apart.

Testing 1 2 3

You can try parallel installs with any snap in the store. For example, let’s set up two instances of odio. Snapd will only download the snap package once, and then configure the two requested instances separately.

snap install odio_first odio_second
odio_second 1 from Canonical✓ installed
odio_first 1 from Canonical✓ installed

From here on, you can manage each instance in its own right. You can remove each one using its full name (including the identifier), connect and disconnect interfaces, start and stop services, create aliases, and more. For instance:

snap remove odio_second
odio_second removed

Different instances, different versions

It gets better. Not only can you have multiple instances, you can also manage the release channel of each instance separately. If you want to test different versions, which can be really helpful for learning about (and preparing for) what new editions of an application bring, you can do so in parallel with your production setup, without requiring additional hardware, operating system instances, or users, and without having to worry about potentially harming your environment.

snap info vlc
name:      vlc
summary:   The ultimate media player

stable:    3.0.7                      2019-06-07 (1049) 212MB -
candidate: 3.0.7                      2019-06-07 (1049) 212MB -
beta:       2019-06-18 (1071) 212MB -
edge:      4.0.0-dev-8388-gb425adb06c 2019-06-18 (1070) 329MB -

VLC is a good example, with stable version 3.0.7 available and preview version 4.0.0 in the edge channel. If you already have multiple instances installed, you can just refresh one of them, e.g. the aptly named vlc_edge instance:

snap refresh --edge vlc_edge

Or you can even directly install a different version as a separate instance:

snap install --candidate vlc_second
vlc_second (candidate) 3.0.7 from VideoLAN✓ installed

You can check your installed instances, and you will see the whole gamut of versions:

snap list | grep vlc
vlc         3.0.7          1049  stable     videolan*  -
vlc_edge    4.0.0-dev-...  1070  edge       videolan*  -
vlc_second  3.0.7          1049  candidate  videolan*  -

When parallel lines touch

For all practical purposes, these will be individual applications with their own home directory and data. In a way, this is quite convenient, but it can be problematic if your snaps require exclusive access to system resources, like sockets or ports. If you have a snap that runs a service, only one instance will be able to bind to a predefined port, while others will fail.

On the other hand, this is quite useful for testing the server-client model, or how different applications inside the snap work with one another. The namespace collisions, as well as methods to share data using common directories, are described in detail in the documentation. Parallel installs offer a great deal of flexibility, but it is important to remember that most applications are designed to run individually on a system.


The value proposition of self-contained applications like snaps has been debated in online circles for years now, revolving around various pros and cons compared to installations from traditional repository channels. In many cases, there’s no clear-cut answer; however, parallel installs do offer snaps a distinct, unparalleled [sic] advantage: the ability to run multiple instances, and multiple versions, of your applications in a safe, isolated manner.

At the moment, parallel installs are experimental, best suited for users comfortable with software testing. But the functionality does open a range of interesting possibilities, as it allows early access to new tools and features, while at the same time, you can continue using your production setup without any risk. If you have any comments or suggestions, please join our forum for a discussion.

Photo by Kholodnitskiy Maksim on Unsplash.

The post Parallel installs – test and run multiple instances of snaps appeared first on Ubuntu Blog.

20 June, 2019 09:04AM

June 19, 2019

Canonical Design Team: Kubernetes 1.15 now available from Canonical

Canonical widens Kubernetes support with kubeadm

Canonical announces full enterprise support for Kubernetes 1.15 using kubeadm deployments, its Charmed Kubernetes, and MicroK8s, the popular single-node deployment of Kubernetes.

The MicroK8s community continues to grow and contribute enhancements, with Knative and RBAC support now available through the simple microk8s.enable command. Knative is a great way to experiment with serverless computing, and now you can experiment locally through MicroK8s. With MicroK8s 1.15 you can develop and deploy Kubernetes 1.15 on any Linux desktop, server or VM across 40 Linux distros. Mac and Windows are supported too, with Multipass.

Existing Charmed Kubernetes users can upgrade smoothly to Kubernetes 1.15, regardless of the underlying hardware or machine virtualisation. Supported deployment targets include AWS, GCE, Azure, Oracle, VMware, OpenStack, LXD, and bare metal.

“Kubernetes 1.15 includes exciting new enhancements in application, custom resource, storage, and network management. These features enable better quota management, allow custom resources to behave more like core resources, and offer performance enhancements in networking. The Ubuntu ecosystem benefits from the latest features of Kubernetes as soon as they become available upstream,” commented Carmine Rimi, Kubernetes Product Manager at Canonical.

What’s new in Kubernetes 1.15

Notable upstream Kubernetes 1.15 features:

  • Storage enhancements:
    • Quotas for ephemeral storage: (alpha) This feature utilises filesystem project quotas to provide monitoring of resource consumption and, optionally, enforcement of limits. Project quotas, initially in XFS and more recently ported to ext4fs, offer a kernel-based means of monitoring and restricting filesystem consumption. This improves the performance of monitoring storage utilisation of ephemeral volumes.
    • Extend data sources for persistent volume claims (PVC): (alpha) You can now specify an existing PVC as a DataSource parameter for creating a new PVC. This results in a clone – a duplicate – of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The back-end device creates an exact duplicate of the specified Volume. Clones and Snapshots are different – a clone results in a new, duplicate volume being provisioned from an existing volume: it counts against the user’s volume quota, it follows the same create flow and validation checks as any other volume provisioning request, and it has the same lifecycle and workflow. A snapshot, on the other hand, results in a point-in-time copy of a volume that is not, itself, a usable volume.
    • Dynamic persistent volume (PV) resizing: (beta) This enhancement allows PVs to be resized without having to terminate pods and unmount the volume first.
  • Networking enhancements:
    • NodeLocal DNSCache: (beta) This enhancement improves DNS performance by running a dns caching agent on cluster nodes as a Daemonset. With this new architecture, pods reach out to the dns caching agent running on the same node, thereby avoiding unnecessary networking rules and connection tracking.
    • Finaliser protection for service load balancers: (alpha) Adds finaliser protection to ensure the Service resource is not fully deleted until the correlating load balancer is also deleted.
    • AWS Network Load Balancer Support: (beta) AWS users now have more choices for AWS load balancer configuration. This includes the Network Load Balancer, which offers extreme performance and static IPs for applications.
  • Node and Scheduler enhancements:
    • Device monitoring plugin support: (beta) Monitoring agents provide insight to the outside world about containers running on the node – this includes, but is not limited to, container logging exporters, container monitoring agents, and device monitoring plugins. This enhancement gives device vendors the ability to add metadata to a container’s metrics or logs so that it can be filtered and aggregated by namespace, pod, container, etc.  
    • Non-preemptive priority classes: (alpha) This feature adds a new option to PriorityClasses, which can enable or disable pod preemption. PriorityClasses impact the scheduling and eviction of pods – pods are scheduled according to descending priority; if a pod cannot be scheduled due to insufficient resources, lower-priority pods will be preempted to make room. Allowing PriorityClasses to be non-preempting is important for running batch workloads – pods with partially-completed work won’t be preempted, allowing them to finish.
    • Scheduling framework extension points: (alpha) The scheduling framework extension points allow many existing and future features of the scheduler to be written as plugins. Plugins are compiled into the scheduler, and these APIs allow many scheduling features to be implemented as plugins, while keeping the scheduling ‘core’ simple and maintainable.
  • Custom Resource Definitions (CRD) enhancements:
    • OpenAPI 3.0 Validation: Major changes introduced to schema validation with the addition of OpenAPI 3.0 validation.
    • Watch bookmark support: (alpha) The Watch API is one of the fundamentals of the Kubernetes API. This feature introduces a new type of watch event called Bookmark, which serves as a checkpoint of all objects, up to a given resourceVersion, that have been processed for a given watcher. This makes restarting watches cheaper and reduces the load on the apiserver by minimising the amount of unnecessary watch events that need to be processed after restarting a watch.
    • Defaulting: (alpha) This feature adds support for defaulting in Custom Resources. Defaulting is a fundamental step in the processing of API objects in the request pipeline of the kube-apiserver. Defaulting happens during deserialisation, after decoding of a versioned object, but before conversion to a hub type. This feature adds support for specifying default values for fields via OpenAPI v3 validation schemas in the CRD manifest. OpenAPI v3 has native support for a default field with arbitrary JSON values.
    • Pruning: (alpha) Custom Resources store arbitrary JSON data without following the typical Kubernetes API behaviour to prune unknown fields. This makes CRDs different, but also leads to security and general data consistency concerns because it is unclear what is actually stored. The pruning feature will prune all fields which are not specified in the OpenAPI validation schemas given in the CRD.
    • Admission webhook: (beta) Major changes were introduced. The admission webhook feature now supports both mutating webhook and validation (non-mutating) webhook. The dynamic registration API of webhook and the admission API are promoted to v1beta1.
  • For more information, please see the upstream Kubernetes 1.15 release notes.
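
As a concrete illustration of one of these enhancements, the alpha PVC-cloning feature is driven by a dataSource field on the new claim. A hedged sketch (the names and storage class are illustrative, and the relevant feature gate must be enabled on a 1.15 cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-example    # illustrative; the driver must support cloning
  resources:
    requests:
      storage: 10Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc               # an existing PVC in the same namespace
```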

Notable MicroK8s 1.15 features:

  • Pure upstream Kubernetes 1.15 binaries.
  • Knative addon, try it with “microk8s.enable knative”. Thank you @olatheander.
  • RBAC support via a simple “microk8s.enable rbac”, courtesy of @magne.
  • Update of the dashboard to 1.10.1 and fixes for RBAC. Thank you @balchua.
  • CoreDNS is now the default. Thanks @richardcase for driving this.
  • Ingress updated to 0.24.1 by @JorritSalverda, thank you.
  • Fix on socat failing on Fedora by @JimPatterson, thanks.
  • Modifiable csr server certificate, courtesy of @balchua.
  • Use of iptables kubeproxy mode by default.
  • Instructions on how to run Cilium on MicroK8s by @joestringer.

For complete details, along with installation instructions, see the MicroK8s 1.15 release notes and documentation.

Notable Charmed Kubernetes 1.15 features:

  • Pure upstream Kubernetes 1.15 binaries.
  • Containerd support: The default container runtime in Charmed Kubernetes 1.15 is containerd. Docker is still supported, and an upgrade path is provided for existing clusters wishing to migrate to containerd. Both runtimes can be used within a single cluster if desired. Container runtimes are now subordinate charms, paving the way for additional runtimes to be added in the future.
  • Calico 3 support: The Calico and Canal charms have been updated to install Calico 3.6.1 by default. For users currently running Calico 2.x, the next time you upgrade your Calico or Canal charm, the charm will automatically upgrade to Calico 3.6.1 with no user intervention required.
  • Calico BGP support: Several new config options have been added to the Calico charm to support BGP functionality within Calico. These additions make it possible to configure external BGP peers, route reflectors, and multiple IP pools.
  • Custom load balancer addresses: Support has been added to specify the IP address of an external load balancer. This support is in the kubeapi-load-balancer and the kubernetes-master charms. This allows a virtual IP address on the kubeapi-load-balancer charm or the IP address of an external load balancer.
  • Private container registry enhancements: A generic images-registry configuration option is now honoured by all Kubernetes charms, core charms and add-ons, so that users can specify a private registry in one place and have all images in a Kubernetes deployment come from that registry.
  • Keystone with CA Certificate support: Kubernetes integration with keystone now supports the use of user supplied CA certificates and can support https connections to keystone.
  • Graylog updated to version 3, which includes major updates to alerts, content packs, and pipeline rules. Sidecar has been re-architected so you can now use it with any log collector.
  • ElasticSearch updated to version 6. This version includes new features and enhancements to aggregations, allocation, analysis, mappings, search, and the task manager.

For complete details, see the Charmed Kubernetes 1.15 release notes and documentation.

Contact us

If you’re interested in Kubernetes support, consulting, or training, please get in touch!

About Charmed Kubernetes

Canonical’s certified, multi-cloud Charmed Kubernetes installs pure upstream binaries, and offers simplified deployment, scaling, management, and upgrades of Kubernetes, regardless of the underlying hardware or machine virtualisation. Supported deployment environments include AWS, GCE, Azure, VMware, OpenStack, LXD, and bare metal.

Charmed Kubernetes integrates tightly with underlying cloud services and hardware – enabling GPGPUs automatically and leveraging cloud-specific services like AWS, Azure and GCE load balancers and storage. Charmed Kubernetes allows independent placement and scaling of components such as etcd or the Kubernetes master, providing an HA or minimal configuration, and built-in, automated, on-demand upgrades from one version to the next.

Enterprise support for Charmed Kubernetes by Canonical provides customers with a highly available, multi-cloud, flexible and secure platform for their cloud-native workloads, and enjoys wide adoption across enterprises, particularly in the telco, financial and retail sectors.

The post Kubernetes 1.15 now available from Canonical appeared first on Ubuntu Blog.

19 June, 2019 08:16PM

Cumulus Linux

Validation vibes: How we’ve won the praise of customers and employees alike

The success of a company is often defined by two key factors: how your customers feel about you and how your employees feel about you. We’re excited to share that recently we’ve had some great validation by both!

Customer validation

We’re very honored to work with a variety of innovative companies that are breaking the status quo with open networking principles in data centers designed to scale. All of our customers have realized the need for an open, modern data center and are looking to build infrastructure with purpose. From web-scale giants to visionary enterprises, we give them all the ability to build something “EPIC.”

This was recently highlighted when for the second year in a row, our customers have rallied around our vision for the future of data center networking and recognized us as “The Best Data Center Networking 2019” with their reviews through Gartner Peer Insights.

As Gartner puts it, “The Gartner Peer Insights Customers’ Choice is a recognition of vendors in this market by verified end-user professionals, taking into account both the number of reviews and the overall user ratings.” To ensure fair evaluation, Gartner maintains rigorous criteria for recognizing vendors with a high customer satisfaction rate.

This distinction is one of our proudest, and we naturally love that it is directly related to the amazing reviews we received from our customers. Check out just one of our favorites, which showcases not only love of our product but our “happy to help” mentality (part of our Cumulus culture):

Employee validation

Moving along on the validation train, we were also recently recognized as one of the 50 Highest Rated Private Cloud Computing Companies To Work For in a list released by Battery Ventures, a global investment firm and cloud investor, and Glassdoor*, one of the world’s largest job and recruiting sites. The list highlights 50 privately held companies—all business-to-business, cloud-computing companies—where employees report the highest levels of satisfaction at work, according to employee feedback shared on Glassdoor.

The distinction placed Cumulus Networks at an overall company rating of 4.6, while the broader average across Glassdoor is 3.4. CEO Josh Leslie boasts a 94% approval rating on Glassdoor, compared to an average of 69% for all approximately 900,000 employers on the site, and the company has an 88% positive business-outlook rating, again based on the feedback shared by employees. The broader Glassdoor average is 49%. A positive business outlook means employees believe business will improve in the next six months.

“We’ve really worked hard at creating a culture of openness and transparency here at Cumulus Networks, and we’ve had a lot of success with that approach,” says Sandy Palicio, VP of Human Resources. She continues, “One of the reasons I believe Josh’s approval rating is so high is that he practices what he preaches. There are tangible examples of this in action, from our weekly all-hands meetings, where employees have the chance to ask Josh any question and get a real and honest answer, to sending out e-staff meeting minutes to the whole company.”

This is the third year Battery has issued the list, along with a related ranking of the 25 Highest Rated Public Cloud Computing Companies to Work For. The rankings highlight the broader trend of businesses increasingly turning to the cloud to run critical technology systems and software, instead of using on-premise systems.

They also “highlight the increasing importance of cohesive culture and employee happiness in running a successful business,” said Neeraj Agrawal, a Battery general partner who specializes in cloud investing.

“The private companies on this list have not only scaled their products, teams and business functions—but they’ve managed to scale culture,” Agrawal said. “We view these rankings as a key indicator of company health and longevity, and we hope all companies on this list view it as an honor to be included.” It was also more difficult to make the list this year, compared with last year, Agrawal added. A Glassdoor economic research study, as well as other third-party studies, show that companies with high employee satisfaction often post stronger financial performance.

Glassdoor noted that employees at these highly rated companies commonly mentioned in online reviews that they enjoy working for mission-driven companies with strong and unique company cultures; employers that promote transparency; and companies with experienced senior leaders who regularly and clearly communicate with employees. For instance, according to one anonymous employee review of Cumulus Networks on Glassdoor:

Full lists of the highest-rated 50 private cloud companies and 25 public cloud companies to work for can be found here.


After all that, it almost goes without saying that we’re stoked for our year so far. With the good vibes from our customers and employees along with the product updates to Cumulus Linux and Cumulus NetQ, 2019 has shaped up to be another banner year and we haven’t even crossed the half-way point! If you’d like to learn more about Cumulus culture, check out our Accolades & Awards page, About page and this amazing blog by our CEO Josh Leslie on LinkedIn.

Gartner Peer Insights Customers’ Choice constitutes the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

About Battery Ventures
Battery strives to invest in cutting-edge, category-defining businesses in markets including software and services, Web infrastructure, consumer Internet, mobile and industrial technologies. Founded in 1983, the firm backs companies at stages ranging from seed to private equity and invests globally from offices in Boston, the San Francisco Bay Area, London, New York, and Israel. Follow the firm on Twitter @BatteryVentures, visit our website at www.battery.com and find a full list of Battery’s portfolio companies here.

* An asterisk by a company name denotes a Battery investment. For a full list of all Battery investments and exits, please click here.

19 June, 2019 05:33PM by Katie Weaver

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 June Software Update

Hi everyone! The Librem 5 team has been hard at work, and we want to update you all on our software progress.


A couple of blog posts back, we mentioned that our hardware engineer gave a talk at KiCon—and it is available for watching now!

Also, Tobias Bernard recently attended the Libre Graphics Meeting, where he had many conversations about the future photo-viewing application for the Librem 5 phone.



Libhandy v0.0.10 was released and has a slew of cool new widgets! In summary, the new widgets are:

  • HdyViewSwitcher: a view switcher which can automatically adjust its layout to fit narrow screens
  • HdySqueezer: a widget that allows switching where the view switcher is
  • HdyHeaderBar: an advanced header bar
  • HdyPreferencesWindow: an adaptive preferences window for all applications

A nice aesthetic change is that HdyComboRow handles long labels better now—by ellipsizing them.

Below you can see how HdyViewSwitcher makes the Clocks application adaptive.

Below you can see how the HdyPreferencesWindow is used in GNOME Web to make the preferences window adaptive.

We also improved Libhandy’s test suite.


Work has continued to extend wys to instantiate PulseAudio’s loopback module—which ties the modem’s and codec’s ALSA devices together when a call is activated, and de-instantiates the module when the call is terminated. Since this causes conflicts with hægtesse, a scheme was devised to keep both hægtesse and wys from running at the same time.
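The mechanism wys drives can be sketched with plain PulseAudio commands (a hand-run illustration only, not the actual wys code; the source and sink names below are placeholders, not the real device names):

```
# When a call starts: tie the modem's audio to the codec's with a loopback.
# List the real names on your system with: pactl list short sources / sinks
pactl load-module module-loopback source=<modem-source> sink=<codec-sink>
pactl load-module module-loopback source=<mic-source> sink=<modem-sink>

# pactl prints a module index on load; when the call ends, tear it down:
pactl unload-module <index>
```

This is essentially what "instantiating" and "de-instantiating" the loopback module means at the PulseAudio level.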


A chat history is being implemented via an SQLite database. Thank you, Leland Carlyle, for all of your hard work in this area!
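As a rough illustration of what an SQLite-backed history involves (the schema below is invented for this example; it is not Chatty's actual schema):

```shell
# Create a throwaway history database and store one message in it.
sqlite3 chatty-history.db <<'SQL'
CREATE TABLE IF NOT EXISTS messages (
  id        INTEGER PRIMARY KEY,
  account   TEXT NOT NULL,      -- local account the chat belongs to
  peer      TEXT NOT NULL,      -- conversation partner
  body      TEXT NOT NULL,      -- message text
  timestamp INTEGER NOT NULL    -- Unix time of the message
);
INSERT INTO messages (account, peer, body, timestamp)
VALUES ('alice@example.org', 'bob@example.org', 'hello', strftime('%s','now'));
SELECT peer, body FROM messages;
SQL
```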

Account verification has been added so that now, when you add a new account, a connection is established to the server and (in case of failure) the user is alerted. Thanks to Benedikt Wildenhain for the patch!


We are very committed to providing encrypted messaging when the phone ships, so we have made an extra effort to implement OMEMO encryption via the Lurch plugin. Recent changes in this plugin have led to ongoing integration and testing with Chatty.

There is a padlock symbol in the message bar now, indicating whether the chat is encrypted or not. You can also view your fingerprint—as well as your conversation partner’s fingerprints (see example below). Thanks, Richard Bayerle, for all of your work on the Lurch plugin!

Web Browsing

GNOME Web will benefit from the new widgets released in Libhandy 0.0.10, as mentioned above. Additionally, since recent testing has identified some bugs in GNOME Web, our development team has been looking into these issues, and many of them have now been reported upstream.

Initial Setup

We plan to deliver GNOME Initial Setup in the first shipment of the phone—because it is very important for setting up your environment. Before any major porting effort was possible, though, some design effort was needed—and now porting work is underway!


So many exciting things are happening at the system level!

After many revisions, the librem5-devkit device tree has been accepted upstream. To prepare for this, the same device-tree name is now used in both the kernel and flash-kernel.

The devkit image went through lots of changes, too. Wlroots v0.6.0 is now available, and contains many of our necessary changes. To make the overall experience look nicer, the shell now prefers the dark theme, and the keyboard auto-hides when the app drawer is opened. Detecting corrupted image downloads has been made faster by adding a size verification. Thanks to Hugo Grostabussiat for the patch! The devkit image has support for the camera, too, and below you can see the devkit’s first selfie 🙂

Devkit first selfie

Several areas of the kernel have seen major improvements, and we are now very close to some important milestones. One such area is forward porting patches so that the images built for the devkit can switch from a 4.18 to a 5.2 kernel, and we’re almost there! You can find a recent image build with the 5.2 kernel here.

With the new kernel, you will be able to long-press the power button to turn on the devkit, and to use suspend/resume. To help better detect SoC revisions, an RFC has been sent upstream. Working towards better power management, we are testing cpufreq and preparing some cpuidle tests.

A lot of effort has been put into debugging sound on the 5.2 kernel. After many hours of work, we discovered that ATF was blocking access to the AIPS regions, and this is now fixed in upstream ATF!

On the shell side of things, phosh has been made a polkit agent (so things like GNOME Software can ask for elevated credentials). We made some other improvements, like hiding the OSK when it’s not needed, removing the weekday/date from the lock screen, and making it easier to use Glade with phosh. Since a compositor switch is coming soon, the team applied many improvements to the new compositor, phoc (phone compositor). We will be showcasing this new compositor soon, so stay tuned for that!

Also, to get us closer to separating the bootloader from the OS, we have been putting a lot of effort into placing u-boot in the MMC area. Flash has been enabled in u-boot, so that the DDR PHY firmware can be written to flash. Thank you so much, Kyle Evans, for the work on mainline u-boot!

The work on the graphics stack continues, too. To work towards mainline GC7000 GPU support, we folded the etnaviv part of libdrm into mesa upstream. Our thanks to Christian Gmeiner and Dylan Baker for the review! To take a look at the graphics on the devkit, check the Quake II demo below.


To improve the devkit unboxing experience, lots of how-to guides have been added or updated:

Some more noteworthy updates have been added to the Status of Subsystems page and the devkit peripheral software interfaces.

A big “Thanks!” to everyone who has helped review and merge changes into upstream projects; your time and contributions are much appreciated. And that’s all for now, folks—stay tuned for more exciting updates to come!

The post Librem 5 June Software Update appeared first on Purism.

19 June, 2019 04:27PM by Heather Ellsworth

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Kubernetes on Windows

This content is password protected. To view it please enter your password below:

The post Kubernetes on Windows appeared first on Ubuntu Blog.

19 June, 2019 04:22PM by Canonical Design Team (nospam@nospam.com)

Canonical Design Team: Fresh snaps for May 2019

A little later than usual, we’ve collected some of the snaps which crossed our “desk” (Twitter feed) during May 2019. Once again, we bring you a suite of diverse applications published in the Snap Store. Take a look down the list, and discover something new today.

Cloudmonkey (cmk) is a CLI and interactive shell that simplifies configuration and management of Apache CloudStack, the open-source IaaS cloud computing platform.

snap install cloudmonkey

Got a potato gaming computer? You can still ‘game’ on #linux with Vitetris right in your terminal! Featuring configurable keys, high-score table, multi (2) player mode and joystick support! Get your Pentomino on today!

snap install vitetris

If you’re seeking a comprehensive and easy to use #MQTT client for #Linux then look no further than MQTT Explorer.

snap install mqtt-explorer

Azimuth is a metroidvania game with vector graphics, inspired by the likes of the Metroid series (particularly Super Metroid and Metroid Fusion), SketchFighter 4000 Alpha, and Star Control II (a.k.a. The Ur-Quan Masters).

snap install azimuth

Familiar with Excel? Then you’ll love Banana Accounting’s intuitive, all-in-one spreadsheet-inspired features. Upgrade your bookkeeping with brilliant planning & fast invoicing. Journals, balance sheets, profit & loss, liquidity, VAT, and more!

snap install banana-accounting

Remember ICQ? We do! The latest release of the popular chat application is available for #Linux as an official snap! Dust off your ID and password, and get chatting like it’s 1996!

snap install icq-im

Déjà Dup hides the complexity of backing up your system. Déjà Dup is a handy tool, with support for local, remote and cloud locations, scheduling and desktop integration.

snap install deja-dup

KTorrent is a fast, configurable BitTorrent client, with UDP tracker support, protocol encryption, IP filtering, DHT, and more. Now available as a snap.

snap install ktorrent

Fast is a minimal zero-dependency utility for testing your internet download speed from the terminal.

snap install fast

No matter which Linux distribution you use, browse the Snap Store directly from your desktop. View details about snaps, read and write reviews and manage snap permissions in one place.

snap install snap-store

Krita is the full-featured digital art studio. The latest release is now available in the Snap Store.

snap install krita

From Silicon Graphics machines to modern-day PCs, BZFlag has seen them all. This 3D online multiplayer tank game blends nostalgia with fun. Now available as a snap.

snap install bzflag

That’s all for this month. Keep up to date with Snapcraft on Twitter for more updates! Also, join the community over on the Snapcraft forum to discuss anything you’ve seen here.

Header image by Alex Basov on Unsplash

The post Fresh snaps for May 2019 appeared first on Ubuntu Blog.

19 June, 2019 11:46AM

hackergotchi for Kali Linux

Kali Linux

Kali Linux Roadmap (2019/2020)

Now that our 2019.2 release is out, we thought we would take this opportunity to cover some of the changes and new features we have coming to Kali Linux in the following year. Normally, we only really announce things when they are ready to go public, but a number of these changes are going to impact users pretty extensively so we wanted to share them early.

As you read through this post, what you will see is that we are really trying to balance our efforts between changes that are user-facing and those that apply to the backend. The backend changes don’t seem as exciting at first, but the fact is that the easier it is for us to work on Kali, the easier it is for us to get to user-facing features. Plus, some of these changes are focused on tweaking the development process to make it easier for others to get involved in the project.

We are not ready to announce dates on any of these changes just yet. When they are ready, they will drop.

GitLab – The New Home for Kali Packages

One of the biggest changes, which you may have already noticed, is our move of the official Kali git repository to GitLab. With this change, it’s easier than ever for the community to submit improvements to Kali packages and for us to apply them! We expect to make heavy use of GitLab’s Continuous Integration features to streamline our work on packages and to provide automated feedback to all contributors submitting merge requests.

Documentation on how to contribute packages is coming soon; expect a full guide to be published in our docs.

Runtime Tests – Finding Bugs Before Users

Speaking of packages, detecting bugs and problems in packages is something we are always looking to improve. Until now, we have relied on manual testing on our part and user-provided bug reports. This works OK, as popular packages would never stay broken for long, but some edge packages could break for months at a time before anyone would notice and actually report it to us. (Let’s be honest, most of the time when you find something broken in Kali, you don’t create a bug report, do you?)

To improve this situation, we have recently deployed debci on autopkgtest.kali.org. This allows us to have our own continuous integration system, allowing for automated testing of Kali packages on a regular basis. We have integrated the result of those tests in the Kali Package Tracker.

For this infrastructure to be as useful as it can be, we will need to have runtime tests on all our packages, which is still a long way off. Hopefully, this will be a place where we get community help to speed up the process, so feel free to submit merge requests adding tests!
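For reference, an autopkgtest (and therefore debci) run is driven by a debian/tests/control file inside the package; a minimal, hypothetical example for a package shipping a tool called mytool could look like this:

```
# debian/tests/control — declares the tests and their dependencies
Tests: smoke
Depends: @

# debian/tests/smoke — the test script itself; a non-zero exit means failure
#!/bin/sh
set -e
mytool --version
```

`Depends: @` expands to all binary packages built from the source, so the test runs against what the package actually ships.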

Metapackages – What is Installed by Default

One of the biggest challenges with running a project like Kali Linux is balance. We now have so many users that there’s no longer “one right size”. Traditionally, what people have asked for is “all the tools, all the time”. But as time has gone by, this has led to one of the largest (pun fully intended) issues with Kali: Bloat. Too many packages making too big of a distribution, large ISO sizes, etc. etc.

To address this, we are giving our metapackages a refresh. This change includes the default Kali metapackage, “kali-linux-full”, the metapackage that controls what packages are installed on Kali by default. Needless to say, this is a big user-facing change that will impact everyone. Tools that we decide to drop are most often older tools that don’t have a lot of modern utility, have not been updated in years, or have been supplanted by newer better tools.

What this means is that by default, some of the tools you may have relied upon may no longer be included by default. These tools will still exist in the repo, so you can install them manually or use a metapackage that contains them. You can see full documentation of the metapackages and what they contain at tools.kali.org.
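In practice that means pulling a dropped tool back in yourself, either individually or via a metapackage; a hypothetical session (package names are placeholders for illustration only):

```
# A tool dropped from the default set can still be installed directly...
sudo apt update
sudo apt install <tool-name>

# ...or via the metapackage that groups it with related tools
sudo apt install <kali-metapackage>
```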

Before these changes go live, we will do another blog post detailing them. Expect that these metapackages will be in flux for a bit as we continue to optimize.

Default Shell – Your Primary Kali Interface

The shell in Kali is likely the most used utility in the entire distribution for the majority of users. This creates a bit of a schizophrenic challenge in that it’s used so much we want to improve it, but at the same time we have to make sure it does not break.

To address this, we will be adding default installations of ZSH and FISH to Kali. Each of these shells is optimized for penetration testers, which is sort of fun. Most of the time when you look at shell optimization, all the text is focused on developers, which is not where Kali sits. Our goal here is to have the best, most optimized shell environment for penetration testers.

At the same time, good old Bash won’t go away and we are going to leave it as the default for now. Those of you that want to be adventurous and try the new shells will find easy ways to switch. Those of you that just want to stick with Bash will still be able to. Expect in-shell instructions (and a blog post) when this change is rolled out.
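Switching shells, once they are installed, is the usual chsh routine (a generic sketch; the exact paths can be confirmed against /etc/shells):

```
# See which shells are registered on the system
cat /etc/shells

# Try zsh as the current user's login shell...
chsh -s "$(command -v zsh)"

# ...and switch back to trusty bash at any point
chsh -s "$(command -v bash)"
```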

Documentation – Read The Fine Manual

Expect some changes to docs.kali.org and tools.kali.org, along with an integration of the Kali manual into git via markdown. This will allow for user submitted documentation to help us keep instructions up to date and accurate. This is another great way for you to contribute to the Kali Linux project.

NetHunter – New Blood

As you may have noticed on Twitter and in git commits, we have another developer on board, “Re4son”, and he has put the NetHunter project into overdrive. He is working on supporting new hardware, working with the latest version of Android, and various bug fixes.

There is also “Project Redback“, but that is all we are going to say about that for the time being…more about this in a blog post very soon.

What Else can we Expect?

This is just the portion of the roadmap that makes sense to talk about now. There is a lot more in development that we are just not ready to talk about yet.

We also would like to welcome g0tmi1k who has switched over from Offensive Security as a full time core Kali developer.

We are at a really exciting stage of the Kali development process, where a lot of the behind-the-scenes items we have been working on are getting ready to go public. Expect a fair amount of improvements in Kali Linux over the second half of the year. If you want to discuss this post with us or have ideas on things that we might consider, please get in touch via the forum.

19 June, 2019 11:00AM by elwood

hackergotchi for LiMux


MucDigital 2019 – a fresh wind of ideas for Munich

It has now been two weeks since MucDigital 2019, formerly the Münchner Webwoche, came to a close. This year, the event series around Munich's digitalisation topics drew more than 8,000 visitors. The increase over previous years … Continue reading

The post MucDigital 2019 – a fresh wind of ideas for Munich appeared first on the Münchner IT-Blog.

19 June, 2019 07:09AM by Stefan Döring

June 18, 2019

hackergotchi for VyOS


CVE-2019-11477 (TCP SACK panic) and an Intel i40e driver issue

A recently discovered vulnerability in the Linux kernel's TCP selective acknowledgement (SACK) processing code potentially allows a remote attacker to cause a kernel panic with a specially crafted packet sequence. You can read the details in the announcement.

18 June, 2019 08:18PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

What is at stake in the battle for free software: reclaiming our know-how



Whether in medicine, food, finance, industry, education or even agriculture, there is hardly a sector of activity left that has escaped the grip of computer code: we now depend on it in every field in which knowledge is key.

It is commonplace today to use specialised software to produce a medical diagnosis, design a car, or automate an office or a home. But although computer code is supposed to simplify our lives, along the way it keeps us in an ever-deeper state of dependency, captive to the logic it implements, a logic that escapes us. Take, for example, a software failure at work, including in a tractor or a combine harvester, far from any city. The farmer cannot fix the fault himself: he has access neither to the code that now runs the many on-board computers, nor even to a simple repair manual, which quite simply disappeared when machines, farm machines included, became computerised. He has no choice but to call in the machine's dealer and the manufacturer, the sole owner of that code, and wait for the intervention before paying a steep price.

So-called "proprietary" software thus strips its users of all autonomy, leaving them dependent on a single entity on which almost everything now depends. In the documentary Internet ou la révolution du partage, directed by Philippe Borrel and broadcast on ARTE (the short version of his documentary "La bataille du Libre"), farmers in Nebraska, in the American Midwest, rebel against the diktat of proprietary code, which has taken away their right to repair their own machines and has even driven some of them to ruin. Kenneth Roelofsen, an agricultural parts supplier from Abilene, Kansas, adds: "if they have the power to do this with software, where will it stop?"

No one can predict. In the meantime, we can only observe the grip of this new capitalism of "knowledge", which standardises everything in the name of "intellectual property". In the film, Karen Sandler, a New York lawyer, worries: "When I buy a technology, my phone or anything else, I want to be able to control it. I want to know whether someone is spying on me, I want to know whether my data is being passed on to third parties, and I want to be able to modify that technology in order to protect myself from potentially harmful actors." Like this activist from the Software Freedom Conservancy, more and more people are now rising up against the grip of proprietary software. And they have found the best possible answer: the free software alternative.


Poster of the documentary "La bataille du Libre",
the long version of "Internet ou la révolution du partage"

Richard Stallman, founder of the GNU project, president of the Free Software Foundation and author of the GNU General Public License, explains in detail what free software is, not to be confused with software that is merely free of charge: "Free software respects the user's freedom. There are four essential freedoms that a software user should always have: freedom zero is the freedom to run the program in any way, as you wish. Freedom one is the freedom to study the program's source code and to change it so that it does what you want. Freedom two is the freedom to distribute copies to other people as you wish, and thus the possibility of republishing the program. Freedom three is the freedom to distribute copies of your modified version to other people."

Free software promotes sharing, and knowledge in the service of all. It is the combination of these four freedoms that makes a piece of software free. If even one of them is missing, it is no longer free software but proprietary software.

For Stallman, the three main areas from which patents should be banned are seeds, medicines and, of course, software, patents (that is, intellectual-property rights) being a genuine brake on the freedom to create and innovate. While proprietary software often carries exorbitant licence costs, or imposes restrictive licence terms on our data when it is free of charge, free software is reliable and secure, economical, durable and effective. It offers very advanced features thanks to its many and, above all, passionate contributors. Free software guarantees the independence of its users, since its source code remains freely accessible to everyone. Using software without being able to control it, modify it, improve it or simply understand how it works means deliberately agreeing to remain captive to a system that alienates.

Philippe Borrel, the director of the documentary

In an interview with Frédéric Couchet, host of the programme "Libre à Vous" on the radio station Cause Commune and general delegate of April, Philippe Borrel, the director of "Internet ou la révolution du partage", returns to the case of farmers held captive by the proprietary software embedded in their machines. For him, the problem goes further, because "these Nebraska farmers, true entrepreneurs of productivist agriculture, are in fact prisoners in every domain: they went into debt with banks to acquire dead land. Without fertilisers or chemical inputs, nothing grows spontaneously any more. The seeds are proprietary too, hybrids or GMOs. Farm machines, ever more 'autonomous', are driven by algorithms whose code remains inaccessible. What power, what autonomy do they have left? They realise that they are simply bound hand and foot, trapped. They have been stripped of almost all their know-how." He adds: "At that point in the film, we are no longer talking about the world of free software; we are talking about something more global: in fact, a resistance against a logic that is currently advancing like a steamroller and that ultimately concerns everyone."

Many associations have sprung up to raise public awareness of what is at stake in the use of free software. For example: April, PING, Framasoft and the Emmabuntüs collective in France, and many other associations in Africa and the rest of the world. Initiatives such as install parties and the JerryClans are appearing around the world to encourage software users to reclaim their know-how and take back control. Philippe Borrel's documentary is, moreover, a good way to learn about what is at stake with free software.

The documentary can already be screened in classrooms in France, under the agreement between the ministries for national education, higher education and research and PROCIREP (the society of film and television producers), by teachers keen to introduce their pupils to the stakes of free culture as early as possible.

Visit the documentary's Wikipedia page to learn more, and watch the documentary on Arte because, as Richard Stallman would say, "it is time to join the resistance"; it is time to seize back our freedom. Do not hesitate to invite Philippe Borrel to screenings and debates around his documentary; he will be glad to attend, as he has already done so far. The full screening programme of "La Bataille Du Libre", the long version of Philippe Borrel's documentary film, can be found on this page.

The 15 bonus clips of the documentary are on Philippe Borrel's PeerTube account

Francine Rochelet, of the Emmabuntüs collective.

Photo credits: poster: Temps noir; photo of the director: Patrice Terraz; documentary excerpt and bonus clips: Marion Chataing and Philippe Borrel.

18 June, 2019 07:31PM by Patrick Emmabuntüs

hackergotchi for ARMBIAN



Tested with MACCHIATObin Double Shot

18 June, 2019 06:53PM by Igor Pečovnik

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Protected: Kubernetes on Windows

This content is password protected. To view it please enter your password below:

The post Protected: Kubernetes on Windows appeared first on Ubuntu Blog.

18 June, 2019 05:55PM

Canonical Design Team: Your first robotic arm with Ubuntu Core, coming from Niryo

Niryo has built a fantastic 6-axis robotic arm called ‘Niryo One’. It is a 3D-printed, affordable robotic arm focused mainly on educational purposes. Additionally, it is fully open source and based on ROS. On the hardware side, it is powered by a Raspberry Pi 3 and NiryoStepper motors, based on Arduino microcontrollers. When we found out all this, guess what we thought? This is a perfect target for Ubuntu Core and snaps!

When the robotic arm came into my hands, the first thing I did was play with Niryo Studio, a tool from Niryo that lets you move the robotic arm, teach it sequences and store them, and much more. You can program the robotic arm in Python or with a graphical editor based on Google’s Blockly. Niryo Studio is a great tool that makes getting started with robotics easy and pleasant.

Niryo Studio for Ubuntu

After this, I started the task of creating a snap with the ROS stack that controls the robotic arm. Snapcraft supports ROS, so this was not a difficult task: the catkin plugin takes care of almost everything. However, as happens with any non-trivial project, the Niryo stack had peculiarities that I had to address:

  • It uses a library called WiringPi which needs an additional part in the snap recipe.
  • GCC crashed when compiling on the RPi3, due to the device running out of memory. This is an issue known by Niryo that can be solved by using only two cores when building (this can be done by using -j2 -l2 make options). Unfortunately we do not have that much control when using Snapcraft’s catkin plugin. However, Snapcraft is incredibly extensible so we can resort to creating a local plugin by copying around the catkin plugin shipped with Snapcraft and doing the needed modifications. That is what I did, and the catkin-niryo plugin I created added the -j2 -l2 options to the build command so I could avoid the GCC crash.
  • There were a bunch of hard-coded paths that I had to change in the code. I also had to add some missing dependencies and make some other minor code changes. The resulting patches can be found here.
  • I also had to copy around some configuration files inside the snap.
  • Finally, there is also a Node.js package that needs to be included in the build. The nodejs plugin worked like a charm and that was easily done.

After addressing all these challenges, I was able to build the snap on an RPi3 device. The resulting recipe can be found in the niryo_snap repo on GitHub, which includes the (simple) build instructions. I went ahead and published it in the Snap Store under the name abeato-niryo-one. Note that the snap is not confined at the moment, so it needs to be installed with the --devmode option.

Then, I downloaded an Ubuntu Core image for the RPi3 and flashed it to an SD card. Before inserting it into the robotic arm’s RPi3, I used it with another RPi3 whose UART serial port I could attach to, so I could run console-conf. With this tool I configured the network and the Ubuntu One user for the image. Note that the Niryo stack tries to configure a WiFi AP for easier initial configuration, but that is not yet supported by the snap, so the networking configuration from console-conf determines how we will be able to connect to the robotic arm later.

At this point, snapd will possibly refresh the kernel and core snaps. That will lead to a couple of system reboots, after which those snaps will have been updated. After this, we need to modify some files used by the first-stage bootloader, because Niryo One needs some changes to the default GPIO configuration so the RPi3 can control all the attached sensors and motors. First, edit /boot/uboot/cmdline.txt, remove console=ttyAMA0,115200, and add plymouth.ignore-serial-consoles, so the content is:

dwc_otg.lpm_enable=0 console=tty0 elevator=deadline rng_core.default_quality=700 plymouth.ignore-serial-consoles

Then, add the following lines at the end of /boot/uboot/config.txt:

# For niryo

Now, it is time to install the needed snaps and perform connections:

snap install network-manager
snap install --devmode --beta abeato-niryo-one
snap connect abeato-niryo-one:network-manager network-manager

We have just installed and configured a full ROS stack with these simple commands!

The Niryo robotic arm in action

Finally, insert the SD card in the robotic arm, and wait until you see that the LED at the base turns green. After that you can connect to it using Niryo Studio in the usual way. You can now handle the robotic arm in the same way as when using the original image, but now with all the Ubuntu Core goodies: minimal footprint, atomic updates, confined applications, app store…

As an added bonus, the snap can also be installed on your x86 PC to use it in simulation mode. You just need to stop the service and start the simulation with:

snap install --devmode --beta abeato-niryo-one
snap stop --disable abeato-niryo-one
sudo abeato-niryo-one.simulation

Then, run Niryo Studio and connect to it – as simple as that, with no need at all to add the ROS archive and manually install lots of deb packages.

And this is it – as you can see, moving a ROS Debian-based project to Ubuntu Core and snaps is not difficult, and has great gains. Easy updates, security first, 10 years of updates, and much more, just a few keystrokes away!

The post Your first robotic arm with Ubuntu Core, coming from Niryo appeared first on Ubuntu Blog.

18 June, 2019 04:06PM

Elizabeth K. Joseph: Building a PPA for s390x

About 20 years ago a few clever, nerdy folks got together and ported Linux to the mainframe (s390x architecture). Reasons included “because it’s there” and others you’d expect from technology enthusiasts, but if you read far enough, you’ll learn that they also saw a business case, one which has been realized today. You can read more about that history in Linas Vepstas’ Linux on the IBM ESA/390 Mainframe Architecture.

Today the s390x architecture not only officially supports Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES), but there’s an entire series of IBM Z mainframes available that are devoted to only running Linux, that’s LinuxONE. At the end of April I joined IBM to lend my Linux expertise to working on these machines and spreading the word about them to my fellow infrastructure architects and developers.

Since s390x is its own architecture (not the x86 we’re accustomed to), compiled code needs to be recompiled in order to run on it. In the case of Ubuntu, the work has already been done to port a large chunk of the Ubuntu repository, so you can now run thousands of Linux applications on a LinuxONE machine. To do this effectively, there’s a team at Canonical responsible for this port, and they have access to an IBM Z server to do the compiling.

But the most interesting thing to you and me? They also lend the power of this machine to support community members, by allowing them to build PPAs as well!

By default, Launchpad builds PPAs for i386 and amd64, but if you select “Change details” of your PPA, you’re presented with a list of other architectures you can target.

Last week I decided to give this a spin with a super simple package: A “Hello World” program written in Go. To be honest, the hardest part of this whole process is creating the Debian package, but you have to do that regardless of what kind of PPA you’re creating and there’s copious amounts of documentation on how to do that. Thankfully there’s dh-make-golang to help the process along for Go packages, and within no time I had a source package to upload to Launchpad.
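For context, the program being packaged can be as small as this; the source below is a hypothetical stand-in, since the post doesn't show the actual code:

```shell
# Create a minimal Go "Hello World" like the one packaged for the PPA
# (file layout and contents are invented for illustration).
mkdir -p hello
cat > hello/main.go <<'EOF'
package main

import "fmt"

func main() {
	fmt.Println("Hello, world!")
}
EOF
cat hello/main.go
```

From a tree like this, dh-make-golang generates the debian/ packaging that is then uploaded to Launchpad as a source package.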

From there it was as easy as clicking the “IBM System z (s390x)” box under “Change details” and the builds were underway, along with build logs. Within a few minutes all three packages were built for my PPA!

Now, mine was the most simple Go application possible, so when coupled with the build success, I was pretty confident that it would work. Still, I hopped on my s390x Ubuntu VM and tested it.

It worked! But aren’t I lucky: as an IBM employee, I have access to s390x Linux VMs.

I’ll let you in on a little secret: IBM has a series of mainframe-driven security products in the cloud: IBM Cloud Hyper Protect Services. One of these services is Hyper Protect Virtual Servers, which is currently Experimental and which you can apply for access to. Once granted access, you can launch an Ubuntu 18.04 VM for free to test your application, or do whatever other development or isolation testing you’d like on a VM for a limited time.

If this isn’t available to you, there’s also the LinuxONE Community Cloud. It’s also a free VM that can be used for development, but as of today the only distributions you can automatically provision are RHEL or SLES. You won’t be able to test your deb package on these, but you can test your application directly on one of these platforms to be sure the code itself works on Linux on s390x before creating the PPA.

And if you’re involved with an open source project that’s more serious about a long-term, Ubuntu-based development platform on s390x, drop me an email at lyz@ibm.com so we can have a chat!

18 June, 2019 02:59PM

Santiago Zarate: Permission denied for hugepages in QEMU without libvirt

So, say you’re running qemu and decided to use hugepages. Nice, isn’t it? It helps with performance and stuff. However, a wild wall appears!

 QEMU: qemu-system-aarch64: can't open backing store /dev/hugepages/ for guest RAM: Permission denied

This basically means that you’re using the amazing -mem-path /dev/hugepages, and that QEMU, running as an unprivileged user, can’t write there… This is how it looked for me:

sudo -u _openqa-worker qemu-system-aarch64 -device virtio-gpu-pci -m 4094 -machine virt,gic-version=host -cpu host \ 
  -mem-prealloc -mem-path /dev/hugepages -serial mon:stdio  -enable-kvm -no-shutdown -vnc :102,share=force-shared \ 
  -cdrom openSUSE-Tumbleweed-DVD-aarch64-Snapshot20190607-Media.iso \ 
  -pflash flash0.img -pflash flash1.img -drive if=none,file=opensuse-Tumbleweed-aarch64-20190607-gnome-x11@aarch64.qcow2,id=hd0 \ 
  -device virtio-blk-device,drive=hd0

The machine tries to start, but ultimately I get that dreadful message. You can simply chmod the directory, or use a udev rule, and get away with it; that is quick and does the job. There are also a few options to solve this using libvirt. However, if you’re not using hugeadm to manage those pools, the operating system takes care of mounting /dev/hugepages for you (see /usr/lib/systemd/system/dev-hugepages.mount). Since trying to add a udev rule failed for a colleague of mine, I decided to use the systemd approach, ending up with the following:

[Unit]
Description=Systemd service to fix hugepages + qemu ram problems
Requires=dev-hugepages.mount
After=dev-hugepages.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/chmod o+w /dev/hugepages/

[Install]
WantedBy=multi-user.target

Save it, for example, as /etc/systemd/system/hugepages-fix.service (the unit name is my choice) and enable it with systemctl enable --now hugepages-fix.service.


18 June, 2019 12:00AM

June 17, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 583

Welcome to the Ubuntu Weekly Newsletter, Issue 583 for the week of June 9 – 15, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

17 June, 2019 10:21PM

Full Circle Magazine: Full Circle Weekly News #135

Linux Command Line Editors Vulnerable to High Severity Bug

KDE 5.16 Is Now Available for Kubuntu

Debian 10 Buster-based Endless OS 3.6.0 Linux Distribution Now Available

Introducing Matrix 1.0 and the Matrix.org Foundation

System 76’s Supercharged Gazelle Laptop is Finally Available

Lenovo Thinkpad P Laptops Are Available with Ubuntu

Atari VCS Linux-powered Gaming Console Is Now Available for Pre-order

Ubuntu “Complete” sound: Canonical
Theme Music: From The Dust – Stardust


17 June, 2019 03:46PM

hackergotchi for VyOS


VyOS Project 2019 - June update

The summer is often a slow news season, but we’ve got quite a few things to share. We’ve been slacking off on posting development news and keeping the community updated, so it’s time to fix it.

17 June, 2019 03:40PM by Daniil Baturin (daniil@sentrium.io)

hackergotchi for Ubuntu developers

Ubuntu developers

Simos Xenitellis: How to run LXD containers in WSL2

Microsoft announced in May that the new version of Windows Subsystem for Linux 2 (WSL 2) will be running on the Linux kernel, itself running alongside the Windows kernel in Windows.

In June, the first version of WSL2 was made available, provided you update your Windows 10 installation to the Windows Insider program and select to receive the bleeding-edge updates (fast ring).

In this post we are going to see how to get LXD running in WSL2. In a nutshell, LXD does not work out of the box yet, but it is versatile enough to be made to work even though the default Linux kernel in Windows is not fully suitable yet.


You need to have Windows 10, then join the Windows Insider program (Fast ring).

Then, follow the instructions on installing the components for WSL2 and switching your containers to WSL2 (if you have been using WSL1 already).

Install the Ubuntu container image from the Windows Store.

At the end, when you run wsl in CMD.exe or in Powershell, you should get a Bash prompt.

The problems

We are listing here the issues that do not let LXD run out of the box. Skip to the next section to get LXD going.

In WSL2, there is a modified Linux 4.19 kernel running in Windows, inside Hyper-V. It looks like this is a cut-down/optimized version of Hyper-V that is good enough for the needs of Linux.

The Linux kernel in WSL2 has a specific configuration, and some of the things that LXD needs are missing. Specifically, here is the output of lxc-checkconfig.

ubuntu@DESKTOP-WSL2:~$ lxc-checkconfig
 --- Namespaces ---
 Namespaces: enabled
 Utsname namespace: enabled
 Ipc namespace: enabled
 Pid namespace: enabled
 User namespace: enabled
 Network namespace: enabled

--- Control groups ---
 Cgroups: enabled

Cgroup v1 mount points:

Cgroup v2 mount points:

 Cgroup v1 systemd controller: missing
 Cgroup v1 clone_children flag: enabled
 Cgroup device: enabled
 Cgroup sched: enabled
 Cgroup cpu account: enabled
 Cgroup memory controller: enabled
 Cgroup cpuset: enabled

--- Misc ---
 Veth pair device: enabled, not loaded
 Macvlan: enabled, not loaded
 Vlan: missing
 Bridges: enabled, not loaded
 Advanced netfilter: enabled, not loaded
 CONFIG_NF_NAT_IPV4: enabled, not loaded
 CONFIG_NF_NAT_IPV6: enabled, not loaded
 CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
 FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
 checkpoint restore: enabled
 CONFIG_EPOLL: enabled
 File capabilities:

Note : Before booting a new kernel, you can check its configuration
 usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig


The systemd-related mount point is OK in the sense that currently systemd does not work anyway in WSL (either WSL1 or WSL2). At some point it will get fixed in WSL2, and there are pending issues on this at GitHub. Talking about systemd: we cannot yet use the snap package of LXD, because snapd depends on systemd, and no snapd means no snap package of LXD.

The missing netfilter kernel modules mean that we cannot use managed LXD network interfaces (the ones with the default name lxdbr0). If you try to create a managed network interface, you will get the following error.

Error: Failed to create network 'lxdbr0': Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables: No chain/target/match by that name.

For completeness, here is the LXD log. Notably, AppArmor is missing from the Linux kernel and there was no CGroup network class controller.

ubuntu@DESKTOP-WSL2:~$ cat /var/log/lxd/lxd.log
 t=2019-06-17T10:17:10+0100 lvl=info msg="LXD 3.0.3 is starting in normal mode" path=/var/lib/lxd
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Configured LXD uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="Couldn't find the CGroup network class controller, network limits will be ignored."
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel features:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - netnsid-based network retrieval: no"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - unprivileged file capabilities: yes"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Initializing local database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Starting /dev/lxd handler:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
 t=2019-06-17T10:17:14+0100 lvl=info msg="REST API daemon:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing global database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing storage pools"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing networks"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Loading daemon configuration"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating instance types"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating instance types"

Having said all that, let’s get LXD working.

Configuring LXD on WSL2

Let’s get a shell into WSL2.

C:\> wsl

The apt package of LXD is already available in the Ubuntu 18.04.2 image found in the Windows Store. However, the LXD service is not running by default, so we will need to start it.

ubuntu@DESKTOP-WSL2:~$ sudo service lxd start

Now we can run sudo lxd init to configure LXD. We accept the defaults (btrfs storage driver, 50GB default storage), but for networking we avoid creating the local network bridge and instead configure LXD to use an existing bridge. The existing-bridge option configures macvlan, which avoids the iptables error, although macvlan itself does not work yet in WSL2 either.

ubuntu@DESKTOP-WSL2:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=50GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    size: 50GB
  description: ""
  name: default
  driver: btrfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: macvlan
      parent: eth0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null


For some reason, LXD does not manage to mount sys for the containers, therefore we need to perform this ourselves.

ubuntu@DESKTOP-WSL2:~$ sudo mkdir /usr/lib/x86_64-linux-gnu/lxc/sys
ubuntu@DESKTOP-WSL2:~$ sudo mount sysfs -t sysfs /usr/lib/x86_64-linux-gnu/lxc/sys

The containers will not have direct Internet connectivity, therefore we need to use a Web proxy. In our case it suffices to use privoxy, so let’s install it. privoxy listens by default on port 8118, which means that if the containers can somehow get access to port 8118 on the host, they get access to the Internet!

ubuntu@DESKTOP-WSL2:~$ sudo apt update
ubuntu@DESKTOP-WSL2:~$ sudo apt install -y privoxy

Now, we are good to go! In the following we create a container with a Web server and view it using Internet Explorer. Yes, IE has two uses: 1. to download Firefox, and 2. to view the Web server in the LXD container as evidence that all this is real.

Setting up a Web server in a LXD container in WSL2

Let’s create our first container, running Ubuntu 18.04.2. Because macvlan is not working, it does not get an IP address from the network; the container has no Internet connectivity!

ubuntu@DESKTOP-WSL2:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

ubuntu@DESKTOP-WSL2:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+


The container has no Internet connectivity, so we need to give it access to port 8118 on the host. But how can we do that, if the container does not even have network connectivity with the host? We can do this using a LXD proxy device. Run the following on the host. The command creates a proxy device called myproxy8118 that proxies TCP port 8118 between the host and the container (the binding happens in the container because the port already exists on the host).

ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy8118 proxy listen=tcp:0.0.0.0:8118 connect=tcp:127.0.0.1:8118 bind=container
Device myproxy8118 added to mycontainer


Now, get a shell in the container and configure the proxy!

ubuntu@DESKTOP-WSL2:~$ lxc exec mycontainer bash
root@mycontainer:~# export http_proxy=http://localhost:8118/
root@mycontainer:~# export https_proxy=http://localhost:8118/
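Those exports only last for the current shell. To make apt pick up the proxy from any session, one option is an apt configuration snippet; this is a sketch, written here to the current directory, whereas on the container the file would live at /etc/apt/apt.conf.d/95proxy (the filename is an arbitrary choice):

```shell
# Sketch: persist the proxy settings for apt inside the container.
# On the container this file belongs in /etc/apt/apt.conf.d/.
cat > 95proxy <<'EOF'
Acquire::http::Proxy "http://localhost:8118/";
Acquire::https::Proxy "http://localhost:8118/";
EOF
cat 95proxy
```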

It’s time to install and start nginx!

root@mycontainer:~# apt update
root@mycontainer:~# apt install -y nginx
root@mycontainer:~# service nginx start

nginx is installed. For a finer touch, let’s edit the default HTML file of the Web server a bit, so that it is evident that the Web server runs in the container. Add some text you think suitable, using the command

root@mycontainer:~# nano /var/www/html/index.nginx-debian.html

Up to now, there is a Web server running in the container. This container is not accessible by the host, and obviously not by Windows either. So, how can we view the website from Windows? By creating an additional proxy device. The command creates a proxy device called myproxy80 that proxies TCP port 80 between the host and the container (the binding happens on the host because the port already exists in the container).

root@mycontainer:~# logout
ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

Finally, find the IP address of your WSL2 Ubuntu host (hint: use ifconfig) and connect to that IP using your Web browser.


We managed to install LXD in WSL2 and got a container to start. Then, we installed a Web server in the container and viewed the page from Windows.

I hope future versions of WSL2 will be more friendly to LXD. In terms of the networking, there is need for more work to make it work out of the box. In terms of storage, btrfs is supported (over a loop file) and it is fine.

17 June, 2019 02:22PM

hackergotchi for Purism PureOS

Purism PureOS

The New libhandy 0.0.10

Libhandy 0.0.10 just got released, and you can get this new version here. It comes with a few new adaptive widgets for your GTK app we’d like to tell you about:

The View Switcher

GNOME applications typically use a GtkStackSwitcher to switch between their views. This design works fine on a desktop, but not so well on really narrow devices like mobile phones, so Tobias Bernard designed a more modern and adaptive replacement – now available in libhandy as the HdyViewSwitcher.

Adaptive view switcher

In many ways, the HdyViewSwitcher functions very similarly to a GtkStackSwitcher: you assign it a GtkStack containing your application’s pages, and it will display a row of side-by-side, homogeneously-sized buttons, each one representing a page. It differs in that it can display both the title and the icon of your pages, and that the layout of the buttons automatically adapts to a narrower version, depending on the available width. We have also added a view switcher bar, designed to be used at the bottom of the window: HdyViewSwitcherBar (and we’d like to thank Zander Brown for the prototypes!).

The Squeezer

To complete the view switcher design, we needed a way to automatically switch between having a view switcher in the header bar, and a view switcher bar at the bottom of the window.

We added HdySqueezer; give it widgets, and it shows the first one that fits in the available space. A common way to use it would be:

<object class="GtkHeaderBar">
  <property name="title">Application</property>
  <child type="title">
    <object class="HdySqueezer">
      <property name="transition-type">crossfade</property>
      <signal name="notify::visible-child" handler="on_child_changed"/>
      <child>
        <object class="HdyViewSwitcher" id="view_switcher">
          <property name="stack">pages</property>
        </object>
      </child>
      <child>
        <object class="GtkLabel" id="title_label">
          <property name="label">Application</property>
          <style>
            <class name="title"/>
          </style>
        </object>
      </child>
    </object>
  </child>
</object>
In the example above, if there is enough space the view switcher will be visible in the header bar; if not, a widget mimicking the window’s title will be displayed. Additionally, you can reveal or conceal a HdyViewSwitcherBar at the bottom of your window, depending on which widget is presented by the squeezer, and show a single view switcher at a time.

Another Header Bar?

To make the view switcher work as intended, we need to make sure it is always strictly centered; we also need to make sure the view switcher fills all the height of the header bar. Both of these are unfortunately not possible with GtkHeaderBar in GTK 3, so I forked it as HdyHeaderBar to, first, make sure it does not force its title widget to be vertically centered, and hence to allow it to fill all the available height; and second, to allow for choosing between strictly or loosely centering its title widget (similarly to GtkHeaderBar).

The Preferences Window

To simplify writing modern, adaptive and featureful applications, I wrote a generic preferences window you can use to implement your application’s preferences window: HdyPreferencesWindow – and organized it this way:

• the window contains pages implemented via HdyPreferencesPage;

• pages have a title, and contain preferences groups implemented via HdyPreferencesGroup;

• groups can have a title, a description, and preferences implemented via rows (HdyPreferencesRow) or any other widget;

• preferences implemented via HdyPreferencesRow have a name, and can be searched via their page title, group title or name;

• HdyActionRow is a derivative of HdyPreferencesRow, so you can use it (and its derivatives) to easily implement your preferences.
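As a rough illustration of that hierarchy, a hypothetical GtkBuilder fragment (the page, group, and row titles are invented) could nest the widgets like this:

```xml
<object class="HdyPreferencesWindow">
  <child>
    <object class="HdyPreferencesPage">
      <property name="title">General</property>
      <child>
        <object class="HdyPreferencesGroup">
          <property name="title">Appearance</property>
          <child>
            <object class="HdyActionRow">
              <property name="title">Dark theme</property>
            </object>
          </child>
        </object>
      </child>
    </object>
  </child>
</object>
```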

The next expected version of libhandy is libhandy 1.0. It will come with quite a few API fixes, which is why a major version number bump is required. libhandy’s API has been stable for many versions now, and we will guarantee that same stability starting from version 1.0.

The post The New libhandy 0.0.10 appeared first on Purism.

17 June, 2019 12:15PM by Adrien Plazas

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: So That Happened...

I previously made a call for folks to check in on a net so I could count heads. It probably was not the most opportune timing but it was what I had available. You can listen to the full net at https://archives.anonradio.net/201906170000_sdfarc.mp3 and you'll find my after-net call to all Ubuntu Hams at roughly 44 minutes and 50 seconds into the recording.

This was a first attempt. The folks at SDF were perfectly fine with me making the attempt. The net topic for the night was "special projects" we happened to be undertaking.

Now you might wonder what I might be doing in terms of special projects. That bit is special. Sunspots are a bit non-existent at the moment so I have been fiddling around with listening for distant stations on the AM broadcast band, which starts in the United States at 530 kHz and ends at 1710 kHz. From my spots in Ashtabula I end up hearing some fairly distant stations ranging from KYW 1060 in Philadelphia to WCBS 880 in New York City to WPRR 1680 in Ada, Michigan. When I am out driving Interstate Route 90 in the mornings during the winter I have had the opportunity to hear stations such as WSM 650 broadcasting from the vicinity of the Grand Ole Opry in Nashville, Tennessee. One time I got lucky and heard WSB 750 out of Atlanta while driving when conditions were right.

These were miraculous feats of physics. WolframAlpha would tell you that the distance between Ashtabula and Atlanta is about 593 miles/955 kilometers. In the computing realm we work very hard to replicate the deceptively simple. A double-sideband non-suppressed carrier amplitude modulated radio signal is one of the simplest voice transmissions that can be made. The receiving equipment for such is often just as simple. For all the infrastructure it would take to route a live stream over a distance somewhat further than that between Derry and London proper, far less would be needed for the one-way analog signal.

Although there is Digital Audio Broadcasting across Europe, we really still do not have it adopted across much of the United States. A primary problem is that it works best in areas with higher population density than we have in the USA. So far we have various trade names for IBOC (that is to say, in-band on-channel) subcarriers giving us hybrid signals. Digital-only IBOC has been tested at WWFD in Maryland, and there was a proposal to the Federal Communications Commission to make a permanent rules change to make this possible. It appears in the American Experience, though, that the push is more towards Internet-connected products like iHeartRadio and Spotify than towards the legacy media outlets that have public service obligations as well as emergency alerting obligations.

I am someone who considers the Internet fairly fragile as evidenced most recently by the retailer Target having a business disaster through being unable to accept payments due to communications failures. I am not against technology advances, though. Keeping connections to the technological ways of old as well as sometimes having cash in the wallet as well as knowing how to write a check seem to be skills that are still useful in our world today.

Creative Commons License
So That Happened... by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

17 June, 2019 02:33AM

Bryan Quigley: Hack Computer review

I bought a hack computer for $299 - it's designed for teaching 8+ year olds programming. That's not my intended use case, but I wanted to support a Linux pre-installed vendor with my purchase (I bought an OLPC back in the day in the buy-one give-one program).

I only use a laptop for company events, which are usually 2-4 weeks a year. Otherwise, I use my desktop. I would have bought a machine with Ubuntu pre-installed if I was looking for more of a daily driver.

The underlying specs of the ASUS Laptop E406MA they sell are:

Unboxing and first boot


Included was an:

  • introduction letter to parents
  • tips (more for kids)
  • 2 pages of hack stickers
  • 2 hack pins
  • ASUS manual bits
  • A USB to Ethernet adapter
  • and the laptop:

Laptop in sleeve Laptop out of sleeve first open

First boot takes about 20 seconds, and you are then dropped into what I'm pretty sure is GNOME Initial Setup. It also asks, for Wifi connections, whether they are metered or not.

first open

There are standard Phillips-head screws on the bottom of the laptop, but it wasn't easy to remove the bottom and I didn't want to force it; I've been told there is nothing user-replaceable within.


The options I'd like change are there, and updating the BIOS was easy enough from the BIOS (although no LVFS support..).

bios ez mode bios advanced

A kids take

Keep in mind this review was done by a 6-year-old, while the laptop is designed for ages 8+.

He liked playing the art game and the ball game. The ball game is an intro to the Hack content. The art game is just Krita; see the artwork below. The first load needed some help, but he got the hang of the symmetrical tool.

He was able to install an informational program about Football by himself, though he was hoping it was a game to play.

AAAAA my favorite water color


For the target market: it's really the perfect first laptop (if you want to buy new), with what I would generally consider the right trade-offs. Given Endless OS's ability to ship great content pre-installed, I might have tried to go for a 128 GB drive. Endless OS is set up to use zram, which will minimize RAM issues as much as possible. The core paths are designed for kids, but some applications are definitely not. It will be automatically updating and improving over time. I can't evaluate the actual Hack content, whose first year is free; after that it will be $10 a month.

For people who want a cheap Linux pre-installed laptop: I don't think you can do better than this for $299.


Pros:

  • The CPU really seems to be the best in this price range: a real Intel quad-core, yet cheap enough to have missed some of the vulnerabilities that have plagued Intel (no HT)
  • Battery life is great
  • A 1080p screen


Cons:

  • RAM and disk sizes. Slow eMMC disk. Not upgradeable
  • Fingerprint reader doesn't work today (though that's not part of their goal with the machine; it defaults to no password)
  • For free software purists: Trisquel didn't have working wireless or trackpad. The included USB-to-Ethernet adapter worked, though
  • Mouse can lack sensitivity at times
  • Ubuntu: I have had Wifi issues after suspend, but stopping and starting Wifi fixed them
  • Ubuntu: Boot times are slower than Endless
  • Ubuntu: Suspend sometimes loses the ability to play sound (gets stuck on headphones)

I do plan on investigating the issues above to see if I can fix any of them.

Using Ubuntu?

My recommendations:

  • Purge rsyslog (may speed up boot time and reduce unnecessary writes)
  • For this class of machine, I'd go deb only (remove snaps) and manual updating
  • Install zram-config
  • I'm currently running with Wayland and Chromium
  • If you don't want to use stock Ubuntu, I'd recommend Lubuntu.

Dive deeper

17 June, 2019 12:00AM

June 15, 2019

hackergotchi for LiMux


Invitation to a free bike tour through the Smarter Together project area

Are you interested in pioneering smart city projects, and do you enjoy getting around by bike? Then we have a great tip for you: on 28 June there will be a bike tour through the Neuaubing-Westkreuz/Freiham project area. … Continue reading

The post Invitation to a free bike tour through the Smarter Together project area first appeared on the Münchner IT-Blog.

15 June, 2019 06:06AM by Lisa Zech

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: Conversations With Bacon: Kate Drane, Techstars

Kate Drane is a bit of an enigma. She helped launch hundreds of crowdfunding projects at Indiegogo (in fact, I worked with her on the Ubuntu Edge and Global Learning XPRIZE campaigns). She has helped connect hundreds of startups to expertise, capital, and customers at Techstars, and is a beer fan who co-founded a canning business called The Can Van.

There is one clear thread through her career: providing more efficient and better access for innovators, no matter what background they come from or what they want to create. Oh, and drinking great beer. She is fantastic and does great work.

In this episode of Conversations With Bacon we unpack her experiences of getting started in this work, her work facilitating broader access to information, funding, and people, what it was like to be at Indiegogo through the teenage years of crowdfunding, how she works to support startups, the experience of entrepreneurship from different backgrounds, and more.


   Listen on Google Play Music

The post Conversations With Bacon: Kate Drane, Techstars appeared first on Jono Bacon.

15 June, 2019 01:14AM

June 14, 2019

Jonathan Riddell: KDE.org Description Update

The KDE Applications website was the minimal possible change needed to move it from an unmaintained and incomplete site to a self-maintaining and complete one.  It’s been fun to see it get picked up in places like Ubuntu Weekly News and Late Night Linux, and when chatting to people in real life they have seen it get an update. So clearly it’s important to keep our websites maintained.  Alas, the social and technical barriers are too high in KDE.  My current hope is that the Promo team will take over the kde-www stuff, giving it communication channels and transparency that don’t currently exist.  There is plenty more work to be done on the kde.org/applications website to make it useful, so do give me a ping if you want to help out.

In the mean time I’ve updated the kde.org front page text box where there is a brief description of KDE.  I remember a keynote from Aaron around 2010 at Akademy where he slagged off the description that was used on kde.org.  Since then we have had Visions and Missions and Goals and whatnot defined but nobody has thought to put them on the website.  So here’s the new way of presenting KDE to the world:

Thanks to Carl and others for review.


14 June, 2019 12:52PM

hackergotchi for Qlustar


Qlustar 11 released

The Qlustar team is pleased to announce the immediate availability of Qlustar 11.0.0 for download. It updates Qlustar's core platform to the current Ubuntu 18.04 LTS. The CentOS edge platform is now based on 7.6, with full integration of the just-released OpenHPC 1.3.8.

As a result of our continuous platform optimization/simplification process, we moved to dnsmasq as a replacement for the previously used ISC DHCP and atftp TFTP servers. dnsmasq also provides cluster-internal name services (DNS), replacing the NIS hosts map, and acts as a DNS proxy.

In addition to the dnsmasq management interface, the second major new feature of QluMan is the ability to manage network filesystem resources. Initially this supports NFS mounts, including RDMA connections, and a mechanism to automatically choose the optimal network path to the NFS server. Mount resources are implemented as systemd automount units. This new interface replaces the previously used automount daemon, which is now deactivated by default.
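The automount-unit approach can be sketched generically. The unit snippets below are illustrative only: the paths, server name and mount options are assumptions based on systemd's documented unit syntax, not Qlustar's actual generated configuration.

```ini
# /etc/systemd/system/home.mount -- describes the filesystem itself
[Unit]
Description=NFS home directories

[Mount]
What=nfs-server.example:/export/home
Where=/home
Type=nfs
Options=proto=rdma,port=20049

# /etc/systemd/system/home.automount -- mounts /home on first access
[Automount]
Where=/home

[Install]
WantedBy=multi-user.target
```

Enabling home.automount makes systemd mount the share lazily on first access, which is the behaviour the old automount daemon used to provide.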

Highlights among the various major component updates include Kernel 4.19.x, Slurm 18.08.x, CUDA 10.1, OpenMPI 4.0.1 and BeeGFS 7.1.3. Please read the release notes for more details.

14 June, 2019 10:28AM by root

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours have been dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).

Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 hours funded per month. We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


14 June, 2019 07:20AM

Ted Gould: Development in LXD

Most of my development is done in LXD containers. I love this for a few reasons. It takes all of my development dependencies and makes it so that they're not installed on my host system, reducing the attack surface there. It means that I can do development on any Linux that I want (or several). But it also means that I can migrate my development environment from my laptop to my desktop depending on whether I need more CPU or whether I want it to be closer to where I'm working (usually when travelling).

When I'm traveling I use my Pagekite SSH setup on a Raspberry Pi as the SSH gateway. So when I'm at home I want to connect to the desktop directly, but when away connect through the gateway. To handle this I set up SSH to connect into the container no matter where it is. For each container I have an entry in my .ssh/config like this:

Host container-name
    User user
    IdentityFile ~/.ssh/id_container-name
    CheckHostIP no
    ProxyCommand ~/.ssh/if-home.sh desktop-local desktop.pagekite.me %h

You'll notice that I use a different SSH key for each container. They're easy to generate, and not reusing them across containers is good practice. Then for the ProxyCommand I have a shell script that'll set up a connection depending on where the container is running and what network my laptop is on.
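As an aside on those per-container keys: generating one is a single command. A minimal sketch (the filename just has to match the IdentityFile entry in the config above; the empty passphrase is an illustrative choice, not the author's stated one):

```shell
# Create a dedicated ed25519 keypair for one container.
# -N "" gives a passphrase-less key; use a passphrase if preferred.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_container-name
```

The matching .pub file then gets appended to ~/.ssh/authorized_keys inside the container.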


set -e

# Arguments, as passed by the ProxyCommand above:
#   $1 - host to use when on the home network
#   $2 - host to use when away (the Pagekite gateway)
#   $3 - container name (%h)
HOME_HOST=$1
AWAY_HOST=$2
CONTAINER_NAME=$3

# MAC address of the home router (actual value redacted in this post)
HOME_ROUTER_MAC="xx:xx:xx:xx:xx:xx"

# Which router do we use to reach the Internet, and what is its MAC?
# (Any stable Internet address works as the probe destination here.)
ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

# Command that looks up the container's IP, and the nc used to reach it
IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
NC_COMMAND="nc -6 -q0"

# If the container is running locally, connect to its SSH port directly
IP=$( bash -c "${IP_COMMAND}" )
if [ "${IP}" != "" ] ; then
    # Local
    exec ${NC_COMMAND} ${IP} 22
fi

# Otherwise pick the gateway based on which network the laptop is on
if [ "${HOME_ROUTER_MAC}" = "${ROUTER_MAC}" ] ; then
    SSH_HOST=${HOME_HOST}
else
    SSH_HOST=${AWAY_HOST}
fi

# Ask the desktop for the container's IP, then proxy to its SSH daemon
IP=$( echo ${IP_COMMAND} | ssh ${SSH_HOST} bash -l -s )

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\""

What this script does is first try to see if the container is running locally, by looking for its IP:

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"

If it can find that IP, then it just sets up nc command to connect to the SSH port on that IP directly. If not, we need to see if we're on my home network or out and about. To do that I check to see if the MAC address of the default router matches the one on my home network. This is a good way to check because it doesn't require sending additional packets onto the network or otherwise connecting to other services. To get the router's IP we look at which router is used to get to an address on the Internet:

ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )

We can then find out the MAC address for that router using the ARP table:

ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

If that MAC address matches a predefined value (redacted in this post), I know that it's my home router; otherwise I'm out on the Internet somewhere. Depending on the case, I know whether I need to go through the proxy or whether I can connect directly. Once we can connect to the desktop machine, we can look for the IP address of the container from there, using the same IP command running on the desktop. Lastly, we set up nc to connect to the SSH daemon using the desktop as a proxy.

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\"" 

What all this means is that I can just type ssh container-name anywhere and it just works. I can move my containers wherever, my laptop wherever, and connect to my development containers as needed.

14 June, 2019 12:00AM

June 13, 2019

Julian Andres Klode: Encrypted Email Storage, or DIY ProtonMail

In the previous post about setting up an email server, I explained how I set up a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server, using GPG encryption to make sure intruders can’t read our precious data!


The basic architecture chosen for encrypted storage is that every incoming email is delivered by Postfix to Dovecot via LMTP, and then Dovecot runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: this means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix deleted it, by examining the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. I'm not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signature as much as we want! Makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it into a multipart MIME message as the second attachment; the first attachment carries control information.

Technically, this means that we keep headers twice, once encrypted and once decrypted. But the advantage compared to doing it more like most normal clients is clear: The code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts: the constructor arguments lost in this post's
    # formatting are reconstructed here following PGP/MIME (RFC 3156);
    # the armored ciphertext becomes the second MIME part
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream; name="msg.asc"',
    )
    enc['Content-Disposition'] = 'inline; filename="msg.asc"'

    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
    )
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together: multipart/encrypted, control part first
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted',
    )
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}

    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: just pass the entire thing to GPG; it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(now, not sure if it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you).

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).

Pretty Easy privacy (p≡p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≡p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≡p and add a header describing the format version:

       Subject: =?utf-8?Q?p=E2=89=A1p?=
       X-Pep-Version: 2.0

A client conforming to p≡p will, when seeing this email, read any headers from the inner (encrypted) message.
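As a quick sanity check, that encoded subject can be decoded with nothing but Python's standard library (the encoded word carries the ≡ glyph from the project's name):

```python
from email.header import decode_header

# Decode the RFC 2047 encoded-word used for the special subject line.
raw, charset = decode_header('=?utf-8?Q?p=E2=89=A1p?=')[0]
print(raw.decode(charset))  # prints the three-character marker: p≡p
```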

We also might want to change the code to only copy a limited amount of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.

Putting it together

Assume we have a Postfix and a Dovecot configured, and a script gpgmymail written using the function above, like this:

def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))

if __name__ == '__main__':
    main()
(don’t forget to add missing imports, or see the end of the blog post for links to full source code)

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).
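The "detail" is simply whatever sits between the + and the @ in the Bcc address; as a quick illustration in plain Python (the address is the example from the text):

```python
# Split a subaddressed recipient like user+detail@host into its pieces.
addr = "myemail+sent@myserver.example"
local, _, domain = addr.partition("@")
user, _, detail = local.partition("+")
print(user, detail)  # prints: myemail sent
```

The sieve filter would then match on that detail part to file the message away.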

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≡p with an encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

13 June, 2019 08:47PM


hackergotchi for Kali Linux

Kali Linux

WSL2 and Kali

Kali Linux has had support for WSL for some time, but its usefulness has been somewhat limited. This was mostly due to restrictions placed on some system calls, most importantly those revolving around networking. Furthermore, additional issues with speed, specifically I/O, were also problematic. Because of this, Kali WSL has mostly been relegated to reporting functions after an assessment is completed. A cool technology, and certainly an amazing engineering feat, but as is, it just was not that useful in the field.

When WSL 2 was announced, however, we were excited about what this could mean for actually making Kali WSL useful in the field. As such, when we saw that WSL 2 was available in the Windows Insiders program, we wanted to jump right on it and see what improvements had been made.

WSL2 Conversion

After you have the new Windows Insider build installed, converting Kali WSL 1 to 2 is very easy.

This was a great surprise for us, as it also means we don’t have to do anything on our end to support WSL 2. Kali’s current WSL distribution will work just fine, and you can convert your existing installation easily. According to the docs, you can also set WSL 2 as your default if you don’t have Kali installed yet.
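For reference, the conversion is driven from the Windows side with wsl.exe; per Microsoft's WSL documentation the commands look like this (assuming the distribution is registered under the name kali-linux):

```
# Run from PowerShell or CMD on the Windows host, not inside Kali:
wsl --set-version kali-linux 2

# Optionally make WSL 2 the default for future installations:
wsl --set-default-version 2
```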

Overall, this was a great surprise, and means Kali is ready for WSL 2 today.

Kali WSL 2 Usage

Ok, so WSL 2 works with Kali, but is it useful? We are just starting to play with WSL 2, so it’s really too early to say. However, we do have a few quick observations.

Basic usage, such as updating Kali and installing packages, appears to work just fine.

However, simply installing something is not that interesting. The question is: does it work? One specific tool we wanted to check immediately was Nmap, which has always been a WSL pain point. As you can see from the screenshot, a basic Nmap scan works right out of the box! That’s great news and is very promising for WSL 2 as it continues development.

That should not be a great surprise however, as WSL 2 at its core is really a low overhead and optimized VM. This has brought about some changes for those of us who have been using WSL for a while. These changes fall mostly along the lines of process spaces, networking, and filesystem interaction. This brings up some items we will have to watch as WSL continues to mature.

All networking appears to be NATed in the current release.

Microsoft states:

In the initial builds of the WSL 2 preview, you will need to access any Linux server from Windows using the IP address of your Linux distro, and any Windows server from Linux using the IP address of your host machine. This is something that is temporary, and very high on our priority list to fix.

So, no bridged mode. Anyone who uses Kali in a VM knows that for actual assessment work it’s always better to run Kali in bridged mode, not NAT. With the current release, reverse shells are really not going to be an easy option without playing around with port forwarding on the Windows side. Additionally, we don’t yet know the strength of the NAT engine. While scans run through WSL 2 are now possible, their results will remain questionable until we find out how much the NAT engine impacts them.

As it is in a VM, the process space is separate.

This is interesting, as it might actually open up Kali WSL 2 to be a useful endpoint-protection bypass. If you get code execution on a Windows 10 system that supports WSL 2, could you install a Kali instance and pivot from there instead of the base operating system? This remains to be seen, as this is still in development and Microsoft seems to want to unify the Linux and Windows experience as much as possible. Endpoint protection programs might become “WSL aware”, which makes this an interesting item to watch.

WSL 2’s filesystem is now in a virtual disk.

Similar to traditional VMs, there is now a virtual disk that holds the WSL 2 instance. In the past, one of the WSL issues that would come up is that many Kali tools would trigger anti-virus protections. To keep Kali WSL useful you would have to make exclusions for the location in which the Kali files were saved on the Windows filesystem.

Now that it’s in a virtual disk, much like the process space isolation, it will remain to be seen how AV might deal with it. Currently, it appears that AV ignores this virtual disk and its contents but as WSL reaches general availability it is possible AV products will become WSL 2 aware. Again, something we will need to watch.


As it stands, WSL 2 is an exciting technology and most definitely worth paying attention to. This is the first public beta and a lot will change over time. As such, we will track its development and see what we can do to make WSL 2 more useful for our purposes. Even now, however, it already seems more useful than what we experienced with WSL 1 for actual production use. WSL 1 is still supported on a WSL 2 system, though, so if you are a WSL user you can pick what’s best for you.

13 June, 2019 04:47PM by elwood

hackergotchi for Maemo developers

Maemo developers

WIP: changing the backend for contacts in Ubports

More than one year has passed since the initial announcement of my plan to investigate using a different backend for contact storage. If you want to get a better understanding of the plan, that mail is still a good read -- not much has changed since then, planning-wise.

The reason for this blog post is to give a small update on what has happened since then, and as a start nothing can be better than a couple of screenshots:

Adding CardDAV accounts in the Addressbook application
Aggregated contact details from multiple sources

In other words, contact synchronisation works, both with the new CardDAV protocol (for which we'll have preconfigured setups for NextCloud and OwnCloud accounts) and with Google Contacts, for which we are now using a different engine. What you see in the second screenshot (although indeed it's not obvious at all) is that the new qtcontacts-sqlite backend performs automatic contact merging based on some simple heuristics, meaning that when synchronising the same contact from multiple sources you should not find a multitude of semi-identical copies of the contact, but a single one having all the aggregated details.

Before you get too excited, I have to say that this code is pre-alpha quality and that it's not even available for testing yet. The next step is indeed to set up CI so that the packages get automatically built and published to a public repository, at which point I'll probably issue another update here in my blog.

The boring stuff

And now some detail for those who might wonder why this feature is not ready yet, or would like to get an idea on the time-frame for its completion.

Apart from a chronic lack of time on my part, the feature's complexity is due to the large number of components involved:

  • qtcontacts-sqlite: the QtContacts backend we are migrating to. This is a backend for the QtContacts API (used by our Addressbook application) which uses a SQLite database as storage for your contacts.
  • buteo-sync-plugin-carddav: the CardDAV plugin for Buteo (our synchronisation manager). This plugin is loaded by Buteo and synchronises the contacts between a CardDAV remote source and the qtcontacts-sqlite database.
  • buteo-sync-plugins-social: a Buteo plugin which can synchronise contacts from a multitude of sources, including Google, Facebook and Vk. At the moment we only care about Google, but once this feature has landed we can easily extend it to work with the other two as well.
  • address-book-app: this is our well-known Contacts application. It needs some minor changes to adapt to the qtcontacts-sqlite backend and to support the creation of new CardDAV, NextCloud and OwnCloud accounts.
  • QtPim: the contacts and calendar API developed by the Qt project. Our Contacts application is using the front-end side of this API, and the qtcontacts-sqlite component implements the backend side. There are some improvements proposed by Jolla, which we need to include in order to support grouping contacts by their initials.

The other tricky aspect is that the first three projects are maintained by Jolla as part of Sailfish OS, and while on one side this means that we can share the development and maintenance burden with Jolla, on the other side of the coin it means that we need to apply extra care when submitting changes, in order not to step on each other's toes. Specifically, Sailfish OS is using a much older version of QtPim than Ubports is, and the APIs have changed between the two versions in an incompatible way, so that it's nearly impossible to have a single code base working with both versions of QtPim. Luckily git supports branches, and Chris from Jolla was kind enough to create a branch for us in their upstream repository, where I've proposed our changes (and they are a lot!).

However, this is not as bad as it sounds, and the fact that I have a roughly working version on my development device is a good sign that things are moving forwards.


13 June, 2019 02:59PM by Alberto Mardegan (mardy@users.sourceforge.net)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S12E10 – Salamander

This week we’ve been playing with tiling window managers, we “meet the forkers”, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been playing with i3wm.
  • We “meet the forkers”; when projects end, forks are soon to follow.

  • We share a command line lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image taken from Salamander arcade machine manufactured in 1986 by Konami.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

13 June, 2019 02:00PM

hackergotchi for LiMux


Election simulation 2019 – a look behind the scenes of the IT

The European election in a big city like Munich is a mega-project for the city administration. It would not be possible without the many volunteer election workers. But running such an election without IT support is just as hard to imagine. Even … Read more

The post Wahlsimulation 2019 – Ein Blick hinter die Kulissen der IT first appeared on the Münchner IT-Blog.

13 June, 2019 01:34PM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: New release: Vanilla framework 2.0

Over the past year, we’ve been working hard to bring you the next release of Vanilla framework: version 2.0, our most stable release to date.

Since our last significant release, v1.8.0 back in July last year, we’ve been adding new features and improving the framework to make this the most stable version we’ve released.

You can see the full list of new and updated changes in the framework in the full release notes.

New to the Framework


The release has too many changes to list them all here but we’ve outlined a list of the high-level changes below.

The first major change was removing the Shelves grid, which had been in the framework since the beginning, and reimplementing the functionality with CSS grid. A native CSS solution has given us more flexibility with layouts. While working on the grid, we also upped the base grid max-width value from 990px to 1200px, following trends in screen sizes and resolutions.

We revisited vertical spacing with a complete overhaul of what we implemented in our previous release. Now, most element combinations correctly fit the baseline vertical grid without the need to write custom styles.

To further enforce code quality and control, we added a Prettier dependency with a pre-commit hook, which led to extensive code quality updates when it was first run. On the dependency front, we’ve also added Renovate to the project to help keep dependencies up-to-date.

If you would like to see the full list of features you can look at our release notes, but below we’ve captured quick wins and big changes to Vanilla.

  • Added a script for developers to analyse individual patterns with Parker
  • Updated the max-width of typographic elements
  • Broke up the large _typography.scss file into smaller files
  • Standardised the naming of spacing variables to use intuitive (small/medium/large) naming where possible
  • Increased the allowed number of media queries in the project to 50 in the parker configuration
  • Adjusted the base font size so that it respects browser accessibility settings
  • Refactored all *.scss files to remove sass nesting when it was just being used to build class names – files are now flatter and have full class names in more places, making the searching of code more intuitive

Components and utilities

Two new components have been added to Vanilla in this release: `p-subnav` and `p-pagination`. We’ve also added a new `u-no-print` utility to exclude web-only elements from printed pages.

New components to the framework: Sub navigation and Pagination.
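As a small sketch of the new print utility mentioned above (the markup is illustrative; only the class name comes from this release):

```html
<!-- Elements with u-no-print are visible on screen but omitted in print -->
<a class="u-no-print" href="#top">Back to top</a>
```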

Removed deprecated components

As we extend the framework, we find that some of our older patterns are no longer needed or are used very infrequently. In order to keep the framework simple and to reduce the file size of the generated CSS, we try to remove unneeded components when we can. As core patterns improve, it’s often the case that overly-specific components can be built using more flexible base components.

  • p-link--strong: this was a mostly-unused link variant which added significant maintenance overhead for little gain
  • p-footer: this component wasn’t flexible enough for all needs and its layout is achievable with the much more flexible Vanilla grid
  • p-navigation--sidebar: this was not widely used and can be easily replicated with other components

Documentation updates


During this cycle we improved the content structure for each component: every page now follows a template with a hierarchy and grouping of component styles, dos and don’ts of usage, and accessibility rules. In doing so, we also updated the examples to showcase real use cases from across our marketing sites and web applications.

Updated Color page on our documentation site.

As well as updating content structure across all component pages, we also made other minor changes to the site listed below:

  • Added new documentation for the updated typographic spacing
  • Documented pull-quote variants
  • Merged all “code” component documentation to allow easier comparison
  • Changed the layout of the icons page


In addition to framework and documentation content, we still managed to make time for some updates on vanillaframework.io. Below is a list of high-level items we completed to help users navigate our site:

  • Updated the navigation to match the rest of the website
  • Added Usabilla user feedback widget
  • Updated the “Report a bug” link
  • Updated mobile nav to use two dropdown menus grouped by “About” and “Patterns” rather than having two nav menus stacked
  • Restyled the sidebar and the background to light grey

Bug fixes

As well as bringing lots of new features and enhancements, we continue to fix bugs to keep the framework up-to-date. Going forward, we plan to improve our release process by pushing more frequent patch releases, to help teams with bugs that may be blocking feature deliverables.

Getting Vanilla framework

To get your hands on the latest release, follow the getting started instructions, which include all options for using Vanilla.

The post New release: Vanilla framework 2.0 appeared first on Ubuntu Blog.

13 June, 2019 08:52AM

Canonical Design Team: Customisable for the enterprise: the next-generation of drones

Drones, and their wide-ranging uses, have been a constant topic of conversation for some years now, but we’re only just beginning to move away from the hypothetical and into reality. The FAA estimates that there will be 2 million drones in the United States alone in 2019, as adoption within the likes of distribution, construction, healthcare and other industries accelerates.

Driven by this demand, Ubuntu – the most popular Linux operating system for the Internet of Things (IoT) – is now available on the Manifold 2, a high-performance embedded computer offered by leading drone manufacturer, DJI. The Manifold 2 is designed to fit seamlessly onto DJI’s drone platforms via the onboard SDK and enables developers to transform aerial platforms into truly smarter drones, performing complex computing tasks and advanced image processing, which in-turn creates rapid flexibility for enterprise usage.

As part of the offering, the Manifold 2 is planning to feature snaps. Snaps are containerised software packages, designed to work perfectly across cloud, desktop, and IoT devices – with this the first instance of the technology’s availability on drones. The ability to add multiple snaps means a drone’s functionality can be altered, updated, and expanded over time. Depending on the desired use case, enterprises can ensure the form a drone is shipped in does not represent its final iteration or future worth.

Snaps also feature enhanced security and greater flexibility for developers. Drones can receive automatic updates in the field, which will become vital as enterprises begin to deploy large-scale fleets. Snaps also support roll back functionality in the event of failure, meaning developers can innovate with more confidence across this growing field.

Designed for developers, having the Manifold 2 pre-installed with Ubuntu means support for Linux, CUDA, OpenCV, and ROS. It is ideal for the research and development of professional applications, and can access flight data and perform intelligent control and data analysis. It can be easily mounted to the expansion bay of DJI’s Matrice 100, Matrice 200 Series V2 and Matrice 600, and is also compatible with the A3 and N3 flight controller.

DJI now counts at least 230 people rescued with the help of a drone since 2013. As well as being used by emergency services, drones are helping to protect lives by eradicating the dangerous elements of certain occupations. Apellix is one such example; supplying drones which run on Ubuntu to alleviate the need for humans to be at the forefront of work in elevated, hazardous environments, such as aircraft carriers and oil rigs.

Utilising the freedom brought by snaps, it is exciting to see how developers drive the drone industry forward. Software is allowing the industrial world to move from analog to digital, and mission-critical industries will continue to evolve based on its capabilities.

The post Customisable for the enterprise: the next-generation of drones appeared first on Ubuntu Blog.

13 June, 2019 07:00AM

Stephen Michael Kellat: A Modest Ham-Related Proposal

Over the past couple months I have been trying to participate in the Monday morning net run by the SDF Amateur Radio Club from SDF.org. It has been pretty hard for me to catch up with any of the local amateur radio clubs. There is no local club associated with the American Radio Relay League in Ashtabula County but it must be remembered that land-wise Ashtabula County is fairly large in terms of land area.

For reference, the state of Rhode Island and Providence Plantations has a dry land area of 1,033.81 square miles. Ashtabula County has a dry land area of 702 square miles. Ashtabula County is 68% of the size of the state of Rhode Island in terms of land area, even though population-wise Ashtabula County has 9.23% of Rhode Island's equivalent population. Did I hear mooing off in the distance somewhere? For British readers, it is safe to say not only that I'm in a fairly isolated area but that it may resemble Ambridge a bit too much.

Now, the beautiful part about the SDF Amateur Radio Club net is that it takes place via the venerable EchoLink system. The package known as qtel allows for access to the repeater-linking network from your Ubuntu desktop. Unlike normal times, the Wikipedia page about EchoLink actually provides a fairly nice write-up for the non-specialist.

Now, there is a relatively old article on the American Radio Relay League's website about Ubuntu. If you look at the Ubuntu Wiki, there is talk about Ubuntu Hams having their own net but the last time that page was edited was 2012. While there is talk of an IRC channel, a quick look at irclogs.ubuntu.com shows that it does not look like the log bot has been in the channel this month. E-mail to the Launchpad Team's mailing list hosted on Launchpad itself is a bit sporadic.

I have been a bit MIA myself due to work pressures. That does not mean I am unwilling to act as the Net Control Station if there is a group willing to hold a net on EchoLink perhaps. It would be a good way to get hams from across the Ubuntu realms to have some fellowship with each other.

For now, I am going to make a modest proposal. If anybody is interested in such an Ubuntu net could you please check in on the SDF ARC net on June 17 at 0000 UTC? To hear what the most recent net sounded like, you can listen to the recorded archive of that net's audio in MP3 format. Just check in on June 17th at 0000 UTC and please stick around until after the net ends. We can talk about possibilities after the SDF net ends. All you need to do is be registered to use EchoLink and have appropriate software to connect to the appropriate conference.

I will cause notice of this blog post to be made to the Launchpad Team's mailing list.

Creative Commons License
A Modest Ham-Related Proposal by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

13 June, 2019 02:35AM

June 12, 2019

Canonical Design Team: Ubuntu Server development summary – 11 June 2019

Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Bryce Harrington

Keeping with the theme of “bringing them back into the fold”, we are proud to announce that Bryce Harrington has rejoined Canonical on the Ubuntu Server team. In his former tenure at Canonical, he maintained the X.org stack for Ubuntu and helped bridge us from the old ‘edit your own xorg.conf’ days, swatted GPU hang bugs on Intel, and contributed to Launchpad development.

Based in Oregon, Bryce has around 20 years of open source development experience. He created the Inkscape project, and he is currently a board member of the X.org Foundation. He joins us most recently from Samsung Research America, where he was a Senior Open Source Developer and the release manager for the Cairo and Wayland projects. Bryce will be helping us tackle the development and maintenance of Ubuntu Server packages. We are thrilled to have his additional expertise to help spread the wealth of software and packaging improvements that help make Ubuntu great. When he’s not building software, he is building things in his woodworking shop.

Welcome (back) Bryce (bryce on Freenode)!


  • Allow identification of OpenStack by Asset Tag [Mark T. Voelker] (LP: #1669875)
  • Fix spelling error making ‘an Ubuntu’ consistent. [Brian Murray]
  • run-container: centos: comment out the repo mirrorlist [Paride Legovini]
  • netplan: update netplan key mappings for gratuitous-arp [Ryan Harper] (LP: #1827238)


  • vmtest: dont raise SkipTest in class definition [Ryan Harper]
  • vmtests: determine block name via dname when verifying volume groups [Ryan Harper]
  • vmtest: add Centos66/Centos70 FromBionic release and re-add tests [Ryan Harper]
  • block-discover: add cli/API for exporting existing storage to config [Ryan Harper]
  • vmtest: refactor test_network code for Eoan [Ryan Harper]
  • curthooks: disable daemons while reconfiguring mdadm [Michael Hudson-Doyle] (LP: #1829325)
  • mdadm: fix install to existing raid [Michael Hudson-Doyle] (LP: #1830157)

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.
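Enabling the proposed pocket generally amounts to adding a `<release>-proposed` line to your APT sources and refreshing the package lists. As a rough, safe-to-run sketch (it writes a local file rather than touching /etc/apt, and assumes the standard archive.ubuntu.com layout):

```shell
# Work out the running release codename; fall back to a placeholder
# when lsb_release is unavailable (e.g. on a non-Ubuntu machine).
release=$(lsb_release -cs 2>/dev/null || echo disco)

# On a real system this line would go in /etc/apt/sources.list.d/,
# followed by `sudo apt update`; we write a local file for illustration.
echo "deb http://archive.ubuntu.com/ubuntu ${release}-proposed restricted main universe multiverse" > proposed.list
cat proposed.list
```

After updating, individual packages can be pulled from proposed with `apt install -t <release>-proposed <package>` so the rest of the system stays on the release pocket.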

Total: 10

Uploads Released to the Supported Releases

Total: 26

Uploads to the Development Release

Total: 9

The post Ubuntu Server development summary – 11 June 2019 appeared first on Ubuntu Blog.

12 June, 2019 04:43PM

hackergotchi for Cumulus Linux

Cumulus Linux

Cumulus content roundup: May

May is well in the books and summer seems to be in full swing with recent heatwaves across the country. Since we know life can get pretty busy and you may have missed some of May’s great content, we’ve rounded up some of our favorite podcasts, blog posts, and articles for you here. So settle in, hopefully, stay cool, and get ready for all things open networking!

From Cumulus Networks:

Minipack Highlight Video from OCP Summit: Listen to Brian O’Sullivan and Michael Lane, VP of Business Development at Edgecore Networks, discuss the recently launched Minipack, an open, modular switch.

Kernel of Truth season 2 episode 7: Certifications: Listen as we discuss the value of certifications, if any, what works for certifications and what doesn’t, who should be taking certifications and more!

Installing Cumulus packages on air-gapped equipment: Check out this excerpt to help you get additional packages into an air-gapped environment for the install where you don’t have a repo or mirror available to pull from.

ngrok on Cumulus Linux: If you have a good idea of what ngrok is and what it does, here are step-by-step instructions for turning up ngrok ssh services on Cumulus Linux.

The future of HCI: We sit down with Naveen Chhabra, a senior industry analyst at Forrester, to discuss and hear his opinion on the future of HCI.

Kernel of Truth season 2 episode 8: Network of pods: Kernel of Truth host Brian O’Sullivan and “resident jam band man” JR Rivers have a great conversation about what pod architecture is and how it is relevant to you, so be sure to queue it up and listen!

News from the web:

Cumulus Networks: The in-kernel webscale-ready network catalyst: Read how Cumulus Networks offers an open source Linux package that natively includes advanced L2/L3 networking features here by author Jason English via Intellyx.

Infrastructure as Code: Free Your Intellect: When it comes to “Infrastructure as Code,” author Michael Stump advocates for you to free your intellect. Read why here via Gestalt IT.

The status of white box networking in the enterprise: Author Lee Doyle from TechTarget lays out the five questions you should ask yourself to determine when white box implementation is a smart move.

12 June, 2019 03:24PM by Katie Weaver

hackergotchi for Purism PureOS

Purism PureOS

“See Your Junk” – Behind the scenes

At Purism, we aim to promote privacy and freedom through the use of free software (and we see it as ethical software). When we work, and in order to produce our content–such as what you see in this page–we use free software, too. And so, with a small budget and some basic audio and video gear, along with a few Librem laptops (running free software only, of course), we have made this video the ethical way, using ethical tools from beginning to end.


Pre-production took us the longest, and kept us working for quite some time–we ended up taking over a month to prepare everything. Todd, Purism’s founder and CEO, handed me a really funny script that he had written himself; I read it and started organizing the shoot with the help of Jenny Lavery, who did an amazing job of finding the perfect actors, location and props.

After planning every shot I started drawing a storyboard, using GIMP, my Librem 13, and a simple graphics tablet.


Once everything was planned for, I packed my suitcase and traveled to Austin, Texas, with Thierry Cazorla, a fellow French countryman. Thierry is the director of photography with whom I have been working for years.

We were so lucky with the weather; when shooting outside, the weather is always a big question mark, and a nightmare when it comes to delays. I had planned on spending two full days shooting the entire video, but everything went so well–and everybody was so professional–that we ended up managing to shoot it in just one day, which was amazing!

Like I said before, we chose to use free software only. That choice led us to shoot with a camera that is compatible with Magic Lantern, a free software add-on that… well, adds features to the camera we used, and that also allowed us to shoot in RAW format–i.e., straight from the sensor and generating an uncompressed file, resulting in the maximum possible raw image quality.

The RAW format is also very useful because it allowed me to process the look of our initial footage as soon as I finished recording; I used an application called MLV App to do just that, because it lets me analyze my raw footage and apply a flat look to it. This is a personal preference regarding color grading technique and results. Everything flowed so well that I managed to put together a first rough edit the very day we finished shooting.

Post production

Shooting in the USA was fun, but once we finished I had to travel back to France, to my home studio, where I had all the material I needed to start editing.

Video editing

I used Kdenlive to edit the video; it is a very complete, very professional, free software, non-linear video editor.

I now had the perfect opportunity to test the new Librem 15 with a 4k screen… having this video to finish meant I was simultaneously able to test hardware and software in a real project, and help developers improve product experience. And so I made a first cut of the video, and started to edit the audio.

Audio editing

Audio quality plays a big part in the overall quality of the final, resulting video. I would even go a step further and say the quality of the audio is perhaps more important than that of the image, when making a professional-grade video (but feel free to disagree).

In order to professionally edit the audio, I had to export my timeline from the video editor to a proper audio editor—Ardour, in this case. This import/export feature exists neither in Kdenlive nor in Ardour, but I needed it and had to find a solution—and this is one of the great advantages of using free software: that it is public and belongs to its users, making creating a missing feature (and giving it back to the community) something very doable. And that’s exactly what I did: I created a python script that converts the timeline from my video editor into a timeline for my audio editor. If you want it, you can get it in our Gitlab repository.
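The actual converter lives in the Gitlab repository linked above; purely as an illustration of the idea (not the real script), a Kdenlive project is MLT XML, so the first step of such a conversion is pulling the clip entries out of the project file. The element and attribute names below follow the MLT format, and the sample document is invented:

```python
# Hedged sketch of the first half of a Kdenlive-to-audio-editor export:
# read an MLT XML project and list each playlist entry as (clip, in, out).
# This is NOT Purism's actual script, only an illustration.
import xml.etree.ElementTree as ET

SAMPLE_MLT = """
<mlt>
  <producer id="clip1"><property name="resource">shot01.mov</property></producer>
  <playlist id="audio_track">
    <entry producer="clip1" in="0" out="120"/>
  </playlist>
</mlt>
"""

def clip_entries(mlt_xml):
    """Return (resource, in_frame, out_frame) tuples for every playlist entry."""
    root = ET.fromstring(mlt_xml)
    # Map producer ids to the media files they reference.
    resources = {
        p.get("id"): p.findtext("property[@name='resource']")
        for p in root.iter("producer")
    }
    return [
        (resources.get(e.get("producer")), int(e.get("in")), int(e.get("out")))
        for e in root.iter("entry")
    ]

print(clip_entries(SAMPLE_MLT))  # -> [('shot01.mov', 0, 120)]
```

The second half, writing those ranges back out as an Ardour session, would follow the same pattern in reverse: building Ardour's own XML with matching region positions.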

This allowed me to perfectly edit my audio, using very professional free software tools; it guaranteed a smooth, even sound, where the cuts between different shots are impossible to hear. Afterwards I added some extra ambient noise to ensure continuity and to give a bit more color–which leads us to the subject of our next chapter.

Color grading

The color of each individual take was then worked on in order to guarantee its consistency over the whole video. I later applied a global color grading (meaning, to the whole video sequence) to give it a consistent style and tone. I like to work over very flat footage that is low in saturation and contrast. I then add some contrast with a bleach bypass effect, and do the final tweaks on the curves and levels filters. The graph monitors in Kdenlive let me adjust colors and levels with a high degree of precision, so I can be sure that colors are just right, that my eyes are not tricking me.

Motion design

I added some text effects at the end of the video: mostly, over the Librem One logo.

I made all my text animations in Blender, an amazing 3D free software application with very powerful compositing and animation features.

You might have noticed a subtle light effect in the logo animation (just before the rainbow appears)–it’s actually a handmade, traditional animation made with OpenToonz, a free software app that was also used for some of the biggest productions of the Japanese anime industry.

That’s it, and thank you for your time–it was fun 🙂

The participants:

Andre Martin as The Husband

Will Moleon as The Enlightened One

Karina Dominguez as The Wife

Christine Hoang as The Neighbor

Tom Costello Jr. as The Gardener

Sanjay Rao as The Business Man

Giselle Marie Munoz as The Business Woman

Bryan Lunduke as The Voice-Over

The crew:

Jenny Lavery co-producer, casting director and clapperboard

Camille Westmoland makeup artist

Blake Addyson sound engineer

Thierry Cazorla director of photography

Todd Weaver producer, screenwriter (and co-directed the actors, too)

The post “See Your Junk” – Behind the scenes appeared first on Purism.

12 June, 2019 02:28PM by François Téchené

hackergotchi for Qlustar


Qlustar BoF at ISC 2019

We are organizing a Birds of a Feather session at this year's ISC (Wednesday, June 19th, 2:45pm - 3:45pm, Room Kontrast). There will be two presentations: one by Qlustar founder Roland Fehrenbacher and a second by Ansgar Esztermann from the Max Planck Institute in Göttingen. The goal of the BoF is to bring together developers, HPC cluster admins and hardware vendors to identify the most pressing issues in further enhancing Qlustar's suitability as an open-source, full-stack HPC management solution. So if you're in Frankfurt next week, take the chance to join us.

Please submit ideas for Qlustar enhancements here.

12 June, 2019 08:00AM by root

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Get to know these 5 Ubuntu community resources

Ubuntu community

As open-source software, Ubuntu is designed to serve a community of users and innovators worldwide, ranging from enterprise IT pros to small-business users to hobbyists.

Ubuntu users have the opportunity to share experiences and contribute to the improvement of this platform and community. To encourage our wonderful community to continue learning, sharing and shaping Ubuntu, here are five helpful resources:

Ubuntu Tutorials

These tutorials provide step-by-step guides on using Ubuntu for different projects and tasks across a wide range of Linux tools and technologies.
Many of these tutorials are contributed and suggested by users, so this site also provides guidance on creating and requesting a tutorial on a topic you believe needs to be covered.

Ubuntu Community Hub

This community site for user discourse is relatively new and intended for people working at all levels of the stack on Ubuntu. The site is evolving, but currently includes discussion forums, announcements, QA and testing requests, feedback to the Ubuntu Community Council and more.

Ubuntu Community Help Wiki page

From installation to documentation of Ubuntu flavours such as Lubuntu and Kubuntu, this wiki page offers instructions and self-help to users comfortable doing it themselves. Learn some tips, tricks and hacks, and find links to Ubuntu official documentation as well as additional help resources.

Ubuntu Server Edition FAQ page

Its ease of use, ability to customise and capacity to run on a wide range of hardware make Ubuntu the most popular server choice of the cloud age. This FAQ page provides answers on technical questions, maintenance, support and more, to address any Ubuntu Server queries.

Ubuntu Documentation

If you are a user who relies extensively on Ubuntu documentation, perhaps you can lend a hand to the Documentation Team to help improve it by:

  • Submitting a bug: Sending in a bug report when you find mistakes.
  • Fixing a bug: Proposing a fix for an existing bug.
  • Creating new material: Adding to an existing topic or writing on a new topic.

These are just a few of the available resources and recommended suggestions for getting involved in the Ubuntu community. For more, visit ubuntu.com/community.

The post Get to know these 5 Ubuntu community resources appeared first on Ubuntu Blog.

12 June, 2019 07:52AM

June 11, 2019

Kubuntu General News: Plasma 5.16 for Disco 19.04 available in Backports PPA

We are pleased to announce that Plasma 5.16 is now available in our backports PPA for Disco 19.04.

The release announcement detailing the new features and improvements in Plasma 5.16 can be found here.

Released along with this new version of Plasma is an update to KDE Frameworks 5.58. (5.59 will soon be in testing for Eoan 19.10 and may follow in the next few weeks.)

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports
or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


Please note that more bugfix releases are scheduled by KDE for Plasma 5.16, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.15 as included in the original 19.04 Disco release.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

11 June, 2019 03:24PM

June 10, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 582

Welcome to the Ubuntu Weekly Newsletter, Issue 582 for the week of June 2 – 8, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

10 June, 2019 08:53PM

Podcast Ubuntu Portugal: Ep. 57 – O bom, o mau e o lambão

In this episode we find out the latest destinations of Diogo Constantino's travels and where he has been spending money, but also what Tiago has been up to that kept him away from the last few episodes. You know the drill: listen, subscribe and share!

  • https://sintra2019.ubucon.org/
  • https://videos.ubuntu-paris.org/
  • https://slimbook.es/zero-smart-thin-client-linux-windows-fanless
  • https://slimbook.es/pedidos/mandos/mando-gaming-inal%C3%A1mbrico-nox-comprar
  • https://panopticlick.eff.org/
  • https://www.mozilla.org/en-US/firefox/67.0/releasenotes/
  • https://blog.mozilla.org/addons/2019/03/26/extensions-in-firefox-67/
  • https://discourse.ubuntu.com/t/mir-1-2-0-release/11034
  • https://www.linuxondex.com/
  • https://github.com/tcarrondo/aws-mfa-terraform
  • https://devblogs.microsoft.com/commandline/announcing-wsl-2/


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering); contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licences

The cover image is from VisualHunt, licensed under CC0 1.0.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

10 June, 2019 02:47PM

Canonical Design Team: Design and Web team summary – 10 June 2019

This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.


Web is the squad that develops and maintains most of the brochure websites across Canonical.

Integrating the blog into www.ubuntu.com

We have been working on integrating the blog into www.ubuntu.com, building new templates and an improved blog module that will serve pages more quickly.

Takeovers and engage pages

We built and launched a few new homepage takeovers and landing pages, including:

– Small Robot Company case study

– 451 Research webinar takeover and engage page

– Whitepaper takeover and engage page for Getting started with AI

– Northstar robotics case study engage page  

– Whitepaper about Active directory

Verifying checksum on download thank you pages

We have added the steps to verify your Ubuntu download on the website. To see these steps, download Ubuntu and check the thank-you page.
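In outline, the verification amounts to checking the downloaded image against its SHA256SUMS list. A minimal sketch using a stand-in file so it is safe to run anywhere (a real download would fetch SHA256SUMS from the release server and verify its GPG signature first):

```shell
# Stand-in for a downloaded image, so this sketch runs without a real ISO.
printf 'example image data' > ubuntu-19.04-desktop-amd64.iso

# Normally SHA256SUMS comes from the release server, verified beforehand
# with `gpg --verify SHA256SUMS.gpg SHA256SUMS`; here we generate it.
sha256sum ubuntu-19.04-desktop-amd64.iso > SHA256SUMS

# The actual check: prints "<file>: OK" when the checksum matches.
sha256sum -c SHA256SUMS
```

A mismatch makes `sha256sum -c` report FAILED and exit non-zero, which is the signal to re-download the image.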

Mir Server homepage refresh

A new homepage hero section was designed and built for www.mir-server.io.

The goal was to update that section with an image related to digital signage/kiosk and also to give a more branded look and feel by using our Canonical brand colours and Suru folds.


Brand squad champion a consistent look and feel across all media from web to social to print and logos.

Usage guide to using the company slide deck

The team have been working on storyboarding a video to guide people to use the new company slide deck correctly and highlight best practices for creating great slides.

Illustration and UI work

We have been working hard on breaking down the illustrations we have into multiple levels. We have identified three levels of illustration we use, and are in the process of gathering them from across all our websites and reproducing them in a consistent style.

Alongside this we have started to look at the UI icons we use in all our products with the intention of creating a single master set that will be used across all products to create a consistent user experience.

Marketing support

We created multiple documents for the Marketing team, including two whitepapers and three case studies for the Small Robot Company, Northstar and Yahoo Japan.

We also created an animated screen for the stand back wall at Kubecon in Barcelona.


The MAAS squad develop the UI for the maas project.

Renamed Pod to KVM in the MAAS UI

MAAS has been using the label “pod” for any KVM (libvirt) or RSD host – a label that is not industry standard and can be confused with the use of pods in Kubernetes. In order to avoid this, we went through the MAAS app and renamed all instances of pod to KVM and separated the interface for KVM and RSD hosts.

Replaced Karma tests with Jest

The development team working on MAAS have been focusing on modernising areas of the application. This led to moving from the Karma test framework to Jest.

Absolute import paths to modules

Another area the development team would like to tackle is migrating from AngularJS to React. To decouple us from Angular we moved to loading modules from a relative path.
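For reference, a common way to make such imports resolve under Jest is to add the source root to the resolver's module directories. The following is purely an illustration, not MAAS's actual configuration, and the `src/` directory name is an assumption:

```javascript
// jest.config.js - illustrative sketch only; directory names are assumptions.
module.exports = {
  // Let `import ... from "app/factories"` resolve from src/ as well as
  // node_modules, so test files avoid long chains of ../../.. paths.
  moduleDirectories: ["node_modules", "src"],
};
```

The bundler side would mirror this with an equivalent entry in its own resolver configuration (e.g. webpack's `resolve.modules`), so application code and tests resolve imports the same way.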

KVM/RSD: In-table graphs for CPU, RAM and storage

MAAS CPU, RAM and Storage mini charts
MAAS usage tooltip
MAAS storage tooltip


The JAAS squad develops the UI for the JAAS store and Juju GUI  projects.

Design update for JAAS.ai

We have worked on a design update for jaas.ai, which includes new colours and page backgrounds. The aim is to bring the website in line with recent updates carried out by the brand team.

Top of the new JAAS homepage

Design refresh of the top of the store page

We have also updated the design of the top section of the Store page, to make it clearer and more attractive, and again including new brand assets.

Top of jaas store page

UX review of the CLI between snap and juju

Our team has also carried out some research in the first step to more closely aligning the commands used in the CLI for Juju and Snaps. This will help to make the experience of using our products more consistent.


The Vanilla squad design and maintain the design system and Vanilla framework library. They ensure a consistent style throughout web assets.  

Vanilla 2.0.0 release

Since our last major release, v1.8.0 back in July last year, we’ve been working hard to bring you new features, improve the framework and make it the most stable version of Vanilla yet. These changes have been released in v2.0.0.

Over the past 2 weeks, we’ve been running QA tests across our marketing sites and web applications using our pre-release 2.0.0-alpha version. During testing, we’ve been filing and fixing bugs against this version, and have pushed up to a pre-release 2.0.0-beta version.

Vanilla framework v2.0.0 banner

We plan to launch Vanilla 2.0.0 today, once we have finalised our release notes and completed our upgrade document, which will help guide users when upgrading their sites.

Look out for our press release posts on Vanilla 2.0.0 and our Upgrade document to go along with it.


The Snapcraft team work closely with the snap store team to develop and maintain the snap store website.

Install any snap on any platform

We’re pleased to announce we’ll be releasing distribution install pages for all Snaps. They’re one-stop shops for any combination of Snap and supported distro. Simply visit https://snapcraft.io/install/spotify/debian or, say, https://snapcraft.io/install/vlc/arch. The combinations are endless, and not only do the pages give you that comfy at-home feeling when it comes to branding, they’re also pretty useful. If you’ve never installed Snaps before, we provide some easy step-by-step instructions to get the snap running, and suggest some other Snaps you might like.

Snap how to install VSC

The post Design and Web team summary – 10 June 2019 appeared first on Ubuntu Blog.

10 June, 2019 02:30PM

LoCo Ubuntu PT: Ubucon Portugal 2019 – Recap

On 6 April, the Ubucon Portugal 2019 event took place at ISCTE. The event was organised by the Ubuntu Portugal Community, ISCTE – Instituto Universitário de Lisboa, ISTAR-IUL – Information Sciences and Technologies and Architecture Research Center, and the ISCTE-IUL ACM Student Chapter. Its aim was to promote open-source software and the open-source community under the umbrella of the Portuguese Ubuntu community, but it ended up becoming a much broader event than that, one that can be summed up by the following keywords:

  • Awareness: Over the course of the event, participants were repeatedly made aware of our digital rights: what to do to keep them, how to fight for them and, above all, how to understand them. Topics such as the infamous Articles 11 and 13 were analysed, along with the changes that will come when they are implemented by the European Union and adopted by each member state.
    Participants were also made aware of the kinds of software licensing that exist and how work developed in the open can be protected, as well as all the processes needed to change our legislation to take these aspects into account.
  • Community: As this was an event aimed at the community, it featured presentations of projects driven by members of our community.
    It was possible to follow the evolution of the KDE desktop environment from the very first version to the much-acclaimed KDE Plasma and its ecosystem, KDE having grown from a mere desktop environment into a community-oriented ecosystem. The old claim that “Linux is no good for gaming” was also debunked, with current figures for the games supported on Linux either natively or through third-party software; this presentation also showed the efforts Steam is making to turn Linux into a gaming platform and to dispel once and for all the idea that on Linux you cannot play games, or can only play simpler ones.
  • Future: It is not possible to project the future without first understanding the past, so the future plans and missions of the Portuguese Ubuntu community were presented. Participants also learned about ISCTE's master's programme in the open-source area, perhaps one of the most outside-the-box master's degrees currently available in the national education landscape, and one that gives open-source supporters a special push to follow a dream and train in this field. Finally, one of the greatest industrial revolutions of recent times, 3D printing, was discussed: case studies and applications, where this technology will take us, and the impact 3D printing may have on industry and the environment, drawing on products that are degradable and environmentally friendly.

Para finalizar, foi um evento cheio de bom ambiente, muita curiosidade pelo assunto do opensource, em que nunca se pôs de parte, uma das coisas mais importantes, o espírito de comunidade e companheirismo por esta causa e por este sonho de ter um mundo tecnológico mais aberto e mais transparente. Se vale a pena participar ?
Claro, mas não só como alguém que vai assistir ao evento, mais sim como alguém que organiza eventos, apoia a comunidade e participa na comunidade.

Nota: Como a comunidade ambiciona sempre mais e mais, assim que acabou a Ubucon Portugal, a comunidade começou logo a prepar a Ubucon Europe, como tal este rescaldo veio de forma mais tardia, no entanto, um evento deste género, não poderia ser esquecido.
Obrigado a todos os que tiveram presentes e que para o ano estejamos todos juntos novamente para a realização da Ubucon Portugal 2020.

10 June, 2019 09:34AM

June 08, 2019

ARMBIAN


Orange Pi 3

PCIe port is not supported: Allwinner H6 has a quirky PCIe controller that doesn’t map the PCIe address space properly to the CPU, so accesses to the PCIe config space, I/O space or memory space need to be wrapped. As Linux doesn’t wrap PCIe memory space accesses, it’s not possible to do a proper PCIe controller …

08 June, 2019 05:00PM by Igor Pečovnik


Ubuntu developers

Colin King: Working towards stress-ng 0.10.00

Over the past 9+ months I've been cleaning up stress-ng in preparation for a V0.10.00 release. Stress-ng is a portable Linux/UNIX Swiss army knife of micro-benchmarking kernel stress tests.

The Ubuntu kernel team uses stress-ng for kernel regression testing in several ways:
  • Checking that the kernel does not crash when being stressed tested
  • Performance (bogo-op throughput) regression checks
  • Power consumption regression checks
  • Core CPU Thermal regression checks
The wide range of micro benchmarks in stress-ng allow us to keep track of a range of metrics so that we can catch regressions.

I've tried to focus on several aspects of stress-ng over the last development cycle:
  • Improve per-stressor modularization. A lot of code has been moved from the core of stress-ng back into each stress test.
  • Clean up a lot of corner case bugs found when we've been testing stress-ng in production.  We exercise stress-ng on a lot of hardware and in various cloud instances, so we find occasional bugs in stress-ng.
  • Improve usability, for example, adding bash command completion.
  • Improve portability (various kernels, compilers and C libraries). It really builds and runs on a *lot* of Linux/UNIX/POSIX systems.
  • Improve kernel test coverage.  Try to exercise more kernel core functionality and reach parts other tests don't yet reach.
Over the past several days I've been running various versions of stress-ng on a gcov enabled 5.0 kernel to measure kernel test coverage with stress-ng.  As shown below, the tool has been slowly gaining more core kernel coverage over time:

With the use of gcov + lcov, I can observe where stress-ng is not currently exercising the kernel and this allows me to devise stress tests to touch these un-exercised parts.  The tool has a history of tripping kernel bugs, so I'm quite pleased it has helped us to find corners of the kernel that needed improving.

This week I released V0.09.59 of stress-ng. Apart from the usual set of clean-up changes and bug fixes, this new release incorporates bash command-line completion to make the tool easier to use. Once the 5.2 Linux kernel has been released and I'm satisfied that stress-ng covers the new 5.2 features, I will probably release V0.10.00. This will be a major release milestone, now that stress-ng has realized most of my original design goals.
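As an illustration of the kind of regression run described above, the sketch below drives a short CPU stress run and prints per-stressor bogo-ops metrics. The exact options the Ubuntu kernel team uses vary per test plan, so treat this as an illustrative invocation, not the team's actual harness; it is guarded so it degrades gracefully where stress-ng is not installed.

```shell
# Illustrative only: run 4 CPU stressor workers for 10 seconds and print
# the bogo-ops metrics used for throughput regression tracking.
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --cpu 4 --timeout 10s --metrics-brief
    result="ran"
else
    echo "stress-ng not installed; skipping the stress run"
    result="skipped"
fi
```

Comparing the bogo-ops/s figures from runs on successive kernels is what surfaces throughput regressions.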

08 June, 2019 04:26PM by Colin Ian King (noreply@blogger.com)

June 07, 2019

Benjamin Mako Hill: Sinonym

I’d like to use “sinonym” as another word for an immoral act. Or perhaps to refer to the Chinese name for something. Sadly, I think it might just be another word for another word.

07 June, 2019 06:46PM


Emmabuntüs Debian Edition

Emmabuntus as a software showcase

We just released on PeerTube a nice video with English subtitles made by our friend Amaury of « Blabla Linux ».

In this video Amaury explains the philosophy of the Emmabuntüs project, and then goes – with much enthusiasm – into the details of the Emma-DE-3 « Buster » alpha release he had just received and tested.

To select the English subtitle language, please click on the gearwheel at the bottom right of the window and pick English.

PeerTube is a decentralized French video-hosting platform built on free software and a peer-to-peer distribution model. It works as a federation of instances hosted by several independent entities, similar to the Diaspora* or Mastodon architecture.

The subtitles were edited with the free Subtitle Editor Program.

07 June, 2019 05:07PM by yves


Ubuntu developers

Canonical Design Team: Small Robot Company sows the seeds for autonomous and more profitable farming

In Europe, the cost of running a cereal farm – cultivating wheat, rice, and other grains – has risen by 85% in the last 25 years, yet crop yields and revenues have stagnated. And while farms struggle to remain profitable, it won’t be long before those static yields become insufficient to support growing populations.

Reliance on tractors is at the heart of both of these problems. Not only are tractors immensely costly to buy and maintain, they are also inefficient.

The Small Robot Company (SRC), a UK based agri-tech start up, is working to overturn this paradigm by replacing tractors with lightweight robots. Developed using Ubuntu, these robots are greener and cheaper to run than tractors, and generate far higher yields thanks to AI-driven precision.

In this innovative deployment of robotics and AI for commercial farming, you’ll learn:

  • How the agri-tech start up is using robotics to grow crops and reduce waste, including its current partnership with a leading UK supermarket chain.
  • The emergence of the Farming as a Service (FaaS) business model, which eliminates the need to invest upfront in expensive machinery
  • How the use of Ubuntu in the cloud and on the hardware powers SRC’s three robots – Tom, Dick and Harry – plus Wilma, its AI system, to accelerate development and provide a stable platform.
Get the case study: https://www.ubuntu.com/engage/small-robot-company

The post Small Robot Company sows the seeds for autonomous and more profitable farming appeared first on Ubuntu Blog.

07 June, 2019 04:06PM

Tails


Tails report for May, 2019


The following changes were introduced in Tails 3.14:

  • Enable all available mitigations for the MDS (Microarchitectural Data Sampling) attacks and disable SMT (simultaneous multithreading) on all vulnerable processors to fix the RIDL, Fallout and ZombieLoad security vulnerabilities.

  • Remove the following applications:

    • Desktop applications

      • Gobby
      • Pitivi
      • Traverso
    • Command-line tools

      • hopenpgp-tools
      • keyringer
      • monkeysign
      • monkeysphere
      • msva-perl
      • paperkey
      • pwgen
      • ssss
      • pdf-redact-tools

    You can install these applications again using the Additional Software feature.

    Thanks to the removal of these less popular applications in 3.14 and the removal of some language packs in 3.13.2, Tails 3.14 is 39 MB smaller than 3.13.

  • Add back the OpenPGP Applet and Pidgin notification icons to the top navigation bar.

  • Fix NoScript being deactivated when restarting Tor Browser.

    NoScript is removed from the set of icons displayed by default in Tor Browser. This is how Tor Browser looks in Tails 3.14.

    To see the list of add-ons that are enabled, choose  ▸ Add-ons.


  • We spent lots of time dealing with the Firefox add-ons fiasco, also known as "armagadd-on 2.0". #16694

  • Finally, most of our patches improving the security of Thunderbird's email configuration wizard were merged and will be part of Thunderbird 68, that is: they'll be shipped not just in Tails, but to all Thunderbird users on any operating system! #6156

  • We sent out a new call to test a fix for #16389: Some USB sticks become unbootable in legacy BIOS mode after first boot. We received a few conclusive test results.

  • We tested Onionshare 2.0 in Tails and decided to ship it in 4.0. #14649

Documentation and website

Hot topics on our help desk

During the first two weeks, before the release of Tails 3.14, we had users reporting:

  1. #16608: Disable the topIcons GNOME Shell extension
  2. #16447: Regression on some Intel GPU (Braswell, Kaby Lake)

Then, after the release, the changes in Tor Browser made many users report to us:

  1. #16746: Document the disappearance of the NoScript and HTTPS Everywhere icons from Tor Browser's toolbar
  2. #16762: Document why NoScript's last update is January 2000
  3. #16727: Update doc to Tor Browser 8.5


  • We finished benchmarking the prototype node for our next-generation CI hardware. (#15501)



Past events

  • Jesús Marín García presented 2 workshops about Tails in May in the Comunidad Valenciana, Spain:

    • On May 4 at Hack and Beers in Castellón de la Plana
    • On May 11 at VLCTechFest in Valencia

Upcoming events

  • Ulrike and sajolida will be at the Tor meeting on July 11-14 in Stockholm, Sweden.

  • intrigeri, lamby and nodens will attend DebConf19

On-going discussions


All the website

  • de: 44% (2533) strings translated, 9% strings fuzzy, 42% words translated
  • es: 50% (2867) strings translated, 6% strings fuzzy, 43% words translated
  • fa: 31% (1778) strings translated, 11% strings fuzzy, 33% words translated
  • fr: 91% (5228) strings translated, 2% strings fuzzy, 92% words translated
  • it: 32% (1872) strings translated, 7% strings fuzzy, 29% words translated
  • pt: 25% (1435) strings translated, 9% strings fuzzy, 21% words translated

Total original words: 60511

Core pages of the website

  • de: 68% (1225) strings translated, 13% strings fuzzy, 71% words translated
  • es: 78% (1387) strings translated, 11% strings fuzzy, 79% words translated
  • fa: 33% (605) strings translated, 13% strings fuzzy, 32% words translated
  • fr: 97% (1735) strings translated, 2% strings fuzzy, 97% words translated
  • it: 61% (1093) strings translated, 17% strings fuzzy, 63% words translated
  • pt: 44% (789) strings translated, 14% strings fuzzy, 47% words translated


  • Tails has been started more than 791 323 times this month. This makes 25 527 boots a day on average.
  • 9 697 downloads of the OpenPGP signature of a Tails USB image or ISO from our website.

How do we know this?

07 June, 2019 02:07PM


Ubuntu developers

Hajime MIZUNO: Original USB flash drive for Ubunchu?

What is Ubunchu?

"Ubunchu!" is a Japanese manga series featuring Ubuntu Linux.
Three school students in a system-admin club are getting into Ubuntu!
(see http://seotch.wordpress.com/ubunchu/)

This manga is serialized in "Ubuntu Magazine Japan", the first magazine in Japan specializing in Ubuntu. It is widely sold at bookstores throughout Japan, and back numbers are licensed under CC-BY-NC.

Back to the subject

I ordered a USB flash drive with the original Ubunchu design. However, it was very expensive (about $100 USD for 8 GB :p).
Of course, Ubuntu is installed!

07 June, 2019 02:30AM

Hajime MIZUNO: Open Source "Small" Conference 2011 Aizu

Mitsuya Shibata (lp: cosmos-door), another Japanese LoCo team member, and I attended Open Source "Small" Conference 2011 Aizu, Japan.
Aizu is in Fukushima Prefecture, which suffered damage from the earthquake, tsunami, and nuclear disaster of 2011/03/11. But the open source community in Fukushima is very active!

Junya Terazono (寺薗淳也) is the organizer of OSSC Aizu. He was previously on the staff of the Hayabusa mission at JAXA. And, of course, he is an Ubuntu user!
Junya's presentation was about how he uses Ubuntu in his work at a university.

Aizu sightseeing

Mitsuya and I traveled to Aizu the previous day.

This small temple, called Sazae-Do, was built more than 200 years ago.
The inside is a one-way slope with a double-helix structure, resembling a turban shell; "Sazae" (さざえ) means turban shell.
You walk a single path from the entrance, over the summit, to the exit. It is a very unusual and interesting design.

07 June, 2019 02:30AM

June 06, 2019

Ubuntu Studio: Updates for June 2019

We hope that Ubuntu Studio 19.04’s release has been a welcome update for our users. As such, we are continuing our work on Ubuntu Studio with our next release scheduled for October 17, 2019, codenamed “Eoan Ermine”. Bug Fix for Ubuntu Studio Controls A bug identified in which the ALSA-Jack MIDI bridge was not surviving […]

06 June, 2019 09:08PM

Sebastien Bacher: Ubuntu keeping up with GNOME stable updates

Recently Michael blogged about epiphany being outdated in Ubuntu. While I don’t think that a ranting blog post was the best way to handle the problem (several of the Ubuntu Desktop members are on #gnome-hackers, for example; it would have been easy to talk to us there), he was right that the Ubuntu package for epiphany was outdated.

Ubuntu does provide updates, even for packages in the universe repository

One thing Michael wrote was

Because Epiphany is in your universe repository, rather than main, I understand that Canonical does not provide updates

That statement is not really accurate.

First, Ubuntu is a community project, not maintained only by Canonical. For example, most of the work done on the epiphany package in recent cycles was by Jeremy (which was one of the reasons the package got outdated; Jeremy had to step down from that work and no one picked it up).

Secondly, while it’s true that Canonical doesn’t provide official support for packages in universe, we do have engineers who are interested in some of those components and help maintain them.

Epiphany is now updated (deb & snap)

Going back to the initial problem, Michael was right and in this case Ubuntu didn’t keep up with available updates for epiphany, which has now been resolved

    • 3.28.5 is now available in Bionic (current LTS)
    • 3.32.1 is available in the devel series and in Disco (the current stable release)
    • The snap versions are a build of gnome-3-32 git for the stable channel and a build of master in the edge channel.

Snaps and GTK 3.24

Michael also wrote that

The snap is still using 3.30.4, because Epiphany 3.32 depends on GTK 3.24, and that is not available in snaps yet.

Again the reality is a bit more complex. Snaps don’t have depends like debs do, so by nature they don’t have problems like being blocked by missing depends. To limit duplication we do provide a gnome platform snap though and most of our GNOME snaps use it. That platform snap is built from our LTS archive which is on GTK 3.22 and our snaps are built on a similar infrastructure.

Ken and Marcus are working on resolving that problem by providing an updated gnome-sdk snap but that’s not available yet. Meanwhile they changed the snap to build gtk itself instead of using the platform one, which unblocked the updates, thanks Ken and Marcus!
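To make the two approaches concrete, here is a hypothetical, simplified snapcraft.yaml fragment (not the actual epiphany recipe) contrasting consuming GTK from a shared GNOME platform snap via the content interface with the interim workaround of building GTK inside the snap. The platform snap name and GTK branch shown are assumptions for illustration:

```yaml
# Option A (hypothetical): reuse a shared GNOME platform snap via the
# content interface; it tracked the LTS archive, hence GTK 3.22.
plugs:
  gnome-3-26-1604:
    interface: content
    target: $SNAP/gnome-platform
    default-provider: gnome-3-26-1604

# Option B (the interim workaround): build GTK 3.24 inside the snap as
# an ordinary part, at the cost of duplicating it on disk.
parts:
  gtk3:
    plugin: autotools
    source: https://gitlab.gnome.org/GNOME/gtk.git
    source-branch: gtk-3-24
```

Option B unblocks updates that need GTK 3.24 until an updated platform snap lands, in exchange for a larger snap and longer builds.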

Ubuntu does package GNOME updates

I saw a few other comments recently along the lines of “Ubuntu does not provide updates for its GNOME components in stable series” which I also wanted to address here.

We do provide stable updates for GNOME components! Ubuntu usually ships its new version with the .1 updates included from the start, and we do try to keep up with stable updates for point releases (especially for the LTS series).

Now, we have a small team and a lot to do, so it’s not unusual to see some delays in the process.
Also, while we have tools to track available updates, our reports currently cover only the active distro and not the stable series, which is a gap and sometimes leads us to miss some updates.
I’ve now hacked up a stable-series report, reviewed the current output, and as a result we will work on updating a few components that are currently outdated.

Oh, and as a note, we do tend to skip updates that are "translation updates only", because Launchpad allows us to get those without a stable package upload (the strings are shared across series, so uploading the new version/translations to the most recent series is enough to have them available for the next language-pack stable updates).

And as a conclusion: if, as an upstream or a user, you have an issue with a component that is still outdated in Ubuntu, feel free to get in touch with us (IRC/email/Launchpad) and we will do our best to fix the situation.

06 June, 2019 03:07PM

Ubuntu Podcast from the UK LoCo: S12E09 – Great Giana Sisters

This week we’ve been cabling and tinkering with RGB on Razer keyboards and mice. We discuss a new application for visually impaired users called Magnus, updates from Ubuntu MATE and LibreOffice plus a round up of news from elsewhere in the open source and tech world.

It’s Season 12 Episode 09 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

06 June, 2019 02:00PM