October 16, 2019

Ubuntu developers

Ubuntu Blog: Ubuntu Server development summary – 16 October 2019

Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Ubuntu 19.10 (Eoan Ermine) release imminent

The final testing and certification of Ubuntu 19.10 (Eoan Ermine) are nearly complete! Check out the release notes for a preview of what will be available shortly.

cloud-init

  • Publish cloud-init update 19.2-36-g059d049c-0ubuntu3 to Ubuntu Eoan
  • Publish cloud-init SRU to Xenial, Bionic, Disco: 19.2-36-g059d049c-0ubuntu2
  • net: handle openstack dhcpv6-stateless configuration [Harald Jensås] (LP: #1847517)
  • Add .venv/ to .gitignore [Dominic Schlegel]
  • Small typo fixes in code comments. [Dominic Schlegel]
  • cloud_test/lxd: Retry container delete a few times
  • Add Support for e24cloud to Ec2 datasource. (LP: #1696476)
  • Add RbxCloud datasource [Adam Dobrawy]
  • get_interfaces: don’t exclude bridge and bond members (LP: #1846535)
  • Add support for Arch Linux in render-cloudcfg [Conrad Hoffmann]
  • util: json.dumps on python 2.7 will handle UnicodeDecodeError on binary (LP: #1801364)
  • debian/ubuntu: add missing word to netplan/ENI header (LP: #1845669)
  • ovf: do not generate random instance-id for IMC customization path
  • sysconfig: only write resolv.conf if network_state has DNS values (LP: #1843634)
  • sysconfig: use distro variant to check if available (LP: #1843584)
  • systemd/cloud-init.service.tmpl: start after wicked.service [Robert Schweikert]
  • docs: fix zstack documentation lints
  • analyze/show: remove trailing space in output
  • Add missing space in warning: “not avalid seed” [Brian Candler]
  • pylintrc: add ‘enter_context’ to generated-members list
  • Add datasource for ZStack platform. [Shixin Ruan] (LP: #1841181)
  • docs: organize TOC and update summary of project [Joshua Powers]
  • tools: make clean now cleans the dev directory, not the system
  • docs: create cli specific page [Joshua Powers]
  • docs: added output examples to analyze.rst [Joshua Powers]
  • docs: doc8 fixes for instancedata page [Joshua Powers]
  • docs: clean up formatting, organize boot page [Joshua Powers]
  • net: add is_master check for filtering device list (LP: #1844191)
  • docs: more complete list of availability [Joshua Powers]
  • docs: start FAQ page [Joshua Powers]
  • docs: cleanup output & order of datasource page [Joshua Powers]
  • Brightbox: restrict detection to require full domain match .brightbox.com
  • VMware: add option into VMTools config to enable/disable custom script. [Xiaofeng Wang]
  • net,Oracle: Add support for netfailover detection
  • atomic_helper: add DEBUG logging to write_file (LP: #1843276)
  • doc: document doc, create makefile and tox target [Joshua Powers]
  • .gitignore: ignore files produced by package builds
  • docs: fix whitespace, spelling, and line length [Joshua Powers]
  • docs: remove unnecessary file in doc directory [Joshua Powers]
  • Oracle: Render secondary vnic IP and MTU values only
  • exoscale: fix sysconfig cloud_config_modules overrides (LP: #1841454)
  • net/cmdline: refactor to allow multiple initramfs network config sources
  • ubuntu-drivers: call db_x_loadtemplatefile to accept NVIDIA EULA (LP: #1840080)
  • Add missing #cloud-config comment on first example in documentation. [Florian Müller]
  • ubuntu-drivers: emit latelink=true debconf to accept nvidia eula (LP: #1840080)
  • DataSourceOracle: prefer DS network config over initramfs
  • format.rst: add text/jinja2 to list of content types (+ cleanups)
  • Add GitHub pull request template to point people at hacking doc
  • cloudinit/distros/parsers/sys_conf: add docstring to SysConf
  • pyflakes: remove unused variable [Joshua Powers]
  • Azure: Record boot timestamps, system information, and diagnostic events [Anh Vo]
  • DataSourceOracle: configure secondary NICs on Virtual Machines
  • distros: fix confusing variable names
  • azure/net: generate_fallback_nic emits network v2 config instead of v1
  • Add support for publishing host keys to GCE guest attributes [Rick Wright]
  • New data source for the Exoscale.com cloud platform [Chris Glass]
  • doc: remove intersphinx extension
  • cc_set_passwords: rewrite documentation (LP: #1838794)

curtin

  • storage_config: interpret value, not presence, of DM_MULTIPATH_DEVICE_PATH [Michael Hudson-Doyle]
  • vmtest: Add skip_by_date for test_ip_output on eoan + vlans
  • block-schema: update raid schema for preserve and metadata
  • dasd: update partition table value to ‘vtoc’ (LP: #1847073)
  • clear-holders: increase the level for devices with holders by one (LP: #1844543)
  • tests: mock timestamp used in collect-log file creation (LP: #1847138)
  • ChrootableTarget: mount /run to resolve lvm/mdadm issues which require it.
  • block-discover: handle multipath disks (LP: #1839915)
  • Handle partial raid on partitions (LP: #1835091)
  • install: export zpools if present in the storage-config (LP: #1838278)
  • block-schema: allow ‘mac’ as partition table type (LP: #1845611)
  • jenkins-runner: disable the lockfile timeout by default [Paride Legovini]
  • curthooks: use correct grub-efi package name on i386 (LP: #1845914)
  • vmtest-sync-images: remove unused imports [Paride Legovini]
  • vmtests: use file locking on the images [Paride Legovini]
  • vmtest: enable arm64 [Paride Legovini]
  • Make the vmtests/test_basic test suite run on ppc64el [Paride Legovini]
  • vmtests: separate arch and target_arch in tests [Paride Legovini]
  • vmtests: new decorator: skip_if_arch [Paride Legovini]
  • vmtests: increase the VM memory for Bionic
  • vmtests: Skip Eoan ZFS Root tests until bug fix is complete
  • Merge branch ‘fix_merge_conflicts’
  • d/control: update Depends for new probert package names [Dimitri John Ledkov]
  • util: add support for ‘tbz’, ‘txz’ tar format types to sanitize_source (LP: #1843266)
  • net: ensure eni helper tools install if given netplan config (LP: #1834751)
  • d/control: update Depends for new probert package names [Dimitri John Ledkov]
  • vmtest: fix typo in EoanBcacheBasic test name
  • storage schema: Update nvme wwn regex to allow for nvme wwid format (LP: #1841321)
  • Allow EUI-64 formatted WWNs for disks and accept NVMe partition naming [Reed Slaby] (LP: #1840524)
  • Makefile: split Python 2 and Python 3 unittest targets apart
  • Switch to the new btrfs-progs package name, with btrfs-tools fallback. [Dimitri John Ledkov]

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 7

Uploads Released to the Supported Releases

Total: 82

Uploads to the Development Release

Total: 142

16 October, 2019 09:32PM

Purism PureOS

Librem 5 Aspen Batch – Photo and Video Gallery

Librem 5s from the Aspen batch have started shipping to early backers, so we’ve done a roundup of some of the best photos and videos shared by us and others, as well as some never-before-seen photos.

Photos

Black Anodized Aluminum Chassis

The Librem 5 case has evolved into a black anodized aluminum shell (with non-metal backing to keep radio reception quality high) with flush, easy-to-slide hardware kill switches.

From the Factory Floor

The Purism factory is ready to ship thousands of Librem 5s to backers over the coming months.

Shell and Applications

Just look at how great PureOS with GNOME and our default applications look on mobile!

Updating PureOS

PureOS is secure and easy to update for any user.

In the Wild

People have been using their Librem 5 while traveling, working and relaxing to connect to WiFi hotspots, browse the internet, use social media, play games and yes… to call and send text messages.

Videos:

If you are looking to see the Librem 5 in action, we’ve got you covered, including a hands-on (and hardware teardown) from “The Linux Gamer” featuring our CEO, Todd Weaver.

  • Purism: The Librem 5 Now shipping on Archive.org and YouTube.
  • Purism: Librem 5 Hardware Kill Switches on Archive.org and YouTube.
  • Purism: Librem 5 First Run Walk-through on YouTube.
  • The Linux Gamer: I got my hands on the Librem 5 Phone on YouTube.
  • The Linux Gamer: Librem 5 teardown with Purism CEO Todd Weaver on YouTube.

We will continue to share more media and stories from other users as they roll in. Thank you to our community for the support and excitement, and for helping us make a private, secure and open Linux smartphone!


Discover the Librem 5

Purism believes building the Librem 5 is just one step on the road to launching a digital rights movement, where we—the people—stand up for our digital rights, where we place the control of your data and your family’s data back where it belongs: in your own hands.

Preorder now

The post Librem 5 Aspen Batch – Photo and Video Gallery appeared first on Purism.

16 October, 2019 03:19PM by Sean Packham

Halo Privacy partners with Purism

SAN FRANCISCO, Calif. & SEATTLE, Wash., October 15, 2019 — Halo Privacy partners with Purism to provide best-in-class secure hardware devices to large enterprise customers in defense, aerospace, and the cryptocurrency/fintech sector.

Halo is excited to deliver solutions utilizing Purism’s industry-unique security stack across Librem laptops, the Librem 5 phone, and the recently released Made-in-the-USA Librem Key. This advanced security combines hardware with PureBoot, Purism’s UEFI replacement (combining coreboot, Heads, TPM, and the Librem Key), to cryptographically guarantee the integrity of the lowest levels of hardware and firmware.

Halo Privacy combines custom managed-attribution techniques with strong cryptography to secure communications from direct attack while maintaining confidentiality for a user’s identity. By integrating with the Purism suite, Halo significantly reduces the attack surface while providing strong assurance based on the integrity of Purism’s supply chain.

Building on a foundation of shared enthusiasm for privacy and control, Purism and Halo Privacy are happy to announce a partnership focused on delivering Purism hardware into Halo Privacy’s Corona & Eclipse secure communications platforms. Halo is a solutions partner serving its network of government and private-sector clients. As an additional step, Halo is allocating developer resources to deliver additional functionality on Purism’s platform.

“Halo Privacy has proven to be an instrumental partner with Purism, helping shape some of the security products by getting involved in the early phases of development and product purchasing,” says Todd Weaver, Founder & CEO of Purism.

“When looking to mitigate the supply chain risk in publicly available hardware offerings, nothing compares to Purism. Delivering solutions using the foundational strength of Purism’s products provides an unparalleled level of confidence and control,” says Lance Gaines, Founder & CTO of Halo Privacy.

About Purism:

Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco, California, and around the world, Purism manufactures premium-quality laptops and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware by carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

Media Contact:
Marie Williams
Coderella
415-689-4029
pr@puri.sm

About Halo Privacy:

Halo Privacy was founded by like-minded experts with many years of experience in government and industrial secure communications who believe that genuine privacy is still possible. The Halo approach to securing privacy goes beyond just hardware and technology. A keen understanding that the greatest threats to privacy are often human, coupled with the know-how to assume an attacker’s perspective, has allowed Halo to protect the most sensitive information for grateful clients in the government and private sectors. Through a combination of sophisticated intelligence tradecraft, streamlined training and proprietary disruptive technology, Halo offers every client a tailored privacy solution. For customers from government entities to corporations to family offices seeking a low profile, we place a secure “Halo” around our clients’ smartphones, laptops, smart homes and businesses that protects the information, intellectual property and personal privacy of everyone and everything inside the “Halo.” Simple training and concierge-level staff support ensure frictionless client use of the Halo systems. In fact, anyone who can manage email and a smartphone is already savvy enough to communicate securely within the Halo.

For more information, please contact: press@haloprivacy.com

The post Halo Privacy partners with Purism appeared first on Purism.

16 October, 2019 02:50PM by Purism

Cumulus Linux

How inspiration from your data center can modernize your campus network

Campus networks are undergoing a rapid evolution as they draw inspiration from their data center peers from both a technology and cost perspective. At the forefront of this evolution is open networking, led by innovation and cost efficiencies that apply equally across data center and campus networks.

Interestingly, Cumulus Linux was originally intended for data center networking, but without a doubt the lines between data center and campus are blurring, with campus standing to benefit significantly, and it’s about time. It’s the data center that has historically benefited from innovation, especially in compute and storage. The data center network, however, seemed to lag for more than a decade, until our founders set out in 2010 to develop a fundamentally different approach to the data center with Cumulus Networks.

Cumulus Networks introduced an open, modern and innovative network operating system called Cumulus Linux. Cumulus Linux was originally designed to emulate the network architecture of the web-scale giants, including Google, Amazon, Apple, Microsoft and Facebook, allowing you to automate, customize and scale your data center network like no other and, for the first time, bringing this capability to the masses.

Cumulus Networks is building the modern data center network for the applications of the future. We provide networking software, including a network operating system and network operations tools that deliver actionable insights via real-time streaming telemetry, to help you architect, build, operate, and manage your data center in a simpler, more open, agile, and scalable way.

So it’s at the request of our customer install base that we are entering the campus network space, allowing organizations to take the same innovation and efficiencies that have emerged over the last decade in the data center and apply them to a “Modern Campus.” In particular, we’re already seeing large enterprise campus networks starting to look much like enterprise data centers, primarily fueled by a desire to natively leverage automation, to simplify, and to add much-needed flexibility and control to their campus networks. Accomplishing this requires a new approach to stale legacy campus architectures that, like their data center brethren, were not originally designed to handle modern workloads such as containers, devops/automation and big data, or modern applications.

With this open, modern, disaggregated, standards-based approach to networks, the simplicity and seamless integration with automation tools, and the ability to leverage common tools and best practices across the data center, our customers are finding a better, more modern approach to building campus networks: one designed with the applications of the future in mind.

These benefits are now available to campus networks through the joint solution from Dell EMC and Cumulus Networks.

In addition to the features and benefits that make Cumulus Linux and NetQ/NetQ Cloud so appealing to our data center customers, we recognize that there are additional campus features that are specialized and non-negotiable for our customers. The most critical include (but are not limited to):

  • 802.1x for authentication and security (with many sub-features: MAB/MDA/URL redirect/port security, etc.)
  • PoE mandatory
  • 1G-supported hardware with a large number of ports (Multigig coming soon)
  • Voice VLAN
  • L2/L3
  • Automation
  • MLAG

It’s time to consider that a legacy approach to campus networks may no longer provide efficiency from a technology or cost perspective, and often leads to an inflexible network with a band-aid approach to features that continues to add cost and complexity.

If you’d like to learn more, you’ll find a full set of collateral, videos, web pages and a press release below. But don’t take my word for it: I suggest you try Cumulus Linux and NetQ in your campus environment for yourself by taking a test drive.

Modern Campus Network Press Release
Cumulus Linux for Modern Campus Web Page
How can web-scale networking modernize your campus networks? Whitepaper
Naturalis adopts Cumulus Linux to modernize their campus Network Video
Modern Campus Networking Architecture Guide
Cumulus Linux Datasheet
Cumulus Linux Solution Overview
Cumulus NetQ and NetQ Cloud Web Page
Cumulus NetQ and NetQ Cloud Datasheet

To join our live webinar, “Extending data center innovation to campus networks”, on Tuesday, October 29th at 10 am Pacific, please register here. If you would like to see Cumulus Linux and NetQ in action, don’t hesitate to contact your sales team for a demo.

Please let me know if you have any comments or questions, here or via Twitter at @CicconeScott.

16 October, 2019 01:00PM by Scott Ciccone

Ubuntu developers

Ubuntu Blog: Ansible vs Terraform vs Juju: Fight or cooperation?

Ansible vs Terraform vs Juju vs Chef vs SaltStack vs Puppet vs CloudFormation – there are so many tools available out there. What are these tools? Do I need all of them? Are they fighting with each other or cooperating?

The answer is not really straightforward. It usually depends on your needs and the particular use case. While some of these tools (Ansible, Chef, SaltStack, Puppet) are pure configuration management solutions, the others (Juju, Terraform, CloudFormation) focus more on service orchestration. For the purposes of this blog, we’re going to focus on the Ansible vs Terraform vs Juju comparison – the three major players which have dominated the market.

Ansible

Ansible is a configuration management tool, currently maintained by Red Hat Inc. Although the core project is open-source, some commercial extensions, such as Ansible Tower, are available too. By supporting a variety of modules, Ansible can be used to manage both Unix-like and Windows hosts. Its architecture is serverless and agentless. Instead of using proprietary communication protocols, Ansible relies on SSH or remote PowerShell sessions to perform configuration tasks.

Ansible vs Terraform vs Juju (Ansible)

The tool implements an imperative DevOps paradigm. This means that Ansible users are responsible for defining all of the steps required to achieve their desired goal. This includes writing instructions on how to install applications, preparing templates of configuration files, etc. All these steps are usually implemented in the form of so-called playbooks, although users can execute ad hoc commands too. Once written, the playbooks can be used to automate configuration tasks across multiple machines in various environments.

Although perfectly suited to traditional configuration management, Ansible cannot really orchestrate services. It was simply designed for different purposes, with automation at its core. Moreover, some of its modules are cloud-specific, which makes a potential migration from one platform to another difficult. Finally, due to its imperative nature, Ansible does not scale well in large environments consisting of various interconnected applications.

Terraform

In turn, Terraform is an open-source IaC (Infrastructure-as-Code) solution developed by HashiCorp. It allows users to provision and manage cloud, infrastructure, and service resources using a simple, human-readable configuration language called HCL (HashiCorp Configuration Language). The resources are delivered by so-called providers. At the moment, Terraform supports over 200 providers, including public clouds, private clouds and various SaaS (Software-as-a-Service) offerings, such as DNS, MySQL or Vault.

Ansible vs Terraform vs Juju (Terraform)

Terraform uses a declarative DevOps paradigm, which means that instead of defining the exact steps to be executed, the ultimate state is defined. This is huge progress compared to traditional configuration management tools. However, Terraform’s declarative approach is limited to providers only. The applications being deployed still have to be installed and configured using traditional scripts and tools. Of course, pre-built images can be used too when deploying applications in cloud environments. Those can later be customized according to the users’ requirements.

In addition to the initial deployment, Terraform can also be used to orchestrate deployed workloads. This functionality is provided by its execution plans and resource graphs. Thanks to execution plans, users can define the exact steps to be performed and the order in which they will be executed. In turn, resource graphs allow users to visualise those plans. Again, this is much more than what Ansible can do.

Juju

In contrast to both Ansible and Terraform, Juju is an application modelling tool, developed and maintained by Canonical. You can use it to model and automate deployments of even very complex environments consisting of various interconnected applications. Examples of such environments include OpenStack, Kubernetes or Ceph clusters. Apart from the initial deployment, you can also use Juju to orchestrate the deployed services. Thanks to Juju, you can back up, upgrade or scale out your applications as easily as executing a single command.

Like Terraform, Juju uses a declarative approach, but it takes it beyond the providers, up to the application layer. You can not only declare the number of machines to be deployed or the number of application units, but also configuration options for the deployed applications, relations between them, etc. Juju takes care of the rest of the job. This allows you to focus on shaping your application instead of struggling with the exact routines and recipes for deploying it. Forget the “How?” and focus on the “What?”.

Ansible vs Terraform vs Juju (Juju)

The real power of Juju lies in charms – collections of scripts and metadata which contain the distilled knowledge of experts from Canonical and other companies. Charms contain all the logic required to install, configure, interconnect and operate applications. Canonical maintains a Charm Store with over 400 charms, but you can also write your own charms, as the whole framework and ecosystem are fully open-source.

While Juju’s role is to deploy and orchestrate applications, like Terraform it relies on a variety of providers to spin up machines (bare metal, VMs or containers) for hosting those applications. The supported providers include leading public clouds (AWS, Google Cloud, Azure, etc.) and various on-premise providers: LXD, MAAS, VMware vSphere, OpenStack and Kubernetes. In the rare case that your cloud environment is not natively supported by Juju, you can use the manual provider to let Juju deploy applications on top of your manually provisioned machines.

Ansible vs Terraform vs Juju

Now, as we’ve arrived at the last section of this blog, can we somehow compare Ansible vs Terraform vs Juju? The answer is short: we cannot. This is because all of them were designed for different purposes and with a different focus in mind. It is fair to say that in some way they form an evolution path of lifecycle management frameworks. A direct Ansible vs Terraform vs Juju comparison is therefore hard to perform, as each of them is quite different.

Thus, if we cannot compare them, let’s maybe get back to the original questions and try to answer them instead.

Do I need all of those tools?

Well, it really depends on your use case, so let’s sum up what these tools are for. Ansible is a configuration management tool and fits very well wherever traditional automation is required. On the other hand, Terraform focuses more on infrastructure provisioning, assuming that applications will be delivered in the form of pre-built images. Finally, Juju takes a completely different approach by using charms for application deployments and operations.

Are they fighting with each other or cooperating?

There are definitely areas in which they cooperate. For example, Juju charms can use Ansible playbooks to maintain configuration files. Or you can use Juju-deployed applications (e.g. OpenStack) as a provider for Terraform. As data centers become more and more complex, there’s definitely space for all of them, because all of them are great at what they do and what they were designed for.

I want Juju, what next?

If you want to evolve your DevOps organisation and benefit from a model-driven, declarative approach to application deployments and operations, Juju is the answer. Simply visit the Juju website, watch the “Introduction To Juju: Automating Cloud Operations” webinar or contact us directly. Canonical’s DevOps experts are waiting to help you move forward with the transformation of your organisation.

16 October, 2019 07:17AM

October 15, 2019

Ubuntu Blog: Grace Hopper Conference 2019

We are so excited about what just happened that we felt we should tell everyone about it!

A group of 24 of us at Canonical, from various teams including Sales, HR and Engineering, attended the Grace Hopper Celebration in Orlando, Florida. This year it was an epic gathering of more than 26,000 people from all over the globe interested in tech. Despite its start as women’s work, the tech industry has gained a reputation for being dominated by and mostly suited for men. If anything, this only made the Grace Hopper conference feel more impactful, especially knowing that at its very first edition in 1994, only 500 women were present. The Grace Hopper Conference was an awesome celebration of women: diverse, multi-talented, and deeply skilled!

Both women and men, mostly students, interested in everything from security to machine learning came by the Canonical booth to hear about Ubuntu. We brought along an Orange box so we could demo MAAS, OpenStack, and other incredible technologies happening on Ubuntu at Canonical.

Canonical employee standing at an Ubuntu Orange Box setup, which includes a computer screen and a rectangular box with various ports

We rotated between attending informative and inspiring sessions, exploring an exhibition hall pulsating with energy and filled with booths as far as the eye could see, and discussing Canonical offerings and job opportunities at our Canonical booth.

Canonical employee speaking with three conference attendees in front of the purple Canonical booth
A group of Canonical employees and conference attendees at the Canonical booth

The week had so many highlights. We discussed various technologies with others in the industry, scoped out exceptional talent for Canonical job opportunities, visited various booths and found out who uses Ubuntu and what for. We also gave out Ubuntu trinkets and collected bags of trinkets from others. Perhaps our favourite part was just hanging out and getting to know fellow Canonical’ers on the various teams and what they worked on.

Two Canonical employees standing side by side, smiling at the camera and holding pink and purple balloon swords

All of us had the opportunity to share what we do and what we love about working for Canonical, the company behind Ubuntu. It was interesting for us that most of the people we met did not know the name ‘Canonical’, but knew and worked regularly with Ubuntu. Someone even said: “Ubuntu is the reason I chose this career!” and was very excited to talk to the people behind it.

Meeting that many smart women in tech made us realise that we are not alone. Every one of us has the capacity to contribute and drive change. #WeWill make a difference. See you next year at GHC 2020!

15 October, 2019 07:00PM

Costales: Ubucon Europe 2019 | Sintra edition

And a new Ubucon Europe begins! This time in Sintra, Portugal.
Welcome!


I arrived the day before, just in time for a welcome dinner held at a quirky business incubator: Chalet 12. There, around 25 of us shared a lovely evening over a dinner cooked by the organizers themselves.
Marco | Costales | Tiago | Olive
The truth is that the Ubuntu-PT organizers ran activities and tours all week for community members arriving early, a really nice touch.

Day 1

I was among the first to arrive at the Olga Cadaval Cultural Centre, a building divided into two main wings with large open spaces.
Besides the talks, there were UBports and Libretrend stands, and even free coffee all day long. At the UBports stand I was able to try out the Pinebook running Ubuntu Touch.

Pinebook


After picking up my badge and a welcome pack (t-shirt, pins, stickers...), the presentation of this new edition began in the auditorium, given by Tiago Carrondo.

Opening talk

Right after that, Tiago himself announced Ubuntu’s 15th birthday, something I hadn’t thought about and which was really cool, reviewing the most important moments of Ubuntu’s short but intense life.
15th Birthday talk


I did my bit with two talks. In the first, in the morning, surrounded by art (paintings by Nadir Afonso), I analyzed the dangers to our online privacy and how we can improve it.
Privacy on the Net


As soon as I finished my talk, I went to catch half of Rudy’s talk on “Events in your Local Community Team”, reviewing the achievements of Ubuntu Paris, with its Ubuntu Party and WebCafe.

Events in your Local Community Team

At 13:15 a few of us went out for lunch at a restaurant near the station.

Lunch


I was to give a two-hour workshop at 3 pm (or so I thought) on how to develop a native application for Ubuntu Touch. Mateo Salta and I left a bit early to arrive on time, but Tiago was already looking for me: the session actually started at 14:30 and people had been waiting since then. How embarrassing; from here, my apologies to the organizers and to the attendees of my talk for that delay. In the workshop I showed how to build a flashlight app in QML for Ubuntu Touch, something that amazed the attendees with its simplicity and how few lines of code it took.

Creating an Ubuntu Phone app


We ended the day by going to a brewery to warm up

Saloon 2


and afterwards by having dinner all together at the restaurant O Tunel, where we tasted exquisite traditional dishes. These moments are the best (in my opinion), as that’s when community is really created and lived.

Dinner

Day 2

A long day ahead, with 4 simultaneous talks.
I went for Jesús Escolar’s talk, Applied Security for Containers, one where you realize the dangers surrounding every platform and service.
Applied Security for Containers


Afterwards I met Vadim, a professional web developer who showed us his workflow and little tricks to save time while developing.
Vadim’s scripts


After Vadim, Marius Quabeck showed the steps to create a podcast. I noted down some of the software he mentioned for editing the “Ubuntu y otras hierbas” podcast.

Quabeck showing how to create and edit a podcast


Lunch wasn’t organized and we all went together, so it was hard to find a restaurant for so many people.
In the afternoon, Joan CiberSheep opened the talks by showing us the options for creating an Ubuntu Touch application. I had got a bit stuck in time with Canonical’s commands and workflow, and UBports has evolved mobile development on Ubuntu enormously.
Joan


Finally, Simos showed us the virtues of LXC in his talk Linux Containers LXC/LXD.
Linux Containers


Worth highlighting here is the gifbox that Rudy and Olive set up: a camera that stitches a sequence of photos into a GIF, with very funny and unexpected results for everyone who gets photographed.




In the evening, the plan was to meet at a brewery on the outskirts. After some tapas, the owner showed us the beer-making process in his small cellar.

Explaining the beer-brewing process to us


The main course was grilled cod along with a beer tasting. This event was partially funded by an anonymous patron, so a thousand thanks from this humble post.
Brewery


As a finishing touch, Jaime prepared a surprise that thrilled me: a small band of 2 bagpipes and a drum entertained us and got us dancing at a party that lasted until midnight.

Party! :)


Day 3

Today holds a party for Ubuntu’s 15th anniversary; we’re all eager to see what it will be like :P
You could say that today is the ‘UBportsCON’, as there will be loads of talks on the state of Ubuntu Touch.
The very first one is by Jan Sprinz, reviewing the past, showing us the present and analyzing where this interesting project is heading, a project that gives us a free alternative to the all-powerful Android and iOS.
Jan Sprinz telling the history of Ubuntu Touch


Jan himself showed us one of UBports’ strongholds: the installer that automates installing Ubuntu Touch on our phone and makes it child’s play, as long as the phone is one of the supported devices it has been ported to.
After Jan’s talk, Rudy called me over to the Ubuntu Europe Federation Board Open Meeting; the federation was created precisely to make it easier for organizers to run Ubuntu events like this one.
Closing the morning, Joan CiberSheep explained the Ubuntu Touch usability and design guidelines.
Ubuntu Touch usability and design


This time we had lunch in groups at different restaurants and came back on time for the group photo.
Then the great Martin Wimpress told us the story of snap packaging and Canonical’s reasons for creating it.
Martin Wimpress


A very interesting talk was Dario Cavedon’s, which linked his passion for running with privacy in an unusual way.
Dario Cavedon


As my last talk I chose Rute Solipa’s, who explained the process and the difficulties of migrating the Portuguese municipality of Seixal to free software.

Seixal migration


At night we went to the same bar as before, having dinner and celebrating Ubuntu’s 15th anniversary to the sound of bagpipes :))

Birthday party

Day 4

Last day of the Ubucon :'( I want more hehehe
I chose Michal Kohutek’s talk, in which he showed us how to improve educational materials by analyzing the reader’s eye movements with tracking sensors.
Michal and Jesús Escolar with eye tracking


Marco Trevisan showed us the Ubuntu desktop’s transition to GNOME and what the upcoming LTS release has in store for us.
Upcoming Ubuntu 20.04


And to wrap up, Tiago Carrondo, who had opened the first day, closed the event by explaining what it takes to put on an Ubucon, the difficulties of organizing this edition, and attendance statistics. It was moving when all the volunteers came up on stage.

The end


For lunch we went in groups to different restaurants; we finished up at a café with coffee and cake.

Lunch


In the afternoon I had planned to stroll around and get to know Sintra a bit better, but with Joan one conversation leads to the next, so the afternoon went by at the same bar where we had dined the previous days. At dinner time more people joined us, and one in the morning caught us still trying to fix the world :)

The last survivors

The summary


Ubucon Europe grows stronger year after year. This year’s organization has been very good, with many talks and extra activities.
Sintra was a good choice: a welcoming city with good infrastructure for holding an event of this kind.
And it was one more proof that the best thing about Ubuntu is its community.

See you next year!


Word is that next year it will be in Italy... Who knows, hopefully! :)
Already part of the memories: having enjoyed a unique event, having learned a little in each of the talks and, especially, seeing again the friends made at previous editions, who are the ones that really make Ubucon Europe so endearing.

15 October, 2019 05:00PM by Costales (noreply@blogger.com)

ArcheOS

Forensic Facial Reconstruction of St. Catherine of Genua (technical report)

Hi all,
this year we worked on several Forensic Facial Reconstructions (FFR) with our expert Cicero Moraes. One of these projects concerns St. Catherine of Genua, whose mortal remains are preserved in the church of the "Santissima Annunziata di Portoria" in Genua, and are considered by the Roman Catholic Church to be among the so-called "incorrupt bodies".
The FFR project has been very interesting, since it required some extraordinary procedures due to three factors: the exceptional state of preservation of the body, the particular structure of the sarcophagus, and the history of the different necroscopic examinations of the relics of the Saint.
In order to perform the final FFR, we had to adapt our protocol to this particular situation. The solution came from a 3D model produced with SfM-MVS techniques, without opening the sarcophagus, and from some reverse engineering techniques related to the "Coherent Anatomical Deformation" developed in 2014.
Thanks to the kindness of padre Vittorio Casalino, we can now share not only the final result of our study (image below), but also the scientific report (for now, unfortunately, only in Italian), which you can read on ResearchGate, Academia or simply in the Arc-Team Digital Archive.

The final model of St. Catherine of Genua developed by Arc-Team (FFR by Cicero Moraes, 3D data acquisition by Alessandro Bezzi, historical research by Luca Bezzi)

I will try to translate the text into English ASAP (any help is greatly appreciated), but in the meantime I hope this version will be useful, also to continue the scientific discussion about FFR (which has been pretty animated over the last year).
Have a nice day!

15 October, 2019 04:13PM by Luca Bezzi (noreply@blogger.com)

AlienVault OSSIM

Security monitoring for managed cloud Kubernetes

Photo by chuttersnap on Unsplash. Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. It has recently seen rapid adoption across enterprise environments. Many environments rely on managed Kubernetes services such as Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS) to take advantage of the benefits of both containerization and the cloud. Google Cloud Platform, for example, offers advanced cluster management features such as automatic upgrades, load balancing, and Stackdriver logging. Amazon Web Services (AWS) provides Kubernetes Role-Based Access Control (RBAC) integration with AWS IAM Authenticator and logging via CloudTrail and CloudWatch. However, adoption of these cloud-managed services can introduce new challenges to your monitoring and detection capabilities, such as complexity in understanding the shared responsibility model and a lack of normalization in logging & monitoring. The purpose of this post and associated release is to help fill...

Posted by: Ashley Graves

Read full post


15 October, 2019 01:00PM

Ubuntu developers

Ubuntu Blog: Design and Web team summary – 11 October 2019

This was a fairly busy two weeks for the Web & Design team at Canonical. This cycle we had two sprints. The first was a web performance workshop run by the amazing Harry Roberts. It was a whirlwind two days where we learned a lot about networking, browsers, font loading and more. We also spent a day implementing a lot of the changes, so hopefully our sites will feel a bit faster. More updates will be coming over the next few months. The second sprint was for the Brand and Web team, where we looked at where the Canonical and Ubuntu brands need to evolve. Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Takeovers and engage pages

This iteration we built two webinars with engage pages and two more case study engage pages.

Deep Tech webinar

We built a new homepage takeover along with an engage page to learn more about the webinar.

Intro to edge computing webinar series

We created a homepage takeover that leads to an engage page with a series of webinars people can watch about computing at the edge.

Yahoo! Japan case study

We posted a new case study about how Canonical works with Yahoo! Japan and their IaaS platform.

Domotz case study

We posted a new case study about how Canonical has helped Domotz with their IoT strategy.

Base

Base is the team that underpins the toolsets and architecture of our projects. They look after the CI and deployment of all the websites we maintain.

HTTP/2 and TLS v1.3 for Kubernetes sites

Back in August, a number of vulnerabilities were discovered in HTTP/2 which opened up some DoS possibilities. In response, we disabled HTTP/2 for our sites until the vulnerabilities were fixed.

This iteration, the NGINX Ingress controller on our k8s cluster was updated, so our sites are now served with the latest version of openresty, which includes all the relevant fixes for these earlier vulnerabilities. Accordingly, we’ve re-enabled HTTP/2, which was also a strong performance recommendation from Harry during the workshop.

Another recommendation was that we switch to the latest TLS v1.3, which also carries significant performance benefits, so we switched this on for the whole cluster this iteration.

IRC bot migrated to our kubernetes cluster

We maintain a Hubot-based IRC bot for alerting us to new pull-requests and releases on our projects. Up until now, this has been hosted externally on Heroku.

This iteration, we added a Dockerfile so it could be built as an image and the configs to host it on Kubernetes. We’ve released it so now our IRC bot is hosted in-house on Kubernetes 🎉.

image-template v1

Our canonicalwebteam.image-template module provides a template function which outputs <img> element markup in a recommended format for performance.

The performance workshop highlighted a number of best practices, which we used to improve the module and release v1.0.0.

Request latency metrics in Graylog

Many of our sites (particularly snapcraft.io, jaas.ai, ubuntu.com/blog and certification.ubuntu.com) rely heavily on pulling their data from an API. For these sites, the responsiveness of those APIs is central.

Talisker, our Gunicorn-based WSGI server, can output latency stats for outgoing API requests as either Prometheus metrics or just in logs.

This iteration, we have enhanced our Graylog installation to read these metrics from logs and output beautiful graphs of our API latency.

MAAS

The MAAS squad develops the UI for the MAAS project.

Our team continues with the work of separating the UI from maas-core. We have very nearly completed taking the settings section to React and are converting the user preferences tab to React as well.

We are also progressing with the work on network testing. The core functionality is all complete now and we’re ironing out some final details.

As part of the work on representing NUMA topology in MAAS, we completely redesigned the machine summary page, which was implemented this iteration.

We are also experimenting with introducing a white background to MAAS, as well as to the rest of the suite of websites and applications we create. This work is ongoing.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

The team continued working on the new JAAS dashboard, moving forward the design with explorations on responsiveness, interactions, navigation, and visuals.

The team also continued working on the Juju website, and on the alignment between the CLI commands of Juju, Snap, Charm and Snapcraft. CharmHub-wise, the team explored the home page of the new website charmhub.io, to start defining the content and the user experience of the page and its navigation.

Snapcraft

The Snapcraft team works closely with the snap store team to develop and maintain the snap store website.

The headline story from the last iteration is the improvement to overall page load times, specifically on the store page. With some code reorganisation and the aforementioned image-template module, we’ve managed to drop the initial load time of the store page from an average of ~15s to ~5s, or quicker.

Faster Snap browsing for everyone!

15 October, 2019 11:46AM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, September 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 212.75 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Adrian Bunk did nothing (and got no hours assigned), but has been carrying 26h from August to October.
  • Ben Hutchings did 20h (out of 20h assigned).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 30h (out of 23.75h assigned and 5.25h from August), thus anticipating 1h from October.
  • Hugo Lefeuvre did nothing (out of 23.75h assigned), thus is carrying over 23.75h for October.
  • Jonas Meurer did 5h (out of 10h assigned and 9.5h from August), thus carrying over 14.5h to October.
  • Markus Koschany did 23.75h (out of 23.75h assigned).
  • Mike Gabriel did 11h (out of 12h assigned + 0.75h remaining), thus carrying over 1.75h to October.
  • Ola Lundqvist did 2h (out of 8h assigned and 8h from August), thus carrying over 14h to October.
  • Roberto C. Sánchez did 16h (out of 16h assigned).
  • Sylvain Beucler did 23.75h (out of 23.75h assigned).
  • Thorsten Alteholz did 23.75h (out of 23.75h assigned).

Evolution of the situation

September was more like a regular month again, though two contributors were not able to dedicate any time to LTS work.

For October we are welcoming Utkarsh Gupta as a new paid contributor. Welcome to the team, Utkarsh!

This month, we’re glad to announce that Cloudways is joining us as a new silver-level sponsor! With the reduced involvement of another long-term sponsor, we are still at the same funding level (roughly 216 hours sponsored per month).

The security tracker currently lists 32 packages with a known CVE and the dla-needed.txt file has 37 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


15 October, 2019 07:20AM

October 14, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 600

Welcome to the Ubuntu Weekly Newsletter, Issue 600 for the week of October 6 – 12, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

14 October, 2019 08:38PM

Freedombone

Epicyon Calendar

Going beyond the usual functionality of fediverse servers, calendars have now been added to Epicyon. This is part of the effort to add features useful for organizing social events.

When creating a new post you can now optionally add a date, time and place description. The information gets attached to the post as a tag, using the ActivityStreams Event and Place types.

There is now a calendar icon which then shows the current month.

Events posted by people you follow will show up there, and selecting particular days then gives you a list of events. If you just want to create reminders for your own use then you can send a DM to yourself containing a calendar event.

Epicyon calendar

This makes organizing events very simple, and you can use message scopes to set up public or private events.

Users of Zot-based networks will be yawning at this stage, because this sort of functionality has existed since Friendica, but as far as I know it's new within ActivityPub-based systems.

14 October, 2019 12:02PM

October 13, 2019

Ubuntu developers

Ubucon Europe 2019: Ubucon Europe 2019 in local media

Remember Marta, our volunteer from the registration booth? She took care of the translation of the article written by Fátima Caçador for SAPO Tek:

Ubucon Europe: What is the Ubuntu community doing in Sintra? Sharing technical knowledge and tightening connections

News about the new Ubuntu release, explorations of its several platforms and many “how-tos” rule the 4-day agenda, where open source and open technologies are in the air.

The Olga Cadaval Cultural Centre in Sintra is the main stage of a busy agenda filled with several talks and more technical sessions, but at Ubucon Europe there’s also room for networking and cultural visits: a curious fusion between spaces full of history, like the Pena Palace or the Quinta da Regaleira, and one of the youngest “players” in the world of software.

For 4 days, the international Ubuntu community gathers in Sintra for an event open to everyone, where open source principles and open technology dominate. The Ubucon Europe conference began Thursday, October 10th, and runs until Sunday, October 13th, keeping an open-doors policy for everyone who wants to attend.

After all, what is the importance of Ubucon? The number of participants, which should be around 150, doesn’t tell the whole story of what you can learn during these days, as SAPO TEK had the opportunity to see this morning.

Organised by the Ubuntu Portugal community, with the National Association for Open Software, the Ubuntu Europe Federation and the Sintra Municipality, the conference brings to Portugal some of the biggest open source specialists and shows that Ubuntu is indeed alive, even if not yet known by most people, and still far from the “world domination” to which some aspire.

15 years of Ubuntu

This year is Ubuntu’s 15th birthday, after its creation in 2004 by Mark Shuttleworth, the South African entrepreneur who gathered a team of Debian developers and founded Canonical with the purpose of developing a Linux distribution that was easy to use. He called it Ubuntu, a word that comes from the Zulu and Xhosa languages meaning “I am because we are”, which shows its social dimension.

The millionaire Mark Shuttleworth declared at the time: “my motivation and goal is to find a way of creating a global operating system for desktops which is free in every way, but also sustainable and with a quality comparable to anything that money can buy”.

And in the last 15 years Ubuntu hasn’t stopped growing, following trends and moving from the desktop and servers to the cloud, the IoT and even phones. Canonical ended up withdrawing from this last one, leaving the development in UBports’ hands.

“Ubuntu has never been better,” states Tiago Carrondo, head of the Ubuntu Portugal community, explaining that cloud usage is growing every month and the same is happening on the desktop. “The community has proved to be alive and participative,” and Ubucon is an example of that capacity to deliver and to be involved in projects.

A new version of Ubuntu is going to be launched in two weeks (October 19th), and in April next year it’s time for Ubuntu 20.04, the new LTS version, which is generating expectations and is the focus of several talks during Ubucon.

An operating system not just for ‘geeks’

But is this a subject just for “geeks” who don’t mind getting their hands dirty and messing with code to adapt the operating system to their needs? Gustavo Homem, CTO of Ângulo Sólido, assures us that Ubuntu is increasingly used by companies, and that on Azure, AWS and DigitalOcean it is among the most used operating systems in the cloud, highlighting its ease of use, flexibility and security.

Ângulo Sólido uses Ubuntu internally and with their clients, from desktops to routers and cloud solutions, and during Ubucon it presented both the most and the least expected uses for Ubuntu, including some hacks with mixing desks.

It’s in the cloud that Ubuntu has grown the most, owing to the freedom of the operating system; on desktop and laptop computers it depends on manufacturers’ willingness to sell devices with a pre-installed operating system, or without one, leaving room for Ubuntu to be used.

However, even though it’s easy to use, increasingly ready to connect to all kinds of peripherals and supports most of the software on the market, Ubuntu is far from being recognised by the majority of computer users, so its use is reserved to a restricted group of people with more technical training and knowledge.

On cell phones, where a movement arose in 2014 to create an operating system that could be an alternative to Android and iOS, Canonical’s abandonment of the project didn’t help create a mass movement involving manufacturers. The UBports community continues developing the concept and the code, and during Ubucon it showed some news and developments with the Fairphone and Pine64, but it’s still far from becoming a solid operating system you can fully trust, as Jan Sprinz admitted.

In the audience of the talk SAPO TEK attended there were many users of Ubuntu Touch, the mobile operating system, though with doubts and concerns, such as the availability of the most-used apps. Nevertheless, the operating system is cherished; someone even compared it to a pet that may destroy the living room and chew the shoes, but whose owner never stops loving it.

How do you do an Ubucon?

“We wanted to make a memorable Ubucon,” explains Tiago Carrondo, the face of the organisation, who during the last few months dedicated much of his time to preparing all the logistics as part of a very small but very committed team, as he told SAPO TEK.

The European event is now in its 4th edition. It arose spontaneously inside the community and, after Germany (Essen), France (Paris) and Spain (Xixón), Portugal is the 4th country to host it, with the purpose of “having an Ubucon without rain”. From here the community moves on, in 2020, to a new location, which should be revealed this week but for now remains a well-kept secret.

Characterising Ubuntu Portugal as a community of people, Tiago Carrondo explains that companies are “friends” and appear as associates and sponsors of the event, which also has connections with educational institutions.

People are at the centre of the organisation and the purpose of Ubucon, so there’s a very big social component, allowing volunteers who work on Ubuntu projects throughout the year to meet face to face and share experiences and knowledge. For that reason, the schedule was designed to start a little later than usual, around 10 am, and to finish early, with a long break for lunch.

The conference ends tomorrow, but those who want to attend the last presentations at the Olga Cadaval Cultural Centre in Sintra can still do so, either by registering or by simply showing up at the venue, because the organisation’s policy is open doors and respect for privacy.

Those who didn’t have the chance to attend will be able to watch everything on video over the next few weeks. Tiago Carrondo explains that they didn’t want to stream the event, but everything is being recorded and will be edited and made available soon.

13 October, 2019 09:06PM

hackergotchi for Maemo developers

Maemo developers

Implementing "Open with…" on MacOS with Qt

I just released PhotoTeleport 0.12, which includes the feature mentioned in the title of this blog post. Given that it took me some time to understand how this could work with Qt, I think it might be worth spending a couple of lines on how to implement it.

In the target application

The first step (and the easiest one) is about adding the proper information to your .plist file: this is needed to tell MacOS what file types are supported by your application. The official documentation is here, but given that an example is better than a thousand words, here's what I had to add to PhotoTeleport.plist in order to have it registered as a handler for TIFF files:

  <key>CFBundleDocumentTypes</key>
  <array>
    <dict>
      <key>CFBundleTypeExtensions</key>
      <array>
        <string>tiff</string>
        <string>TIFF</string>
        <string>tif</string>
        <string>TIF</string>
      </array>
      <key>CFBundleTypeMIMETypes</key>
      <array>
        <string>image/tiff</string>
      </array>
      <key>CFBundleTypeName</key>
      <string>NSTIFFPboardType</string>
      <key>CFBundleTypeOSTypes</key>
      <array>
        <string>TIFF</string>
        <string>****</string>
      </array>
      <key>CFBundleTypeRole</key>
      <string>Viewer</string>
      <key>LSHandlerRank</key>
      <string>Default</string>
      <key>LSItemContentTypes</key>
      <array>
        <string>public.tiff</string>
      </array>
      <key>NSDocumentClass</key>
      <string>PVDocument</string>
    </dict>
    …more dict entries for other supported file formats…
  </array>

This is enough to have your application appear in Finder's "Open with…" menu and be started when the user selects it from the context menu, but it's only half of the story: to my big surprise, the selected files are not passed to your application as command line parameters, but via some MacOS-specific event which needs to be handled.

By grepping through the Qt source code, I found out that Qt already handles the event, which is then transformed into a QFileOpenEvent. The documentation there is quite helpful, so I won't waste your time repeating it here; what was hard for me was to actually find out that this functionality exists and is supported by Qt.
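For reference, here is a minimal sketch (mine, not taken verbatim from the Qt documentation) of handling the event by subclassing QApplication; openFile() stands in for whatever your application does with the received URL:

#include <QApplication>
#include <QFileOpenEvent>
#include <QUrl>

class Application : public QApplication
{
public:
    using QApplication::QApplication;

protected:
    bool event(QEvent *e) override
    {
        if (e->type() == QEvent::FileOpen) {
            // Sent by MacOS when the user picks our app in "Open with…"
            auto *fileEvent = static_cast<QFileOpenEvent *>(e);
            openFile(fileEvent->url());
            return true;
        }
        return QApplication::event(e);
    }

private:
    void openFile(const QUrl &url); // hypothetical application function
};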

In the source application

The above is only half of the story: what if you are writing an application which wants to send some files to some other application? Because of the sandboxing, you cannot just start the desired application in a QProcess and pass the files as parameters: again, we need to use the Apple Launch Services so that the target application receives the files through the mechanism described above.

Unfortunately, as far as I could find, this is not something that Qt supports; sure, with QDesktopServices::openUrl() you can start the default handler for a given url, but what if you need to open more than one file at once? And what if you want to open the files in a specific application, and not just in the default one? Well, you need to get your hands dirty and use some MacOS APIs:

#import <CoreFoundation/CoreFoundation.h>
#import <ApplicationServices/ApplicationServices.h>

#include <QList>
#include <QString>
#include <QUrl>

void MacOS::runApp(const QString &app, const QList<QUrl> &files)
{
    CFURLRef appUrl = QUrl::fromLocalFile(app).toCFURL();

    // Build a CFArray holding the URLs of the files to be opened
    CFMutableArrayRef cfaFiles =
        CFArrayCreateMutable(kCFAllocatorDefault,
                             files.count(),
                             &kCFTypeArrayCallBacks);
    for (const QUrl &url: files) {
        CFURLRef u = url.toCFURL();
        CFArrayAppendValue(cfaFiles, u);
        CFRelease(u);
    }

    // Describe the launch: the application, the items to open, the flags
    LSLaunchURLSpec inspec;
    inspec.appURL = appUrl;
    inspec.itemURLs = cfaFiles;
    inspec.asyncRefCon = NULL;
    inspec.launchFlags = kLSLaunchDefaults | kLSLaunchAndDisplayErrors;
    inspec.passThruParams = NULL;

    OSStatus ret = LSOpenFromURLSpec(&inspec, NULL);
    if (ret != noErr) {
        // handle the error as appropriate for your application
    }
    CFRelease(cfaFiles);
    CFRelease(appUrl);
}

In Imaginario I've saved this into a macos.mm file, added it to the source files, and also added the native MacOS libraries to the build (qmake):

LIBS += -framework CoreServices

You can see the commit implementing all this; it really doesn't get more complex than this. The first parameter to the MacOS::runApp() function is the name of the application; I've verified that the form /Applications/YourAppName.app works, but it may be that more human-friendly variants work as well.
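For illustration, a call could then look like this (the application path and file names are, of course, placeholders):

MacOS::runApp(QStringLiteral("/Applications/Preview.app"),
              { QUrl::fromLocalFile("/tmp/one.tiff"),
                QUrl::fromLocalFile("/tmp/two.tiff") });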


13 October, 2019 06:51PM by Alberto Mardegan (mardy@users.sourceforge.net)

hackergotchi for Netrunner

Netrunner

Netrunner Rolling vs. Supporting Manjaro

Last month Manjaro announced their future plans, in which BlueSystems plays a part. At a time when Linux is becoming more and more complex and mature as a real alternative to existing proprietary software, we think bundling efforts is the best way for any participant to increase the chance of long-term success, seeking collaboration and […]

13 October, 2019 09:10AM by Netrunner Team

hackergotchi for Qubes

Qubes

Qubes Canary #21

We have published Qubes Canary #21. The text of this canary is reproduced below. This canary and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View Qubes Canary #21 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/canaries/canary-021-2019.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past canaries:

https://www.qubes-os.org/security/canaries/



                    ---===[ Qubes Canary #21 ]===---


Statements
-----------

The Qubes core developers who have digitally signed this file [1]
state the following:

1. The date of issue of this canary is October 13, 2019.

2. There have been 51 Qubes Security Bulletins published so far.

3. The Qubes Master Signing Key fingerprint is:

    427F 11FD 0FAA 4B08 0123  F01C DDFA 1A3E 3687 9494

4. No warrants have ever been served to us with regard to the Qubes OS
Project (e.g. to hand out the private signing keys or to introduce
backdoors).

5. We plan to publish the next of these canary statements in the first
three weeks of January 2020. Special note should be taken if no new canary
is published by that time or if the list of statements changes without
plausible explanation.

Special announcements
----------------------

None.

Disclaimers and notes
----------------------

We would like to remind you that Qubes OS has been designed under the
assumption that all relevant infrastructure is permanently
compromised.  This means that we assume NO trust in any of the servers
or services which host or provide any Qubes-related data, in
particular, software updates, source code repositories, and Qubes ISO
downloads.

This canary scheme is not infallible. Although signing the declaration
makes it very difficult for a third party to produce arbitrary
declarations, it does not prevent them from using force or other
means, like blackmail or compromising the signers' laptops, to coerce
us to produce false declarations.

The news feeds quoted below (Proof of freshness) serves to demonstrate
that this canary could not have been created prior to the date stated.
It shows that a series of canaries was not created in advance.

This declaration is merely a best effort and is provided without any
guarantee or warranty. It is not legally binding in any way to
anybody. None of the signers should be ever held legally responsible
for any of the statements made here.

Proof of freshness
-------------------

Sun, 13 Oct 2019 19:51:40 +0000

Source: SPIEGEL ONLINE - International (https://www.spiegel.de/international/index.rss)
Far-Right Terrorism: Deadly Attack Exposes Lapses in German Security Apparatus
Opinion: This Isn't the Drill, It's the Catastrophe
The PiS Dynasty: Kaczynski Party in Control Ahead of Polish Vote
Time To Act: Trump's Impeachment Inquiry Is Imperative for the World
Predictable Chaos: Europe Braces for the Effects of Brexit

Source: NYT > World News (https://rss.nytimes.com/services/xml/rss/nyt/World.xml)
Hundreds of ISIS Supporters Flee Detention Amid Turkish Airstrikes
12 Hours. 4 Syrian Hospitals Bombed. One Culprit: Russia.
Typhoon Hagibis: Helicopters and Boats Rescue the Stranded
Police Officer is Stabbed in Hong Kong During Flash-Mob Protests
Pullback Leaves Green Berets Feeling ‘Ashamed,’ and Kurdish Allies Describing ‘Betrayal’

Source: BBC News - World (https://feeds.bbci.co.uk/news/world/rss.xml)
Turkey-Syria offensive: US to evacuate 1,000 troops as Turkey advances
Hong Kong protests: President Xi warns of 'crushed bodies'
Black woman shot dead by Texas police through bedroom window
Simone Biles wins record 24th world medal
Hunter Biden to step down from China board amid Trump attacks

Source: Reuters: World News (http://feeds.reuters.com/reuters/worldnews)
Exclusive: U.S. could pull bulk of troops from Syria in matter of days - officials
Exit polls project Tunisian landslide win for Kais Saied
Poland's ruling nationalists set to win election: exit poll
U.S. to pull last troops from north Syria as Turkey presses offensive against Kurds
Russia takes part in talks between Syria and Kurdish-led SDF

Source: Blockchain.info
0000000000000000000a3b269b65134283e4f4e089768704b80727a31bdadd14

Footnotes
----------

[1] This file should be signed in two ways: (1) via detached PGP
signatures by each of the signers, distributed together with this
canary in the qubes-secpack.git repo, and (2) via digital signatures
on the corresponding qubes-secpack.git repo tags. [2]

[2] Don't just trust the contents of this file blindly! Verify the
digital signatures!

13 October, 2019 12:00AM

October 12, 2019

hackergotchi for rescatux

rescatux

Rescatux 0.72-beta1 released

Download

Rescatux 0.72-beta1 ISO (673 MB) (Torrent)

MD5SUM: 0950dcca0256fb1a9bfbb8c06da55b3a

Summary

This is another beta version of Rescatux. The last Rescatux beta was released in May 2019, about five months ago.

This new version features two important fixes: Grub recovery now works again on tmpfs-enabled distributions such as Ubuntu, and some UEFI systems with Secure Boot disabled that used to crash Rescapp no longer do so.

Additionally, the Rescapp menu has been reworked, as have the Rescatux live CD UEFI and BIOS boot menus.

There’s a running joke on the Internet about fixing someone else being wrong on the Internet. Hopefully our new background is ugly enough for someone to reach us on the “2019 Background should be improved” issue and correct us.

Rescatux 0.72-beta1 new background

What’s new on Rescatux

  • Added the xterm package so that external programs can be opened from within Rescapp
  • Moved the testdisk package dependency from Rescapp to Rescatux
  • New background and improved associated boot menus
  • Based on Debian Buster, which is now officially stable

What’s new on Rescapp

Rescatux 0.72-beta1 featuring Rescapp 0.54b1
  • Added Xterm self test
  • There’s no longer a back button. Improved menu.
  • Fixed: non-Secure-Boot UEFI mode made Rescapp crash before showing its menu.
  • Do not mount either the root partition or the tmp partition when performing chroot grub operations.
    That re-enables fixing Ubuntu grub properly.
  • Rescapp was ported from python2 to python3
  • Improved compatibility with Fedora (Big thanks to cjg67)
  • Removed gddrescue and myrescue dependencies
  • Move testdisk package dependency from Rescapp to Rescatux
  • Move gparted package dependency from Rescapp to Rescatux
  • Add missing filesystem dependencies (debian package)
  • INSTALL: Added optional debian package runtime requirements

Known bugs

  • The GParted package is not installed, so trying to start it from the expert tools option will fail.

12 October, 2019 08:21PM by adrian15

October 11, 2019

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Onboarding edge applications on the dev environment

Adoption of edge computing is taking hold as organisations realise the need for highly distributed applications, services and data at the extremes of a network. Whereas data historically travelled back to a centralised location, data processing can now occur locally allowing for real-time analytics, improved connectivity, reduced latency and ushering in the ability to harness newer technologies that thrive in the micro data centre environment.

In an earlier post, we discussed the importance of choosing the right primitives for edge computing services. When looking at use-cases calling for ultra-low latency compute, Kubernetes and containers running on bare metal are ideal for edge deployments because they offer direct access to the kernel, workload portability, easy upgrades and a wide selection of possible CNI choices.

While offering clear advantages, setting up Kubernetes for edge workload development can be a difficult task – time and effort better spent on actual development. The steps below walk you through an end-to-end deployment of a sample edge application. The application runs on top of Kubernetes with advanced latency budget optimization.  The deployed architecture includes Ubuntu 18.04 as the host operating system, Kubernetes v1.15.3 (MicroK8s) on bare-metal, MetalLB load balancer and CoreDNS to serve external requests.

Let’s roll

Summary of steps:

  1. Install MicroK8s
  2. Add MetalLB
  3. Add a simple service – Core DNS

Step 1: Install MicroK8s

Let’s start with the development workstation Kubernetes deployment using MicroK8s by pulling the latest stable edition of Kubernetes.

$ sudo snap install microk8s --classic
microk8s v1.15.3 from Canonical✓ installed
$ snap list microk8s
Name      Version Rev  Tracking Publisher   Notes
microk8s  v1.15.3 826  stable canonical✓  classic
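Before moving on, it can be worth a quick sanity check that the node is up (the output below is illustrative; the node name will be your workstation's hostname):

$ microk8s.kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
workstation   Ready    <none>   2m    v1.15.3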

Step 2: Add MetalLB

As I’m deploying Kubernetes on the bare metal node, I chose to utilise MetalLB, as I won’t be able to rely on the cloud to provide LBaaS service. MetalLB is a fascinating project supporting both L2 and BGP modes of operation, and depending on your use case, it might just be the thing for your bare metal development needs. 

$ microk8s.kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
namespace/metallb-system created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created

Once installed, you need to update the iptables configuration to allow IP forwarding, and configure MetalLB with the networking mode and the address pool you want to use for load balancing. The config file needs to be created manually; please see Listing 1 below for reference.

$ sudo iptables -P FORWARD ACCEPT

Listing 1 : MetalLB configuration (metallb-config.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.2.32/28

Step 3: Add a simple service

Now that you have your config file ready, you continue with the CoreDNS sample workload configuration. Especially for edge use cases, you usually want fine-grained control over how your application is exposed to the rest of the world. This includes ports as well as the actual IP address you would like to request from your load balancer. For the purpose of this exercise, I use the .35 IP address from the 10.0.2.32/28 subnet and create the Kubernetes service using this IP.

Listing 2: CoreDNS external service definition (coredns-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: coredns
spec:
  ports:
  - name: coredns
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: coredns
  type: LoadBalancer
  loadBalancerIP: 10.0.2.35

For the workload configuration itself, I use a simple DNS cache configuration with logging and forwarding to Google’s open resolver service.

Listing 3: CoreDNS ConfigMap (coredns-configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
data:
  Corefile: |
    .:53 {
     forward . 8.8.8.8
     cache
     log
    }

Finally, here is the description of our Kubernetes Deployment, calling for 3 workload replicas, the latest CoreDNS image, and the configuration defined in the ConfigMap above.

Listing 4: CoreDNS Deployment definition  (coredns-deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns-deployment
  labels:
    app: coredns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coredns
  template:
    metadata:
      labels:
        app: coredns
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile

Deploy

With all the service components defined, prepared and configured, you’re ready to start the actual deployment and verify the status of Kubernetes pods and services.

$ microk8s.kubectl apply -f metallb-config.yaml 
configmap/config created
$ microk8s.kubectl apply -f coredns-service.yaml
service/coredns created
$ microk8s.kubectl apply -f coredns-configmap.yaml
configmap/coredns created
$ microk8s.kubectl apply -f coredns-deployment.yaml
deployment.apps/coredns-deployment created
$ microk8s.kubectl get po,svc --all-namespaces
NAMESPACE        NAME                                     READY   STATUS    RESTARTS   AGE
default          pod/coredns-deployment-9f8664bfb-kgn7b   1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-lcrfc   1/1     Running   0          10s
default          pod/coredns-deployment-9f8664bfb-n4ht6   1/1     Running   0          10s
metallb-system   pod/controller-7cc9c87cfb-bsrwx          1/1     Running   0          4h8m
metallb-system   pod/speaker-s9zz7                        1/1     Running   0          4h8m

NAMESPACE   NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
default     service/coredns      LoadBalancer   10.152.183.89   10.0.2.35     53:31338/UDP   34m
default     service/kubernetes   ClusterIP      10.152.183.1    <none>        443/TCP        4h29m

Once all the containers are fully operational, you can evaluate how your new end-to-end service is performing. As you can see below, the very first request takes around 50ms to get answered (which aligns with the usual latency between my ISP access network and Google's DNS infrastructure); subsequent requests, however, show a significant latency reduction, as expected from a local DNS caching instance.

$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 50 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 0 ms
$ host -a www.ubuntu.com  10.0.2.35
Trying "www.ubuntu.com"
Using domain server:
Name: 10.0.2.35
Address: 10.0.2.35#53
[...]
Received 288 bytes from 10.0.2.35#53 in 1 ms

CoreDNS is an example of a simple use case for distributed edge computing, showing how network distance and latency can be optimised for a better user experience by changing service proximity. The same rules apply to exciting services such as AR/VR, GPGPU-based inference AI and content distribution networks.

The choice of proper technological primitives, the flexibility to manage your infrastructure to meet service requirements, and processes to manage distributed edge resources at scale will become critical factors for edge cloud adoption. This is where MicroK8s comes in, to reduce the complexity and cost of development and deployment without sacrificing quality.

End Note

So you’ve just onboarded an edge application. Now what? Take MicroK8s for a spin with your use case(s), or just try to break stuff. If you’d like to contribute or request features/enhancements, please shout out on our GitHub, Slack #MicroK8s or the Kubernetes forum.

11 October, 2019 10:14PM

hackergotchi for Tails

Tails

Call for testing: 4.0~rc1

Tails 4.0 will be the first version of Tails based on Debian 10 (Buster). It brings new versions of most of the software included in Tails and some important usability and performance improvements.

You can help Tails by testing the release candidate for the upcoming version 4.0!

We are very excited about it and cannot wait to hear your feedback :)

Changes and upgrades

Major changes to included software

  • Update Tor Browser to 9.0a7, based on Firefox 68.1.0esr.

  • Update Electrum to 3.3.8, which works with the current Bitcoin network.

  • Update Linux to 5.3.2.

  • Update tor to 0.4.1.6.

Usability improvements to Tails Greeter

We improved various aspects of the usability of Tails Greeter, especially for non-English users.

  • To make it easier to select a language, we curated the list of proposed languages, removing the ones that had too few translations to be useful.

  • We also simplified the list of keyboard layouts.

  • We fixed the Formats setting, which was not being applied.

  • We prevented additional settings from being applied when clicking on Cancel or Back.

Fixed problems

  • Fix the delivery of WhisperBack reports. (#17110)

  • Dozens of other problems — literally. For more details, read our changelog.

Known issues

  • Spellchecking only works for English. (#17150)

    To fix it, set spellchecker.dictionary_path to /usr/share/hunspell in about:config.

  • Unsafe Browser tabs have the "Private Browsing" name and the Tor Browser's icon. (#17142)

  • The on-screen keyboard does not allow entering accented characters. (#17021)

  • Other open tickets for Tails 4.0

See the list of long-standing issues.

How to test Tails 4.0~rc1?

We will provide security upgrades for Tails 4.0~rc1 like we do for regular versions of Tails.

Keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

Please, report any new problem to tails-testers@boum.org (public mailing list).

Get Tails 4.0~rc1

To upgrade your Tails USB stick and keep your persistent storage

  • Automatic upgrades are available from 4.0~beta1 and 4.0~beta2.

  • You can do a manual upgrade.

To download 4.0~rc1

Direct download

BitTorrent download

To install Tails on a new USB stick

Follow our installation instructions:

All the data on this USB stick will be lost.

What's coming up?

Tails 4.0 is scheduled for October 22.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

11 October, 2019 11:19AM

hackergotchi for Ubuntu developers

Ubuntu developers

Laura Czajkowski: FOSDEM Community Devroom 2020 CFP open

We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February 2020 at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.

== tl;dr ==

  • Community DevRoom takes place on Sunday, 2nd February 2020
  • Submit your papers via the conference abstract submission system, Pentabarf, at https://penta.fosdem.org/submission/FOSDEM20
  • Indicate if your session will run for 30 or 45 minutes, including Q&A. If you can do either 30 or 45 minutes, please let us know!
  • Submission deadline is 27 November 2019 and accepted speakers will be notified by 11 December 2019
  • If you need to get in touch with the organizers or program committee of the Community DevRoom, email us at community-devroom@lists.fosdem.org

== IN MORE DETAIL ==

We are happy to let everyone know that the Community DevRoom will be held this year at the FOSDEM Conference. FOSDEM is the premier free and open source software event in Europe, taking place in Brussels from 1-2 February at the Université libre de Bruxelles. You can learn more about the conference at https://fosdem.org.

The Community DevRoom will take place on Sunday 2nd February 2020.

Our goals in running this DevRoom are to:

* Connect folks interested in nurturing their communities with one another so they can share knowledge during and long after FOSDEM

* Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration

* Provide concrete advice on dealing with squishy human problems

* Unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2020.

We seek proposals on all aspects of creating and nurturing communities for free software projects.

== TALK TOPICS ==

Here are some topics we are interested in hearing more about this year:

1) Is there any real role for community in corporate software projects?

Can you create a healthy and active community while still meeting the needs of your employer? How can you maintain an open dialog with your users and/or contributors when you have the need to keep company business confidential? Is it even possible to build an authentic community around a company-based open source project? Have we completely lost sight of the ideals of community and simply transformed that word to mean “interested sales prospects?”


2) Creating Sustainable Communities

With the increased focus on the impact of short-term and self-interested thinking on both our planet and our free software projects, we would like to explore ways to create authentic, valuable, and lasting community in a way that best respects our world and each other.  We would like to hear from folks about how to support community building in person in sustainable ways, how to build community effectively online in the YouTube/Instagram era, and how to encourage corporations to participate in community processes in a way that does not simply extract value from contributors. If you have recommendations or case studies on how to make this happen, we very much want to hear from you.


We are particularly interested to hear about academic research into FOSS Sustainability and/or commercial endeavors set up to address this topic.

3) Bringing free software to the GitHub generation

Those of us who have been in the free and open source software world for a long time remember when the coolest thing you could do was move from CVS to SVN, Slack ended in “ware”, IRC was where you talked to your friends instead of IRL (except now no one talks in IRL anyway, just texts), and Twitter was something that birds did. Here we are in 2020, and clearly things have changed.

How can we bring more younger participants into free software communities? How do we teach the importance of free software values in an era where freely-available code is ubiquitous? Will the ethical underpinnings of free software attract millennials and Gen Z to participate in our communities when our free software tends to require lots of free time?

We promise we are not cranky old fuddy duddies. Seriously. It’s important to us that the valuable experiences we had in our younger days working in the free software community are available to everyone. And we want to know how to get there.


4) Applying the principles of building free software communities to other endeavors

What can the lessons about decentralization, open access, open licensing, and community engagement teach us as we address the great issues of our day? We have left this topic loosely defined because we would like people to bring whatever truth they have to the question. Great talks in this category could be anything from “why never to start a business in Silicon Valley” to “working from home is great and keeps CO2 out of the air.” Let your imagination take you far – we are excited to hear from you.

5)  How can free software protect the vulnerable

At a time when some of the best accessibility features are built as proprietary products, and when surveillance and predictive policing lead to persecution of dissidents and imprisonment of those deemed guilty before being proven innocent, how can we use free software to protect the vulnerable? What sort of lobbying efforts would be required to ensure that free software – and therefore fully auditable code – becomes a civic requirement? How do we, as individuals and as actors within our employers, campaign for the protection of vulnerable people – and other living things – as part of our mission of software freedom?

6) Conflict resolution

How do we continue working well together when there are conflicts? Is there a difference in how types of conflicts best get resolved, e.g. ”this code is terrible” vs. “we should have a contributor agreement”? We are especially interested in how tos / success stories from projects that have weathered conflict. 

We are now at 2020 and this issue still comes up semi-daily. Let’s share our collective wisdom on how to make conflict less painful and more productive.

Again, these are just suggestions. We welcome proposals on any aspect of community building!


== PREPARING YOUR SUBMISSION & DEADLINES ==


=== LENGTH OF PRESENTATION ===

We are looking for talk submissions between 30 and 45 minutes in length, including time for Q&A. In general, we are hoping to accept as many talks as possible so we would really appreciate it if you could make all of your remarks in 30 minutes – our DevRoom is only a single day –  but if you need longer just let us know.

=== ANYTHING EXTRA YOU WOULD LIKE US TO KNOW ===

Beyond giving us your speaker bio and paper abstract, make sure to let us know anything else you’d like us to know as part of your submission. Some folks like to share their Twitter handles, others like to make sure we can take a look at their GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First time speakers are, of course, welcome!

=== SUBMISSION INSTRUCTIONS ===

Submit your proposals via Pentabarf, the conference abstract submission system, at https://penta.fosdem.org/submission/FOSDEM20.

== KEY DATES ==

  1. CFP opens 11 October 2019
  2. Proposals due in Pentabarf 27 November 2019
  3. Speakers notified by 11 December 2019
  4. DevRoom takes place 2 February 2020 at FOSDEM

Community DevRoom Mailing List: community-devroom@lists.fosdem.org


11 October, 2019 10:26AM

Didier Roche: Ubuntu ZFS support in 19.10: ZFS on root

ZFS on root

This is part 2 of our blog post series on our current and future work around ZFS on root support in Ubuntu. If you didn’t read the introductory post yet, I strongly recommend you do that first!

Here we are going to discuss what landed by default in Ubuntu 19.10.

Upstream ZFS On Linux

We are shipping ZFS On Linux version 0.8.1, with features like native encryption, trimming support, checkpoints, raw encrypted zfs transmissions, project accounting and quota, and a lot of performance enhancements. You can see more about the 0.8 and 0.8.1 releases on the ZOL project release page directly. 0.8.2 didn’t make it in time for proper integration and testing in Eoan, so we backported some post-release upstream fixes where appropriate, like newer kernel compatibility, to provide the best user experience and reliability. Our team also contributed some small fixes and feedback upstream to the ZFS On Linux project.

ZFS On Linux logo

Any existing ZFS on root user will automatically get those benefits as soon as they update to Ubuntu 19.10.

Installer integration

The ubiquity installer is now providing an experimental option for setting up ZFS on root on your system. While ZFS has been a mature product for a long time, the installer’s ZFS support option is in alpha, and people opting in should be conscious of that. It is not advised to run this on a production system or a system where you have critical data (unless you have regular and verified backups, which we all do, correct?). To be fully clear, there may be breaking changes in the design as long as the feature is experimental, and we may, or may not, provide a transition path to the next layout.

With that being said, what does ZFS on root mean? It means that most of your system runs on ZFS. Basically, even your “/” directory is installed on ZFS.
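If you want to check this on an installed system, something like the following should show it (the dataset name is illustrative, as the installer generates a unique suffix):

$ mount | grep 'on / '
rpool/ROOT/ubuntu_3bcbe0 on / type zfs (rw,relatime,xattr,posixacl)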

Ready to jump in, despite all those disclaimers? If so, download an Ubuntu 19.10 ISO and you will see that the disk partitioning screen in Ubiquity has an additional option (please read the warning!):

ZFS option at the format screen

Yes, the current experimental support is limited right now to a whole disk installation. If you have multiple disks, the next screen will ask you to pick which one:

if more than one disk is available, choose which one you will pick

You will then get the “please confirm we’ll reformat your whole disk” screen.

still some quirks to iron out, like formatting confirmation page

… and finally the installation will proceed as usual:

installation in progress

In case you didn’t notice yet, this is experimental (what? ;)) and we have some known quirks, like the confirmation screen showing that it’s going to format and create an ext4 partition. This is difficult to fix for Ubuntu 19.10 (for the technical users interested in details: what we are actually doing is creating multiple partitions in order to let partman handle the ESP, and then overwriting the ext4 partition with ZFS, so it’s technically not lying ;)). It’s something we will fix before getting out of the experimental phase, hopefully next cycle.

Partitions layout

We’ll create the following partitions:

rpool

One ZFS partition for the “rpool” (as in root pool), which will contain your main installation and user data. This is basically your main course, and the one whose dataset layout we’ll detail in the next article, as we have a lot to say about it.

bpool

Another ZFS partition for your boot pool named “bpool”, which contains kernels and initramfs (basically your /boot without the EFI and bootloader bits). We have to separate this from your main pool because grub can’t support all the ZFS features that we want to enable on the root pool; otherwise your pool would be unreadable by your boot loader, which would sadly result in an unbootable system! Consequently, this pool runs a different ZFS pool version (right now, version 28, but we are looking at upgrading to version 5000 next cycle, with some features disabled). Note that due to this, even if zpool status proposes that you upgrade your bpool, you should never do that, or you won’t be able to reboot. We will work on a patch to prevent this from happening.
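For instance, you can inspect (but, again, not upgrade!) the boot pool version with zpool; the output here is indicative:

$ zpool get version bpool
NAME   PROPERTY  VALUE    SOURCE
bpool  version   28       local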

ESP partition

There is the ESP partition (mounted as /boot/efi). Right now, it’s only created if you have a UEFI system, but we might get it created in Ubiquity systematically in the future, so that people who disabled secure boot and enable it later on can have a smooth transition.

grub partition

A grub partition (mounted as /boot/grub), which is formatted as ext4. This partition isn’t a ZFS one because it’s global to your machine, so its content and state should be shared between multiple installations on the same system. In addition, we don’t want to reference a grub menu which can be snapshotted and rolled back, as that would mean the grub menu won’t give access to “future system state” after a particular revert. If we succeed in having an ESP partition systematically created in the future, we can move grub itself to it unconditionally next cycle.

Continuing work on pure ZFS system

We are planning to continue reporting feedback upstream (probably post-19.10 release, once we have more room for giving detailed information and various use-case scenarios) as our default dataset layout is quite advanced (more on that later) and the current upstream mount-ordering generator doesn’t really cope with it. This is the reason why we took the decision to disable our grub revert feature for pure ZFS installations (but not Zsys!) in 19.10, as some use cases could lead to unbootable systems. This is a very alpha experiment, but we didn’t want to knowingly put users’ data at risk.

But this is far from being the end of our road to our enhanced ZFS support in Ubuntu! Actually, the most interesting and exciting part (from a user’s perspective) will come with Zsys.

Zsys, ZFS System handler

Zsys is our work-in-progress, enhanced support for ZFS systems. It allows running multiple ZFS installations in parallel on the same machine and managing complex ZFS dataset layouts, separating user data from system and persistent data. It will soon provide automated snapshots, backups and system management.

However, as we first wanted feedback on pure ZFS systems in Ubuntu 19.10, we didn’t seed it by default. It’s available through an apt install zsys for the adventurous audience, and some Ubuntu flavors have already jumped on the bandwagon and will install it by default! Even if you won’t immediately see differences, this will unleash some of our grub, adduser and initramfs integration that is baked right into 19.10.

The excellent Ars Technica review by Jim Salter was wondering about the quite complex dataset layout we are setting up. We’ll shed some light on this on the next blog post which will explain what Zsys is really, what it does bring to the table and what our future plans are.

The future of ZFS on root on Ubuntu is bright, I’m personally really excited about what this is going to bring to both server and desktop users! (And yes, we can cook up some very nice features for our desktop users with ZFS)!

If you want to join the discussion, feel free to hop in our ubuntu discourse dedicated topic.

11 October, 2019 07:36AM

hackergotchi for ArcheOS

ArcheOS

I made my own surgical guide using OrtogOnBlender!




Those who follow my work know that I develop an addon called OrtogOnBlender, a learning tool for surgical planning.

I have had the honor of using it to teach many people and also develop surgical guides for the fields of human and veterinary health.

The fact is that these past few weeks, for the first time, I have been using this technology in my own body.

It all started when my dentist asked me to get a CT-scan of a lesion that insisted on not healing completely.

Coincidentally, I was teaching a computer graphics course in my city, and my students included radiology and endodontic surgery specialists.


When I mentioned my need, I was advised to take the exam, and I took the opportunity to proceed with a broad approach: in addition to the tomography of the teeth, they also digitized them in 3D (intraoral scanning). I am very grateful to the staff of the Santa Izabel Clinic, especially Dr. Carlos Augusto Abascal Shiguihara and Dr. Gabriela Zorron Cavalcanti, as I was extremely well looked after there.


The first thing we did when we received the CT-scan was to isolate the lesion and reconstruct it in 3D using Slicer 3D’s semi-automatic segmentation option. Naturally, the work was followed from the beginning by the surgeon, Dr. Roosevelt Macedo of the Statto Clinic, also located in the city where I currently live, Sinop-MT, more or less in central Brazil.



Once the lesion was isolated and positioned in a 3D space, I was then able to reconstruct the tomography directly by OrtogOnBlender and import the lesion (in red).


To improve the fit of the future surgical guide, I aligned the teeth from intraoral scanning with those of tomography.


Now we had the teeth, root and lesion very well positioned.



Using OrtogOnBlender's guide creation tools, we designed a structure that fit the teeth while maintaining a safe distance from the gums.


The purpose of the guide was to inform the surgeon of the exact projection of the lesion so that he could access it laterally when drilling the bone.


Here we have a bottom view of the model where we see the tentacular aspect of the guide.


We exported the model as STL and it was printed in high resolution 3D on the premises of Santa Izabel Clinic.

It printed well, but would the model fit properly?


I went to Dr. Roosevelt Macedo's clinic for a test and the model fit perfectly!

We arranged the day of surgery and I prepared myself for it.


Snap test at surgery.


Lateral perforation. The image has been converted to grayscale and pixelated to avoid shocking readers.


As expected, the guide worked very well and allowed the surgeon to find and remove the lesion.

I greatly thank the staff of Statto Clinic for the excellent treatment offered.

Beyond thanking the doctors and health experts, I also thank my friend and project partner, Adriano Barreto, who organized the course that kicked off these events.

I hope this is the beginning of a story of using 3D technology more effectively and more widely, not only in major cities, but also in more inland cities like mine. It is a great honor and joy to take part in all this and, of course, thank you for reading this far.

If OrtogOnBlender interests you, be sure to read the official documentation and download the system that runs on Windows, Mac OS X and Linux:


A big hug!

11 October, 2019 12:37AM by cogitas3d (noreply@blogger.com)

October 10, 2019

hackergotchi for Cumulus Linux

Cumulus Linux

Our docs: now open for your contributions!

You may have noticed our technical documentation has a new look and feel. The reason? We recently migrated to a new platform, Hugo, a really fast static site generator. All our written content is formatted in Markdown and the source code is stored in a public GitHub repository. When we merge a release branch into the master branch, the site automatically gets rebuilt, which takes about 5 minutes from provisioning to deploying the new build, so we can quickly update the site when we come across an issue.

What does this all mean for you? We encourage you to participate if you have the opportunity and desire — and we certainly welcome your pull requests! Feel free to update anything you see that is incorrect or that could be written more clearly. If your time is limited, you can always file a bug against the docs too.
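If you haven't contributed to a docs repository before, the flow is the usual GitHub one; here is a rough sketch (repository, branch and commit names are placeholders):

# Fork the docs repository on GitHub first, then:
git clone https://github.com/<your-user>/<docs-fork>.git
cd <docs-fork>
git checkout -b fix-typo
# edit the relevant Markdown file, then:
git commit -am "Fix typo in ..."
git push origin fix-typo
# finally, open a pull request against the upstream repository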

We also accept your original content! If you have an automation solution or a unique Cumulus Linux deployment you’d like to share, feel free to write about it and we’ll host it in the Network Solutions section of the Cumulus Linux user guide. You can read our contributor guide for guidelines on how you can participate.

This is your documentation, and we would love it if you helped us shape it to suit your needs.

Please note we’re still adding new features, like a feedback mechanism and a tool for creating offline content, but feel free to submit your feedback to us now.

10 October, 2019 03:58PM by Pete Bratach

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S12E27 – Exile

This week we’ve been playing LEGO Worlds and tinkering with Thinkpads. We round up the news and goings on from the Ubuntu community, introduce a new segment, share some events and discuss our news picks from the tech world.

It’s Season 12 Episode 27 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

10 October, 2019 02:00PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

HowTo: Web-based Linux Terminal Server with 2FA

Timo Denissen of Univention’s Professional Services Team described in February, in the blog article “Desktops with Guacamole remote control”, how computers can be remote-controlled via the browser. In this HowTo I would like to show how this principle can be extended, with the help of privacyIDEA and xRDP, into a terminal server environment that can be used entirely in the browser, is integrated into the UCS domain, and is secured by 2-factor authentication.
In this HowTo I assume that a functional UCS Master already exists; I run it virtualized on Proxmox. I use a second VM for the terminal server environment.
The following steps are described in detail in this HowTo:

  1. Prepare LinuxMint with xRDP
  2. Installing and configuring privacyIDEA and RADIUS on the UCS Master
  3. Integrate xRDP with privacyIDEA
  4. Install and configure Guacamole with RADIUS Plugin

Prepare LinuxMint with xRDP

I use a LinuxMint 19.1 XFCE installation as the base for the terminal server. LinuxMint runs as a virtual machine on a Proxmox cluster and does not have its own graphics card with 3D support. If one were available and passed through to the VM, Cinnamon could also be used as the desktop. The basic installation is carried out normally with the installation wizard, and the user local-admin is created.
After the restart I log on to the virtual console and configure the static IP address 10.0.0.2 and set the UCS Master (IP address 10.0.0.1) as DNS server.
I update the package sources and, for administration, install openssh-server (basically on all VMs) and store my SSH public key in the admin user accounts in ~/.ssh/authorized_keys.
In the next step I join LinuxMint to the domain following the Univention instructions in the documentation for Ubuntu clients, which also work unchanged for LinuxMint. According to the documentation, the Ubuntu Domain Join Client is now compatible with LinuxMint.
On the desktop I can already log in as a domain user for testing purposes.
Next I install the xRDP server:

sudo apt-get install xrdp xorgxrdp xrdp-pulseaudio-installer

To let xRDP use the XFCE desktop, I replace the last two lines of the file /etc/xrdp/startwm.sh

test -x /etc/X11/Xsession && exec /etc/X11/Xsession
exec /bin/sh /etc/X11/Xsession

by

/usr/bin/startxfce4

Now the login works with an RDP client like Remmina from my own Linux desktop or with the Windows Remote Desktop Client.
I will not go any further into configuration of the desktop here.
The first step has now been taken. We can access our LinuxMint desktop via RDP.

Install and set up privacyIDEA

I install privacyIDEA via the Univention App Center on my UCS Master. For this I need the apps “RADIUS”, “privacyIDEA” and “privacyIDEA RADIUS”. privacyIDEA RADIUS integrates into the RADIUS server of the UCS, which is based on FreeRADIUS.
To allow the LinuxMint computer to establish a connection to the UCS RADIUS server, I make it known to FreeRADIUS as a client. For this I add the following entry to the file /etc/freeradius/3.0/clients.conf as user root on the UCS server:

client linuxmint.example.com {
    ipaddr = 10.0.0.2
    secret = changeme
}

Afterwards I restart the freeradius server.

service freeradius restart

After installing privacyIDEA I log in to the privacyIDEA administration via the UCS Portal. There I log in with the account “Administrator@admin” and the administrator password of the UCS domain. After the info dialog I create a new policy under “Configuration” -> “Policies” to configure the user login with OTP:

attribute name    attribute value
policy name       login_userpass_otp
scope             authentication
action            (Miscellaneous) otppin: userstore, auth_cache: 2m
user realm        example.com
user resolver     users
priority          1

I still need a second policy to be able to pass on user details from the LDAP.

attribute name    attribute value
policy name       return_user_details
scope             authorization
action            (Miscellaneous) add_user_in_response: check
user realm        example.com
user resolver     users
priority          1

Since I turned on auth_cache with the first policy, I need a cron job that cleans it up every day to keep the database clean. The purpose of auth_cache will be explained later.
I create the file /etc/cron.daily/pi-authcache-cleanup on the UCS server with the following content:

#!/bin/sh
/opt/privacyidea/privacyidea-venv/bin/pi-manage authcache cleanup

The script must be executable (chmod +x /etc/cron.daily/pi-authcache-cleanup) to be picked up by run-parts. Then I activate the privacyIDEA RADIUS module on the UCS server via the Univention Configuration Registry web interface or the command line:

ucr set privacyidea/radius/enable=1

The privacyIDEA documentation describes how the RADIUS configuration can be tested from the command line. For this purpose the software package freeradius-utils must be installed on the LinuxMint computer. To enable testing of RADIUS without OTP a second policy can be created with scope “authentication” in which the option “passthru” is set to “userstore”. If the test is successful, the policy can be deactivated, a TOTP token can be rolled out for a user (e.g. with the apps privacyIDEA Authenticator or FreeOTP Authenticator for Android and iOS) and tested again. This time the OTP value is appended directly to the password.
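A test from the LinuxMint machine could then look roughly like this (user name, password and shared secret are placeholders; with the passthru policy active you use just the password, with a token enrolled you append the OTP):

$ radtest myuser 'MyPassword123456' 10.0.0.1 0 changeme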

The second step is completed! We can authenticate users of the UCS domain with domain password + OTP via RADIUS.

Integrate xRDP with privacyIDEA

Next I will connect the authentication of xRDP to privacyIDEA. Since the RADIUS server already knows the LinuxMint as a client, I use the PAM RADIUS module.

sudo apt-get install libpam-radius-auth

In the file /etc/pam_radius_auth.conf I add the RADIUS server and the secret.
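The entries in /etc/pam_radius_auth.conf follow the pattern “server[:port] shared_secret timeout”; with the values used in this HowTo, the line would look roughly like this:

# server[:port]   shared secret   timeout (s)
10.0.0.1          changeme        3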
Now I add the following statement to the file /etc/pam.d/xrdp-sesman:

auth sufficient pam_radius_auth.so

so that it looks like this:

#%PAM-1.0
auth sufficient pam_radius_auth.so
@include common-auth
@include common-account
@include common-session
@include common-password

After a restart of LinuxMint I can now log in with an RDP client using username and password + OTP. Alternatively, the privacyIDEA PAM module could also be used here.


Find out more about Guacamole!

Ensure 24/7 access to your systems from all around the globe by using Guacamole. Conveniently available in our UCS AppCenter. Read more about the neat features of Guacamole in this article.


Setting up Guacamole with RADIUS extension

The Guacamole version included in the UCS App Center comes without the RADIUS module. Therefore we will build Guacamole with Docker, based on the GitHub repository. Although this currently requires a little more manual work, RADIUS enables the integration with privacyIDEA and thus central management of the second factors. For Guacamole you could also set up a separate VM with Docker; to keep the HowTo simple, I decided to run Docker and Guacamole on the LinuxMint machine as well.
We first install Docker and docker-compose:

sudo apt-get install docker.io docker-compose

and download the current source code of the Guacamole Client into ~/dev (the docker-compose build context below expects it there):

cd ~
mkdir dev
cd dev
wget https://github.com/apache/guacamole-client/archive/master.zip
unzip master.zip
Next we create the docker-compose configuration for Guacamole:

cd ~
mkdir -p docker/guacamole
cd docker/guacamole
mkdir certs dbinit mysql-data

cat >docker-compose.yml <<EOF
version: '3'
services:
  guacamole:
    image: guacamole:latest
    build:
      context: ../../dev/guacamole-client-master
      args:
        - BUILD_PROFILE=lgpl-extensions
    restart: always
    # Set your hostname to the FQDN under which your
    # satellites will reach this container
    hostname: linuxmint.example.com
    env_file: ./env.secrets
    environment:
      - MYSQL_HOSTNAME=mariadb
      - MYSQL_DATABASE=guacamole_db
      - MYSQL_USER=guacamole_user
      - GUACD_HOSTNAME=guacd
      - RADIUS_EXT_LINKNAME=1-guacamole-auth-radius
      - RADIUS_HOSTNAME=10.0.0.1
      - RADIUS_AUTH_PROTOCOL=pap
      - VIRTUAL_HOST=linuxmint.example.com
      - LOGBACK_LEVEL=warn
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"
    volumes:
      - ./dbinit:/root/dbinit
    networks:
      - guac-tier
  guacd:
    image: guacamole/guacd
    restart: always
    environment:
      - GUACD_LOG_LEVEL=warning
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"
    networks:
      - guac-tier
  mariadb:
    image: mariadb:10.4
    restart: always
    env_file: ./env.secrets
    environment:
      - MYSQL_DATABASE=guacamole_db
      - MYSQL_USER=guacamole_user
    volumes:
      - ./dbinit/initdb.sql:/docker-entrypoint-initdb.d/initdb.sql
      - ./mysql-data:/var/lib/mysql
    networks:
      - guac-tier
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
    networks:
      - guac-tier

networks:
  guac-tier:
EOF

Let’s take a look at what the docker-compose configuration does. First of all the Guacamole container is defined, and its image is built from the source code we downloaded before. The build profile is important to include the RADIUS module in the image.

services:
  guacamole:
    image: guacamole:latest
    build:
      context: ../../dev/guacamole-client-master
      args:
        - BUILD_PROFILE=lgpl-extensions

The following environment variables tell the Radius module where the Radius server is running and which protocol to use.

- RADIUS_EXT_LINKNAME=1-guacamole-auth-radius
- RADIUS_HOSTNAME=10.0.0.1
- RADIUS_AUTH_PROTOCOL=pap

To make Guacamole accessible via HTTPS, an Nginx instance is used as a reverse proxy in front of it. The Docker image used for this generates its configuration for each container that defines the environment variable VIRTUAL_HOST.

- VIRTUAL_HOST=linuxmint.example.com

Log file rotation is configured for the Guacamole containers so that their logging does not fill my hard disk.

logging:
  driver: "json-file"
  options:
    max-size: "50m"
    max-file: "10"

The MariaDB database container and the Nginx container are also defined. The passwords are defined in a separate file referenced from docker-compose; these should in any case be replaced by secure passwords.

cat >env.secrets <<EOF
MYSQL_ROOT_PASSWORD=changeme
MYSQL_PASSWORD=changeme
RADIUS_SHARED_SECRET=changeme
EOF

Now I can build the Guacamole Client Docker Image:

sudo docker-compose build

Before I start the environment, I generate the database initialization script using the Guacamole image.

sudo docker run --rm guacamole /opt/guacamole/bin/initdb.sh --mysql > ./dbinit/initdb.sql

I also need the certificate for the Nginx reverse proxy. I generate it on the UCS Master and copy it to the LinuxMint VM. The files must be named after the fully qualified domain name (FQDN) with the file extensions .crt and .key, and the FQDN must match the value of the VIRTUAL_HOST variable in the docker-compose.yml file.

ssh root@10.0.0.1
cd /etc/univention/ssl
univention-certificate new -name linuxmint.example.com -days 365
cd linuxmint.example.com
scp cert.pem local-admin@10.0.0.2:/home/local-admin/docker/guacamole/certs/linuxmint.example.com.crt
scp private.key local-admin@10.0.0.2:/home/local-admin/docker/guacamole/certs/linuxmint.example.com.key

Now the whole environment can be started and the log output can be monitored:

sudo docker-compose up -d
sudo docker-compose logs -f

I wait until the log output calms down and not much new appears. Then I open the Guacamole web interface: https://linuxmint.example.com/guacamole

On the web interface I can now log in with user name and password + OTP, and I get an empty connection overview. Authentication is thus working; next, authorization has to be configured in Guacamole. For this I log off again and log on as the user “guacadmin” with the password “guacadmin”. First I change the password via the user menu in the upper right corner -> “Settings” -> “Preferences”.

Then I create a new connection with the name “LinuxMint” via the user menu -> “Settings” -> “Connections”. The following values are essential:

attribute name                 attribute value
name                           LinuxMint
protocol                       RDP

Parameters -> Network
host name                      10.0.0.2

Parameters -> Authentication
username                       ${GUAC_USERNAME}
password                       ${GUAC_PASSWORD}

Parameters -> Display
Color depth                    True color (24-bits)

In the user menu -> “Settings” -> “Users” I create another user with my user name from the domain, without a password, and assign the connection to this user. Alternatively, I could create a group in Guacamole, assign the connection to the group, and add the user to the group. Unfortunately the Guacamole RADIUS module does not support groups yet; a corresponding feature request can be found in the project’s issue tracker.
For users who only have permission for one connection, Guacamole opens the connection immediately after login. The variables in the username and password fields cause Guacamole to pass along the username and password used to log in to Guacamole when the connection is opened.
The authentication cache of privacyIDEA, which we defined in the “login_userpass_otp” policy, becomes relevant here. Since the password contains an OTP, the automatic login to xRDP would otherwise fail, as a one-time passcode can, as the name says, only be used once. To make the login process more comfortable for users, I decided to use the auth cache: with it, privacyIDEA accepts the same OTP for the configured 2 minutes. You can test whether a lower setting works for your environment.
In an environment with a limited number of users, security can be further improved by restricting access to the Nginx in front of Guacamole, e.g. with client certificate authentication.

Summary

That’s it. Using xRDP, privacyIDEA and Guacamole, a web-based open source remote desktop environment with 2-factor authentication is up and running.

Although Guacamole does not yet provide access to the client’s local files and printers the way Windows RDP does (see the Guacamole FAQ), the solution offers companies an open source alternative for a remote desktop environment with a good level of security.

The post HowTo: Web-based Linux Terminal Server with 2FA first appeared on Univention.

10 October, 2019 10:20AM by Florian Roestel

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Chromium in Ubuntu – deb to snap transition

We have recently announced that we are transitioning the Chromium deb package to the snap in Ubuntu 19.10. Such a transition is not trivial, and there have been many constructive discussions around it, so here we are summarising why we are doing this, how, and the timeline.

Why

Chromium is a very popular web browser, the fully open source counterpart to Google Chrome. On Ubuntu, Chromium is not the default browser, and the package resides in the ‘universe’ section of the archive. Universe contains community-maintained software packages. Despite that, the Ubuntu Desktop Team is committed to packaging and maintaining Chromium because a significant number of users rely on it. 

Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded.

Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases.

Google releases a new major version of Chromium every six weeks, with typically several minor versions to address security vulnerabilities in between. Every new stable version has to be built for each supported Ubuntu release − 16.04, 18.04, 19.04 and the upcoming 19.10 − and for all supported architectures (amd64, i386, armhf, arm64).

Additionally, ensuring Chromium even builds (let alone runs) on older releases such as 16.04 can be challenging, as the upstream project often uses new compiler features that are not available on older releases. 

In contrast, a snap needs to be built only once per architecture, and will run on all systems that support snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro.

While this change in packaging for Chromium can allow us to focus developer resources elsewhere, there are additional benefits that packaging as a snap can deliver. Channels in the Snap Store enable publishing multiple versions of Chromium easily under one name. Users can switch between channels to test different versions of the browser. The Snap Store delivers snaps automatically in the background, so users can be confident they’re running up to date software without having to manually manage their updates. We can also publish specific fixes quickly via branches in the Snap Store enabling a fast user & developer turnaround of bug reports. Finally the Chromium snap is strictly confined, which provides additional security assurances for users.

In summary: there are several factors that make Chromium a good candidate to be transitioned to a snap:

  • It’s not the default browser in Ubuntu so has lower impact by virtue of having a smaller user-base
  • Snaps are explicitly designed to support a high frequency of stable updates
  • The upstream project has three release channels (stable, beta, dev) that map nicely to snapd’s default channels (stable, beta, edge). This enables users to easily switch releases of Chromium, or indeed have multiple versions installed in parallel (see the sketch after this list)
  • Having the application strictly confined is an added security layer on top of the browser’s already-robust sand-boxing mechanism
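
For example, switching an installed snap between channels is a single command (a sketch; channel availability depends on what the publisher actually publishes):

snap info chromium                          # list the available channels
sudo snap refresh chromium --channel=beta   # switch the installed snap to beta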

How

The first release of the Chromium snap happened two years ago, and we’ve come a long way since then. The snap currently has more than 200k users across Ubuntu and more than 30 other Linux distributions. The current version has a few minor issues that we’re working hard to address, but we felt it’s solid and mature enough for a transition. We feel confident that it is time to start transitioning users of the development release (19.10) of Ubuntu to it. We are eager to collect feedback on what works and what doesn’t ahead of the next Long Term Support release of Ubuntu, 20.04.

In 19.10, the chromium-browser deb package (and related packages) has been made a transitional package that contains only wrapper scripts and a desktop file for backwards compatibility. When upgrading or installing the deb package on 19.10, the snap will be downloaded from the Snap Store and installed.

Special care has been taken to not break existing workflows and to make the transition as seamless as possible:

  • When running the snap for the first time, an existing Chromium user profile in $HOME/.config/chromium will be imported (provided there is enough disk space)
  • The chromium-browser and chromedriver executables in /usr/bin/ are wrappers that call into the respective snap executables
  • chromedriver has been patched so that existing selenium scripts should keep working without modifications
  • If the user has set Chromium as the default browser, the chromium-browser wrapper will take care of updating it to the Chromium snap
  • Similarly, existing pinned entries in desktop launchers will be updated to point to the snap version (implemented for GNOME Shell and Unity only for now, contributions welcome for other desktop environments)
  • The apport hook has been updated to include relevant information about the snap package and its dependencies

When

If you’re experimenting with Ubuntu 19.10, you can try Chromium as a snap and test the transition from the deb package right now. You don’t need to wait until the release on the 17th of October to start using the snap and sharing your feedback. Simply run the following commands to be up and running:

snap install chromium
snap run chromium

Once 19.10 is released, we will carefully consider extending the transition to other stable releases, starting with 19.04. This won’t happen until all the important known issues are addressed, of course.

Now is the perfect time to put the snap to the test and report issues and regressions you encounter.

We appreciate all the feedback and commentary we’ve been sent over the last few months as we announced this project. We honestly believe delivering applications as snaps provides significant advantages both to developers and users. We know there may be some rough edges as we work towards the future and will continue to listen to our users as we chart this new journey.

10 October, 2019 09:12AM

October 09, 2019

Ubuntu Blog: Kubectl and friends as a snap

At Canonical, we build solutions to simplify the lives of our users. We want to reduce complexity, costs, and barriers to entry. When we built the Canonical Distribution of Kubernetes (CDK) and MicroK8s, we made sure it aligned with our mission. We built snaps like kubectl for various Kubernetes clients and services to ensure a harmonious ecosystem.

From user feedback, requests, and the exciting use cases our users and partners are experimenting with, one thing is clear: sometimes you just need to get up and running. Kubernetes on a Raspberry Pi, anyone? This is why we provide Kubernetes components such as kubectl, kubefed, kubeadm, etc. as snaps, open for use in your own use cases.

How to install Kubectl

A single-line command is all you need; you can just snap install these and use them right away:

$ sudo snap install kubectl --classic

kubectl 1.16.0 from 'canonical' installed

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T05:14:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

The installation for kubefed and kubeadm is the same:

sudo snap install kubefed --classic
sudo snap install kubeadm --classic

What’s next?

Check these out and let us know what you think! The source code for these Kubernetes snaps can be found on our Github repo if you’d like to contribute or report an issue. For feedback or requests chat with us on the Kubernetes Slack, in the #microk8s channel, Kubernetes Forum or tag us @canonical, @ubuntu on Twitter (#MicroK8s). We are excited to hear from you!

09 October, 2019 10:14PM

Ubuntu Blog: A reference architecture for secure IoT device Management

One of the key benefits of IoT is the ability to monitor and control connected devices remotely. This allows operators to interact with connected devices in a feedback loop, resulting in accelerated decisions. These interactions are mediated by a device management interface, which presents data in a user-friendly UI. The interface also serves as a client to remotely control devices in the field. Device management is, therefore, a key component of IoT solution stacks, with a significant impact on the ROI of such deployments.

However, there is no one size fits all when it comes to device management solutions. IoT solutions are deployed in various contexts. The purpose, the devices, and the users involved vary from one deployment to another, even within the same industry. It is, therefore, challenging to find a ready-made device management solution perfectly suitable to any given deployment.

Security is the one critical requirement that these deployments invariably share, and it must be implemented in line with best practices. Secure authentication and communication encryption are indispensable for the management of mission-critical device fleets.

The challenge

When it comes to IoT device management, the core challenge is the following: how to implement a solution that is both secure and perfectly suited to the intended use case? Our answer entails three main elements:

  • Microservices implementing management functions
  • A secured implementation of the MQTT protocol for communication
  • Orchestration of the microservices with Kubernetes

Our device management solution exposes devices running Ubuntu Core through a simple UI. These devices are authenticated and communicate securely with an edge cloud hosting their digital twins. All the elements of this solution are open source. We will elaborate on how these elements combine to deliver a comprehensive reference solution for IoT device management.

Building blocks

At the device level, our reference architecture builds on snaps, the universal application container format portable across multiple Linux distributions. Being strictly confined and transactionally upgraded, snaps offer security guarantees that are important for mission-critical applications. Ubuntu Core, the distribution of the Ubuntu open-source operating system dedicated to the internet of things, is fully built on snaps. In addition to the security benefits that Ubuntu Core brings, it runs a daemon that exposes a REST API. Devices are therefore accessible via an API; a key pre-requisite for remote management delivered out of the box with Ubuntu Core.

Hardware requirements

The multiple services that make up this solution are to be executed at the edge. Implementation therefore requires hardware suitable for the conditions in the field of operations. Such hardware will need sufficient computational capabilities to perform as a worker or master node in an IoT cluster. The capabilities required will vary depending on the complexity of the intended deployment. However, computational power will have a direct effect on the CAPEX of the deployment, since it affects the number of IoT devices that can be served by a single gateway. The more devices supported by a single gateway, the lower the total investment cost.

Securing communication

Device management requires a server infrastructure to mediate communications between devices and the management interface clients. MQTT is the communication protocol implemented in our reference architecture. MQTT is an ISO-standard publish-subscribe messaging transport protocol designed to be lightweight, which makes it well suited to constrained bandwidth.

In this implementation, a client application is installed on devices as a snap. A broker service is installed on the server side. Communications occur through an encrypted port, and messages are entirely encrypted over the wire. Furthermore, client-server authentication is carried out using certificates issued by a single authority. The private keys remain securely stored on-device; they are never communicated over the wire. This measure adds a layer of security to protect the privacy and integrity of exchanged messages.
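
The article doesn’t name a specific broker implementation; purely as an illustration, a TLS listener with mutual certificate authentication might be configured like this with the open-source Mosquitto broker (file paths are placeholders):

cat >/etc/mosquitto/conf.d/tls.conf <<EOF
# Sketch: encrypted MQTT listener; clients must present a certificate
# signed by the same authority.
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key
require_certificate true
EOF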

Service orchestration at the edge

The device management solution is made of several components implemented as microservices hosted at the edge. Each service plays a key role in the overall solution. These roles are described in this section.

MQTT broker

This service implements a message broker based on the MQTT protocol. Devices connect to this service through a client. The broker receives and sends messages in channels subscribed to by devices in a publisher-subscriber pattern.

Identity service

To maintain security and integrity, access to channels hosted by the broker should only be granted to trusted devices. The identity service vets devices requesting access to these channels. This vetting is done by verifying devices’ public keys against a registry of pre-authorised devices. The verification is carried out upon the first connection. If successful, the service issues certificates and authentication details, which are cached on the devices for subsequent requests.

Device twin

Once connected and authenticated, devices can post telemetry data to channels hosted by the broker. Telemetry data is transmitted in time-series to the broker and stored on the server. Digital ‘twins’ can thus be effectively created for each managed device.

Management UI

Graphical web interfaces are more practical for managing fleets of IoT devices. An open-source management user interface was created within the scope of the reference solution. It provides access to telemetry, remote management, as well as authentication.

Kubernetes

Kubernetes can be used to deploy the services described above at large scale, with availability levels adequate for mission-critical applications. The deployment can be carried out on edge gateways.

Conclusion

We have described how a simple and secure device management solution can be assembled from open-source components. Openness allows for customisation. A more detailed description of the implementation of each component will be provided in an upcoming Canonical whitepaper. Links to the source code of our implementation will also be shared in that whitepaper, for anyone to reuse and improve.

09 October, 2019 08:30PM

hackergotchi for Cumulus Linux

Cumulus Linux

The case for open standards: an M&A perspective

Very few organizations use IT equipment supplied by a single vendor. Where heterogeneous IT environments exist, interoperability is key to achieving maximum value from existing investments. Open networking is the most cost effective way to ensure interoperability between devices on a network.

Unless your organization was formed very recently, chances are that your organization’s IT has evolved over time. Even small hardware upgrades are disruptive to an organization’s operations, making network-wide “lift and shift” upgrades nearly unheard of.

While loyalty to a single vendor can persist through regular organic growth and upgrade cycles, organizations regularly undergo mergers and acquisitions (M&As). M&As almost always introduce some level of heterogeneity into a network, meaning that any organization of modest size is almost guaranteed to have to integrate IT from multiple vendors.

While every new type of device from every different vendor imposes operational management overhead, the impact of heterogeneous IT isn’t universal across device types. The level of automation within an organization for different device classes, as well as the ubiquity and ease of use of management abstraction layers, both play a role in determining the impact of heterogeneity.

The Impact of Standards

Consider, for a moment, the average x86 server. Each server vendor offers a different Lights Out Management (LOM) system to perform low-level tasks on the server, such as installing a hypervisor, microvisor, or operating system.

For IT operations teams that have yet to embrace automation, the different LOM systems are an irritation, but not a significant challenge. For those that have embraced automation, heterogeneity among server vendors has traditionally been a one-time investment to incorporate the new vendor’s LOM, and even that’s changing with the introduction of the Redfish API, and subsequent standardization of LOM automation interfaces.

Similarly, the rise of hypervisors abstracted the details of hardware away from workloads. X86 virtualization not only enabled significant workload consolidation, it dramatically reduced the management overhead per workload. So much so that it would be nearly impossible to operate modern enterprise IT without it. Enterprises just have too much IT to do things the old fashioned way.

The past decade has told a similar tale for the storage industry: Standardization and abstraction drove commoditization. In turn, vendors were forced to invest in ease of use in order to remain relevant. Competition can do amazing things, and in the tech industry the most important result of competition is the adoption of standards.

Standards, Automation, and Openness

The adoption of IT automation in networking lags behind compute, storage, cloud, and so forth. This isn’t to say that network automation isn’t possible, but in many cases it’s still significantly more difficult.

The adoption of network automation has lagged in part because the dominant network vendors have been slow to embrace it. Decades of using proprietary standards to drive lock-in makes for a hard habit to kick; however, vendors no longer have a choice but to standardize and interoperate.

No one networking vendor dominates enough of the market, and it’s likely that no one vendor ever will. Different companies will use different networking vendors. As stated earlier, M&A activity will result in heterogeneous networks, and organizations have no choice but to make them work.

Like all other aspects of IT, networking has grown more complicated as organizations scale. IT automation in other parts of the organization has allowed those teams to respond quickly to change, and this has increased the rate of change with which networking teams have to deal.

Without automation, networking teams simply cannot keep up with the rest of IT. The result is either network administrator burnout, other IT teams working around networking, or—more frequently—some combination of both.

Automation of networks is difficult, if not impossible, without open standards to allow networking devices to interoperate, so networking vendors have been forced—however reluctantly—to learn to play ball.

Choosing the Platform for Your Future

It’s important not to conflate basic implementation of open standards with open networking, or openness more generally. Networking vendors that aren’t truly open networking vendors view the “natural state” of a network as consisting entirely of their products. To these vendors, interoperability is a means to an end. That end is nothing more than giving customers a path to “sunset” networking equipment from other vendors.

Cumulus Networks has a more inclusive view of the datacenter. Diversity is inevitable. Open networking means more than open standards. It means allowing customers to choose between multiple hardware vendors. It means using a network operating system based on Linux, and other open source technologies.

Open networking isn’t about creating lock-in. It’s not about treating the inevitable network heterogeneity of M&A activity as an aberration. Open networking is about building a sustainable platform for IT automation that acknowledges and embraces heterogeneity.

The IT automation crafted today will still be used by your organization’s infrastructure teams for at least the next decade. The pace of change isn’t going to slow, and time is running out to address the elephant in the room.

Automation is a necessity for IT operations to be sustainable in the long term, and building IT automation without open networking is like building a house without a foundation. The alternative—monoculture—is simply unsustainable.

09 October, 2019 04:36PM by Katherine Gorham

hackergotchi for Ubuntu developers

Ubuntu developers

Oliver Grawert: Attaching a CPU fan to a RPi running Ubuntu Core

When I purchased my Raspberry Pi4 I kind of expected it to operate under similar conditions as all the former Pi’s I owned …

So I created an Ubuntu Core image for it (you can find info about this at Support for Raspberry Pi 4 on the snapcraft forum).

Running LXD on this image off a USB 3.1 SSD to build snap packages (it is faster than the Ubuntu Launchpad builders used for build.snapcraft.io, so a pretty good device for local development), I quickly noticed that the device throttles a lot once it gets a little warm, so I decided I needed a fan.

I ordered this particular set at Amazon and dug up a circuit to be able to run the fan at 5V without putting too much load on the GPIO managing the fan state … luckily my “old parts box” still had a spare BC547 transistor and a 1k resistor that I could use, so I created the following addon board:

[Image: fancontrol.png – circuit]

[Image: fan-hw.png – finished addon board (with a picture of how it gets attached)]

So now I had an addon board that can cool the CPU, but the fan indeed needs some controlling software. This is easily done with a small shell script that echoes 0 or 1 into /sys/class/gpio/gpio14/value … this script can be found on my github account as fancontrol.sh
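
Purely as an illustration (this is not the actual fancontrol.sh; the threshold, polling interval and prior GPIO export are assumptions), the control loop amounts to something like:

#!/bin/sh
# Sketch of a fan control loop: poll the SoC temperature and toggle
# GPIO 14 around a 50 C threshold.
# Assumes the GPIO was already exported (echo 14 > /sys/class/gpio/export).
GPIO=/sys/class/gpio/gpio14/value
TEMP=/sys/class/thermal/thermal_zone0/temp
while true; do
  t="$(cat $TEMP)"        # reported in millidegrees Celsius
  if [ "$t" -gt 50000 ]; then
    echo 1 > "$GPIO"      # fan on
  else
    echo 0 > "$GPIO"      # fan off
  fi
  sleep 5
done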

Since we run Ubuntu Core we indeed want to run the whole thing as a snap package, so let's quickly create a snapcraft.yaml file for it:

name: pi-fancontrol
base: core18
version: '0.1'
summary: Control a raspberry pi fan attached to GPIO 14
description: |
  Control a fan attached to a GPIO via NPN transistor
  (defaults to GPIO 14 (pin 8))

grade: stable
confinement: strict
architectures:
  - build-on: armhf
    run-on: armhf
  - build-on: arm64
    run-on: arm64

apps:
  pi-fancontrol:
    command: fancontrol.sh
    daemon: simple
    plugs:
      - gpio
      - hardware-observe

parts:
  fancontrol:
    plugin: nil
    source: .
    override-build: |
      cp -av fancontrol.sh $SNAPCRAFT_PART_INSTALL/

The snap is based on core18, so we add a base: core18 entry. It is very specific to the Raspberry Pi, so we also add an architectures: block that makes it build and run only on arm images. Then we need a very simple apps: entry that spawns the script as a daemon, allows it to read the temperature info via the hardware-observe interface, and also allows it to write to the gpio interface we connect the snap to, echoing the 0/1 values into the sysfs node for the GPIO. A simple fancontrol part just copies the script into the snap package, and off we go!

The whole code for the pi-fancontrol snap can be found on github, and there is indeed a ready-made snap for you to use in the snap store at https://snapcraft.io/pi-fancontrol

You can easily install it with:

snap install pi-fancontrol
snap connect pi-fancontrol:gpio pi4-devel:bcm-gpio-14
snap connect pi-fancontrol:hardware-observe

… and your fan should start to fire up every time your CPU temperature goes above 50 degrees….

09 October, 2019 03:10PM

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

What’s new in OTX

AT&T Alien Labs and the Open Threat Exchange (OTX) development team have been hard at work, continuing our development of the OTX platform. As some of you may have noticed, we’ve added some exciting new features and capabilities this last year to improve understanding within the OTX community of evolving and emerging threats.

Malware analysis to benefit all

The biggest (and latest) new feature within OTX is the ability to submit samples to be analyzed in our backend AT&T Alien Labs systems. (Alien Labs is the threat intelligence unit of AT&T Cybersecurity.) You can now upload files and URLs for analysis, with access to results within minutes. Submissions can be made through the OTX portal (as shown below) or programmatically through the API. From the Submit Sample page, you’ll be able to see all of your submissions with a...

Posted by: Amy Pace

Read full post

09 October, 2019 01:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Designing an open source machine learning platform for autonomous vehicles

Self-driving cars are one of the most notable technology breakthroughs of recent years. The progress that has been made from the DARPA challenges in the early 2000s to Waymo’s commercial tests is astounding. Despite this rapid progress, much still needs to be done to reach full autonomy without humans in the loop – an objective also referred to as SAE Level 5. Infrastructure is one of the gaps that need to be bridged to achieve full autonomy.

Embedding the full compute power needed to make vehicles fully autonomous may prove challenging. On the other hand, relying on the cloud at scale would pose latency and bandwidth issues. Therefore, vehicle autonomy is a case for edge computing. But how should AI workloads, data storage, and networking be distributed and orchestrated at the edge for such a safety-critical application? We propose an open-source architecture that addresses these questions.

Challenges in the field

Embedding compute into vehicles

Cars are increasingly becoming computers on wheels, and as such, they will need to be powered by an embedded operating system. The need for advanced security is evident for automotive applications. Due to the complexity of full autonomy, the optimal OS will need to be open source, so as to leverage technical contributions from a broad range of organisations. The key characteristics that such an OS needs to have are therefore security and openness. Considering these requirements, Ubuntu Core is a strong operating system candidate for the vehicles of the future.

Deep learning services at the edge

Embedding computers on board will certainly make vehicles smarter. But how smart does an embedded computer need to be to make decisions in real time in an environment as complex as real-world traffic? The answer is: extremely smart, much more so than any embedded mobile computer currently is. Vehicles will need to map their dynamic environment in real time and at high speed. Obstacle avoidance and path planning decisions will need to be taken every millisecond. It would take more hardware capabilities than are currently practical to embed in every single vehicle to tackle these challenges. Therefore, distributing AI compute workloads between embedded computers, edge gateways, and local data centers would be a more promising approach.

While environment mapping workloads can run on the vehicle’s embedded computer, motion planning workloads are better executed at the edge. This means that cars would continuously send the localisation data they collect to edge nodes installed in the successive areas they pass through. Edge nodes would be context-aware, since they would store information specific to the area they are located in. Armed with context-specific information and aggregated data collected from passing cars, edge nodes would be much more efficient at optimising motion planning than vehicle embedded computers.

Global optimisation and model training in the cloud

Some tasks that are crucial for autonomous driving are performed in the most optimal way in a central core. Path planning, for instance, if performed solely at the vehicle level would lack information pertaining to the overall state of traffic. However, if a central core supports vehicular path planning workloads, it could leverage traffic data aggregated over several areas. Global traffic information would then be extracted from this data and fed back to individual vehicles for better coordinated planning.

Since mapping, avoidance and planning decisions are made based on machine learning models, continuous training of these models is required in order to achieve near-perfect accuracy. Central clouds are best equipped to support model training tasks. The deployment of improved models would then be orchestrated from the core cloud to edge nodes, and finally vehicle embedded computers. Furthermore, central cores could be resorted to for transfer learning. The efficiency and accuracy of the machine learning models could be drastically improved by storing knowledge gained from a traffic situation at one location and applying it in a similar situation at another location.

A machine learning toolkit for automotive

Introducing Kubeflow

In order to implement an open-source machine learning platform for autonomous vehicles, data scientists can use Kubeflow: the machine learning toolkit for Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning workflows simple, portable and scalable. It consists of various open-source projects which can be integrated to work together. This includes Jupyter notebooks and the TensorFlow ecosystem. However, since the Kubeflow project is growing very fast, its support is soon going to expand over other open-source projects, such as PyTorch, MXNet, Chainer, and more.

Kubeflow allows data scientists to utilize all base machine learning algorithms. This includes regression algorithms, pattern recognition algorithms, clustering and decision making algorithms. With Kubeflow data scientists can easily implement tasks which are essential for autonomous vehicles. These tasks include object detection, identification, recognition, classification, and localisation. 

Getting Kubeflow up and running

As Kubeflow works on top of K8s, the Kubernetes cluster has to be deployed first. This may be challenging, however, as gateways are tiny devices with limited resources. MicroK8s is a seamless solution to this problem. Designed for appliances and IoT, MicroK8s enables the implementation of Kubernetes at the network edge. For experimenting and testing purposes, you can get MicroK8s up and running on your laptop in 60 seconds by executing the following command:

sudo snap install microk8s --classic

This assumes you have snapd installed. Otherwise, refer to snapd documentation first. You can then follow this tutorial to install Kubeflow on top of MicroK8s.
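
Before installing Kubeflow it’s worth confirming that the cluster is ready and enabling the basic add-ons (a sketch; the tutorial lists the exact prerequisites):

microk8s.status --wait-ready   # block until the cluster is up
microk8s.enable dns storage    # add-ons most workloads need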

However, as some operations are performed in the network core, data scientists have to be able to use Kubeflow in the core too. Although MicroK8s could be used again, Charmed Kubernetes is a better option for such data center environments. Designed for large-scale deployments, Charmed Kubernetes is a flexible and scalable solution. By using charms for Kubernetes deployment in the core, data scientists can benefit from full automation, a model-driven approach and simplified operations.

Conclusions and next steps

An open-source machine learning platform for autonomous vehicles can be designed based on Kubeflow, running on a Kubernetes cluster. While MicroK8s is perfectly suitable for the edge, Charmed Kubernetes is a better fit for the network core.

To learn more about Kubeflow, visit its official website.

Found an issue with MicroK8s? Report it here

09 October, 2019 02:47AM

October 08, 2019

Ubuntu Blog: The State of Robotics – September 2019

The Ubuntu robotics team presents The State of Robotics, a monthly blog series that will round up exciting news in robotics, discuss projects using ROS, and showcase developments made by the Ubuntu robotics team and community. Every day, people make contributions to the world of robotics, and every day, work goes unnoticed that could change the course of projects across the globe. This could be anything from a software patch to putting the final touches on a ROS-enabled robot dog. But no longer. The hope is that this will become a highlight reel and a community piece where folks will be able to find out what’s going on in the world of ROS and Ubuntu robotics.

This, our first instalment, will be heavy on the Ubuntu Robotics team’s own work. But in the future, if you are working on a project using ROS and/or Ubuntu and you want us to talk about it, let us know. Send a summary of your work to robotics.community@canonical.com, and it might feature in next month’s blog! Now, let’s discuss September.

ROS 1 and ROS 2 security scanning and bug fixing

Despite being very much in use, ROS 1 inevitably still has certain bugs and CVEs (common vulnerabilities and exposures). The Ubuntu Security team went through several core ROS 1 and 2 packages, using a variety of scanning tools, and their own eyeballs, to look for bugs (especially those people could exploit). They found 31 bugs in ROS 1, and have already fixed 21 of them (with six more fixes in review). Three of these were logged as CVEs, each of them an overflow in ros_comm, a pivotal ROS 1 component. In ROS 2, they discovered six bugs, with five corrected so far in collaboration with eProsima, including infinite loops and buffer overflows in Fast RTPS, the default DDS implementation.

The Ubuntu Security team does this proactive security scanning and bug fixing continuously to ensure the safety of ROS users. In this blog series, there will be monthly updates on the fixes and patches that were implemented that month to make ROS a more secure environment. Going forward, this will be a rundown of the numbers and a discussion of any significant fixes, because at Canonical we know that robot security matters.

Robotics competitions

Two robotics competitions were announced this month that look really exciting. In the future the Ubuntu robotics team will be getting involved, but for now that’s just a dream:

First up is NASA’s Space Robotics Challenge Phase 2. This challenge was announced recently for teams to solve a lunar in-situ resource utilisation (ISRU) mission. It requires the development of software to allow a virtual team of robotic systems to operate autonomously to successfully achieve specific tasks. To sign up and for more information, click here.

Secondly, the Turtlebot3 Autorace is an annual autonomous driving competition held in Korea. For three years it has been a stage for excellent ideas and creative thinking around adapting the highly versatile Turtlebot3. The officials at Turtlebot3 Autorace 2019 have not disclosed much information yet. But the competition is already assured to be a whole lot of fun as the official platform is the Turtlebot3!

Canonical takes the helm of the ROS 2 Security working group

Canonical has ramped up its security effort on ROS 2 throughout the year. Things really took off in June when the Ubuntu Robotics and Ubuntu Security teams joined the Montreal Snapcraft Summit to host a ROS 2 security hackathon. They invited engineers from the community and companies like Open Robotics and Amazon, and everyone sat at the same table and hacked away on SROS 2.

Fast forward to August to see Canonical joining the ROS 2 Technical Steering Committee. Now in September, Canonical has stepped into a leadership role as Chair of the ROS 2 Security working group to help coordinate efforts around the entirety of ROS 2’s security features.

Teleop_tools arrive in ROS 2 Dashing!

The Ubuntu robotics team is happy to announce that teleop_tools is now ported to ROS 2 Dashing! As its name suggests, the teleop_tools package is a collection of tools for the teleoperation of a robot from your mouse, keyboard or joystick. It is easy to get:

[sudo] apt-get install ros-dashing-teleop-tools

Big in Japan, ROSCon Japan!

Canonical made a showing at this year’s ROSCon Japan. Our own Ted Kern presented a lightning talk on the motivations behind adding mypy support to ament_lint, the Canonical attendees demoed a robot running ROS snaps on Ubuntu Core, and there were loads of conversations with individuals in the Japanese robotics community – thanks to our friends and colleagues from Ubuntu Japan for interpreting. It was enlightening to see how much is happening with ROS in the Japanese maker space and the corporate space alike. Read about it in more detail in the ROSCon Japan 2019 blog post.

Are cheat sheets really cheating?

Because ROS 2 is all new and shiny, it is sometimes difficult to remember all those new command lines. Not that they are super complicated, but they are many! That’s why the Ubuntu Robotics team has put together some cheat sheets to help you remember. One of them concerns ROS 2 CLI tools and the other concerns colcon. Feel free to share them, print and pin them above your screen. Of course, feel free to contribute to them too, they’re all on Github.

catkin_make_isolated: support ignored packages

When building a ROS 1 snap, a developer can build a subset of the packages in the workspace, or build the entire workspace. However, it’s not unusual to have a few packages in the workspace that don’t build properly; in fact, one of our customers recently ran into exactly that scenario. The typical way to disable packages in a workspace is to create a `CATKIN_IGNORE` file within the package, but that disables it for all users of the workspace, and in this case they just wanted to disable it in the snap. It turns out that `catkin_make_isolated` doesn’t support such an option, so the Ubuntu Robotics team added one. Once it’s released, snapcraft will be able to utilise it too.
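
For reference, the traditional workspace-wide approach mentioned above is just a marker file (package name hypothetical):

touch src/my_broken_pkg/CATKIN_IGNORE   # disables the package for everyone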

Snapcraft: add the ability to ignore packages in a colcon workspace

After a customer needed the ability to ignore packages in ROS 1 (using Catkin), the Ubuntu Robotics team realised that customers would also want that functionality when building a ROS 2 snap. So they added support for it to the Colcon plugin as well (thankfully Colcon already supported it, so no upstream changes were needed).
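
Outside of snapcraft, colcon’s own equivalents look like this (package name hypothetical):

colcon build --packages-ignore my_broken_pkg   # skip it for a single build
touch src/my_broken_pkg/COLCON_IGNORE          # skip it persistently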

“Grrr what’s that message body again?”

It can get frustrating when topics need publishing, services need calling, or an action goal needs sending from the terminal in ROS 2. But “grrr” no longer, the Ubuntu robotics team has good news. They landed a series of pull requests (299, 300, 301) that enable autocompletion for the topic/request/goal message body! This means you can publish topics, call services and send action goals quicker and more easily, to your heart’s content.

Shopify acquires 6 River Systems

ROS-powered 6 River Systems makes what are called collaborative robots (i.e. robots that work alongside flesh-and-blood staff), robots that help workers find items in a warehouse. It was just acquired by Shopify, which means, in Open Robotics’ words: “ROS will be handling packages headed all over the globe!”

Outro

September was a busy month, and October is already at full speed, so watch out for next month’s post. This series is put together entirely by the Ubuntu Robotics team, but they don’t want to take all of the limelight. Although the work they do is worth talking about and will be on stage in this series, it should also be a stage for the community: developers, tinkerers and hackers. If there’s a project you are working on or that you think should be talked about, let us know. If it’s ROS and/or Ubuntu related, we’d love to hear about it. Send a summary to robotics.community@canonical.com, and if we want to feature it next month, we’ll be in touch. Thanks for reading.

08 October, 2019 03:48PM

hackergotchi for Tails

Tails

Tails report for September, 2019

Releases

These are some of the changes that were introduced in Tails 3.16:

  • Removed pre-generated Pidgin accounts
  • Removed LibreOffice Math
  • Upgraded Tor Browser to 8.5.5

Code

  • We started to integrate Tor Browser 9.0 (#16356).

  • We started working on the upgrade to Thunderbird 68 (#16771).

  • We did lots of work to improve the reliability of our test suite.

  • We did some initial research and tests on using Portals to improve the UX of saving downloaded files from Tor Browser (#10422, #15678).

  • We did some initial research on redesigning, in a Wayland-compatible way, our current sudo-based privilege separation model (#12213 and subtasks).

  • We improved the UX of the Greeter and fixed a few of its most annoying bugs (#16095 and the tickets it blocks). This work will hopefully land in time for Tails 4.0 :)

Documentation and website

  • We completely rewrote the instructions on how to back up the persistent volume.

  • We documented how to right-click on Mac. (#15718)

  • We proposed using Trimage to better compress the images on our website. (#17099)

  • We agreed on having a "People" page. (#17046)

User experience

  • We published a job offer on illustrations on what is Tails and how it works.

  • We did some stats on how well people upgrade their Tails, with data from April 2019. (#17069#note-4)

    • 16.6% of boots (3800/day) had no direct automatic upgrade path to the latest version because they were more than 3 months old.

    • 3.8% of boots (860/day) were stuck before 3.6, which had broken all automatic upgrades. The impact of breaking all automatic upgrades in 3.6 was still huge even 1 year later.

Hot topics on our help desk

Infrastructure

  • We upgraded our very old Jenkins to the current LTS version (#10068). This in turn allowed us to implement a bunch of improvements and bugfixes that had been blocked by this postponed upgrade.

  • We kept working on making our web translation platform ready for prime-time. We're almost there!

Funding

  • We submitted a joint grant proposal with Tor and the Guardian Project to the DRL Internet Freedom program.

  • We added Monero and Zcash as cryptocurrencies in which you can donate to Tails.

  • We received a $1 000 donation from TOP10VPN.

  • We prepared most of the content for our upcoming donation campaign: banner, blog posts, email, tweets, etc. (#16096)

Outreach

Past events

  • sajolida attended the 1st Café Internet on September 19 in Mexico City.

Upcoming events

  • intrigeri will facilitate a "Discover Tails and translate it into your own language" session at the Mozilla Festival on October 26-27 in London (UK).

Translations

All the website

  • de: 38% (2189) strings translated, 10% strings fuzzy, 35% words translated
  • es: 54% (3095) strings translated, 4% strings fuzzy, 45% words translated
  • fa: 30% (1721) strings translated, 11% strings fuzzy, 32% words translated
  • fr: 90% (5148) strings translated, 2% strings fuzzy, 89% words translated
  • it: 33% (1887) strings translated, 8% strings fuzzy, 29% words translated
  • pt: 24% (1383) strings translated, 9% strings fuzzy, 20% words translated

Total original words: 59107

Core pages of the website

  • de: 66% (1178) strings translated, 15% strings fuzzy, 68% words translated
  • es: 88% (1572) strings translated, 4% strings fuzzy, 88% words translated
  • fa: 34% (606) strings translated, 14% strings fuzzy, 31% words translated
  • fr: 94% (1669) strings translated, 3% strings fuzzy, 94% words translated
  • it: 62% (1114) strings translated, 17% strings fuzzy, 63% words translated
  • pt: 45% (801) strings translated, 14% strings fuzzy, 47% words translated

Total original words: 16513

Metrics

  • Tails has been started more than 746 622 times this month. This makes 24 887 boots a day on average.

How do we know this?

08 October, 2019 03:06PM

hackergotchi for Whonix

Whonix

Whonix Networking Implementation - Developer Documentation - Feedback Wanted!

@Patrick wrote:

In effect, this is the diff between Debian buster and Whonix as far as changes to the network configuration are concerned.

Whonix Networking Implementation - Developer Documentation was just now updated.

On the above page you can find all the networking-related changes that Whonix applies:

  • Location of the files on the disk in installed Whonix.
  • The location of the file in Whonix source code on the disk.
  • A link to the web version of the file on github.
  • A comment if
    • not installed by default
    • gateway or workstation only
    • Non-Qubes-Whonix or Qubes-Whonix only
  • And most importantly, a summary of what that file is supposed to do.

That should give you a pretty good overview of how Whonix implements its networking. By following the links to the actual files and reviewing them, you might gather enough information to create your own Whonix manually. That may not be necessary, but it can never hurt to have more people who understand Whonix well, since through this review process issues might be revealed and fixed.


Feedback Wanted!

Does this wiki page make it easier to understand how networking is implemented in Whonix?

Anything about the formatting that could be improved? Such as should each file get its own chapter or is that too much?

If the first category documented here, networking, proves helpful, other categories can be documented as well. And of course, it would also be trivial to have an “all-in-one” wiki page which documents all changes Whonix makes to Debian.


Qubes-Whonix (package qubes-whonix) is not yet fully documented on that wiki page, but there are extensive comments in the source code.


This time it will be easier to maintain and keep up to date.

There were previous attempts to document how Whonix is implemented. But since source code changes over time (packages are reorganized, source files move around), it was too much effort to keep the design documentation in sync, so that didn’t happen. It was also too broad: Whonix does not only reconfigure the network but also enhances other parts such as security and usability. Those pages were too long and therefore not convenient; as a result, not many people were reading them.


The way this works is by having a simple markup in comments.

For example /etc/network/interfaces.d/30_non-qubes-whonix contains:

#### meta start
#### project Whonix
#### category networking
#### non_qubes_whonix_only yes
#### gateway_only yes
#### description
## network interfaces configuration eth0 (external network interface) and eth1 (internal network interface)
##
## static network configuration
##
## eth0
#address 10.0.2.15
#netmask 255.255.255.0
#gateway 10.0.2.2
##
## eth1
#address 10.152.152.10
#netmask 255.255.192.0
#### meta end

These comments are then processed by the packaging-helper-script functions pkg_descr_creator and pkg_descr_merger, which autogenerate wiki source code that can simply be copied/pasted to the wiki.

The #### category field allows the same documentation to be reused for different categories. For example, is /etc/sysctl.d/tcp_hardening.conf network configuration or security configuration? It’s both. Therefore it can be mentioned on the wiki page which documents the Whonix networking implementation as well as on another wiki page which documents the security-related changes made by Whonix.

Posts: 3

Participants: 2

Read full topic

08 October, 2019 01:55PM by @Patrick

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Container registry for Kubernetes with GitLab

Container orchestration solutions such as Kubernetes allow development teams to be quick and agile with their software deployments.

“One of the main features of Kubernetes is the ability to reduce the deployment of a new version of a piece of software down to a simple image tag which can be applied at the end of a command,” said Tytus Kurek, Product Manager for Charmed Kubernetes at Canonical.

This opens the door to streamlined deployments but creates another problem: how do we streamline? We can do this manually, but then it’s not very streamlined. Or we can do it automatically, but we need to be smart about it: we can’t just deploy as soon as a new version is released, we need to check it first. This is where a container registry and CI/CD come in.

Prerequisites

Before we get started, we have to find ourselves a healthy Kubernetes cluster. Canonical offers both Charmed Kubernetes and MicroK8s solutions which are fully compliant with the upstream Kubernetes project. While Charmed Kubernetes is suitable for large-scale deployments in data centers or public clouds, MicroK8s was designed for workstation and edge appliances. You can install MicroK8s on your laptop at no cost by following this quick tutorial. If you’re on Windows or Mac you may need to follow the Multipass guide first to get a VM with Ubuntu.

Using GitLab as a container registry for Kubernetes

Apart from Kubernetes, we will also need GitLab – a web-based DevOps lifecycle tool. GitLab can store up to 10 GB per project in its container registry. You can incorporate the building of these containers into your own CI/CD pipeline or you can use GitLab’s own CI/CD functionality to do this for you.

Setting up the container registry

Creating the container registry on GitLab involves completing the following steps (a sketch of the corresponding docker commands follows the list):

  • Create a project – you can create a new project or use an existing one.
  • Create a Dockerfile – create a Dockerfile for an image to be built and stored in GitLab.
  • Enable Container Registry – enable Container Registry feature in GitLab’s settings.
  • Build an image – build an image from the Dockerfile; make sure you can successfully launch a container from this image.
  • Push the image – push the image to the project’s repository in GitLab.
  • Create a token – create a token that will be used by Kubernetes when pulling the image from GitLab.
  • Pull the image – at this point, you can start using images stored in GitLab when creating deployments in Kubernetes.
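
Steps four and five, for instance, come down to a handful of docker commands; a sketch, reusing the image path from the command further below (the token from step six serves as the password for docker login):

docker login registry.gitlab.com
docker build -t registry.gitlab.com/<YOUR_USERNAME>/gitlabregistries .
docker push registry.gitlab.com/<YOUR_USERNAME>/gitlabregistries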

Pulling the image into a Kubernetes deployment is then as simple as executing the following command:

kubectl create deployment gitlabrepositories --image=registry.gitlab.com/<YOUR_USERNAME>/gitlabregistries

As the whole process requires a bunch of manual steps, we decided to create a detailed tutorial that you can follow step-by-step to get your container registry for Kubernetes created in GitLab.

Take The Tutorial

Conclusions

Using GitLab as a container registry for Kubernetes allows you to streamline your application deployments. You can check out GitLab’s documentation on how to take your newly learned skills and apply them to your own CI/CD or create one in GitLab.

08 October, 2019 01:24PM

October 07, 2019

The Fridge: Ubuntu Weekly Newsletter Issue 599

Welcome to the Ubuntu Weekly Newsletter, Issue 599 for the week of September 29 – October 5, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

07 October, 2019 10:05PM

Jono Bacon: Mary Thengvall on Developer Relations, Reporting, and Growth

In recent years, developers have become an increasingly important audience for organizations to build relationships with. Not only are developers actively building technology, but they are also often helping to shape decisions inside of businesses that cover product, awareness, and beyond.

As such, Developer Relations has become an increasing focus for many organizations. How, though, do you build real relationships with developers?

Mary Thengvall has been actively involved in Developer Relations for a number of years in her experience at O’Reilly, Chef, Sparkpost, and as an independent consultant. She is the author of The Business Value Of Developer Relations and maintains DevRel Weekly.

In this episode of Conversations With Bacon, we unpick what Developer Relations is, Mary’s ascent in the industry, and how this work can and should be integrated into a business. Mary also shares her perspectives on what success looks like, how technical DevRel people should be, where this work should ideally report, and much more.

A really fascinating discussion and well worth a listen!

Listen

The post Mary Thengvall on Developer Relations, Reporting, and Growth appeared first on Jono Bacon.

07 October, 2019 09:07PM

hackergotchi for Elive

Elive

Elive 3.7.14 beta released + 64 BIT

After a long time of development, the Elive Team is proud to announce the release of beta version 3.7.14!
This new version includes:

  • Updated to a Debian-Buster base, with kernel 5.2.9 and lots of updated applications
  • Hardware support for UEFI, SecureBoot, nvme disks, optional 64BIT builds, etc…
  • Dedicated dock-bar with multiple features!
  • Audio player directly on the desktop with controls and covers
  • Enhanced stability stronger than ever before
  • Temporary desktop for development based on E16, offering an
  • Find out more on the Elive Linux website.

07 October, 2019 07:04PM by Thanatermesis

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Blog: Five Key Kubernetes Resources for IoT

IoT workloads are moving from central clouds to the edge, for reasons pertaining to latency, privacy, autonomy, and economics. However, workloads spread over several nodes at the edge are tedious to manage. Although Kubernetes is mostly used in the context of cloud-native web applications, it could be leveraged for managing IoT workloads at the edge. A key prerequisite is lightweight and production-grade k8s distributions like MicroK8s, running on Ubuntu Core. In this blog, we describe the most compelling Kubernetes resources for IoT.

ReplicaSets

A ReplicaSet is a resource that ensures a specified number of pods are always kept running. Should a pod disappear for any reason, the ReplicaSet notices the missing pod and creates a replacement.

Implementation

ReplicaSets may be used for backing up mission-critical devices. In this schema, a worker node running a critical workload would be backed up with an additional one running idle. In case of failure, the ReplicaSet in the master node would reschedule the workload on the backup. This redundancy reduces the probability of critical workloads becoming unavailable.
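
As a sketch of what this could look like (names and image are hypothetical), a ReplicaSet keeping two replicas of a critical workload alive might be declared as follows:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: critical-app
spec:
  replicas: 2                  # one active pod plus one backup
  selector:
    matchLabels:
      app: critical-app
  template:
    metadata:
      labels:
        app: critical-app
    spec:
      containers:
      - name: critical-app
        image: registry.example.com/critical-app:1.0
EOF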

Possible use cases

  • Smart city: this approach can increase the availability of security or surveillance applications. A ReplicaSet would, for instance, reschedule a camera application on a backup camera node in the event of a failure.
  • Energy management: for battery-powered devices, this approach could double the time between failures. In this case, an idle backup would be activated when the primary node runs out of power. This would halve the cost of maintenance; the savings could be significant for installations that are difficult to access, like wind farms.

    DaemonSets

    DaemonSets are used to run a pod on all cluster nodes. This is a contrast to ReplicaSets that are used for deploying a set number of pods anywhere in a cluster. Pods run with DaemonSets could contain infrastructure-related workloads that perform system-level operations, such as logging or monitoring. Alternatively, DaemonSets can run a pod on a target node in the cluster.

    Implementation

    DaemonSets can manage workloads on a cluster made of various groups of IoT devices. If a label is attached to each group, DaemonSets will run group-specific workloads. This is achieved by creating a DaemonSet on the master node, with a label selector for target worker nodes. Should a node be a single target for a workload, a unique label should be attached to it.
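
    As an illustrative sketch, a DaemonSet like the following would run a monitoring pod only on nodes carrying a hypothetical device-group: gateway label (names and image are placeholders):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: gateway-monitor
    spec:
      selector:
        matchLabels:
          app: gateway-monitor
      template:
        metadata:
          labels:
            app: gateway-monitor
        spec:
          nodeSelector:
            device-group: gateway          # only labelled nodes run the pod
          containers:
          - name: monitor
            image: example/monitor:1.0     # hypothetical image

    Dropping the nodeSelector would make the pod run on every node in the cluster instead.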

    Possible use cases

    • Manufacturing: with DaemonSets, large-scale manufacturing execution systems can be powered by Kubernetes. This will be achieved by running containerised event monitoring workloads in pods deployed on factory industrial machines.
    • Security: some security workloads need to be executed in a reliable manner on every node. DaemonSets would be the right resource for such cases.

    Jobs

    The Job resource is intended for running tasks that terminate after execution. Associated containers are not restarted when the processes within them finish successfully. This contrasts with ReplicaSets and DaemonSets, which run continuous workloads that are never considered complete.

    Job resources run pods immediately after creation. However, some jobs need to run at a specific time in the future, or repeatedly at a fixed interval. Such scheduled tasks are known as cron jobs on Linux-based operating systems, and Kubernetes supports them through the CronJob resource.

    Implementation

    A Job manifest must be created on the master node to schedule a workload on workers. For single completable tasks, the sequence in which workloads execute can be specified. For CronJobs, the schedule and the periodicity must be specified.
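
    As a sketch, a CronJob along these lines would run a nightly upload task (the schedule, names, and image are hypothetical; at the time of writing the CronJob API is served under batch/v1beta1):

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: upload-sensor-data
    spec:
      schedule: "0 2 * * *"                 # every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: uploader
                image: example/uploader:1.0   # hypothetical image
              restartPolicy: OnFailure      # re-run the pod only if it fails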

    Possible use cases

    • Automotive: deep learning applications for autonomous cars generate large volumes of data, collected on geographically spread edge nodes. Jobs could be implemented to periodically upload this data from edge gateways to a central data center. Data collected over several locations could thus be periodically centralised, which eases model refinement and transfer learning.

    HostPath Volumes

    Kubernetes makes it possible for pods to access the file system of their host node. Access is enabled through hostPath volumes. A hostPath volume points to a specific file or directory on the node’s file system, and pods using the same path in their hostPath volume see the same files.

    Implementation

    A volume is mounted by specifying its mount location in the pod’s definition manifest. The hostPath volume will point at a file or directory within the node’s file system, and that file or directory will be shared between pods. One can also mount device files with hostPath volumes to create an interface between pods and peripheral devices.
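
    For instance, a minimal pod sketch like the following would expose a camera device file to a processing container (the device path, names, and image are hypothetical; device access typically also requires privileged mode):

    apiVersion: v1
    kind: Pod
    metadata:
      name: image-recognition
    spec:
      containers:
      - name: recogniser
        image: example/recogniser:1.0       # hypothetical image
        securityContext:
          privileged: true                  # usually needed for device access
        volumeMounts:
        - name: camera-device
          mountPath: /dev/video0            # where the device appears in the pod
      volumes:
      - name: camera-device
        hostPath:
          path: /dev/video0                 # device file on the host node

    A second pod mounting the same path would see the same device, which is what enables the multiplexing described in the use case below.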

    Possible use cases

    • Remote sensing: a hostPath volume can increase the performance of remote sensing applications by multiplexing input data from sensing devices. In the case of an image sensor, for instance, this resource could run several processing workloads in parallel on the same data stream. This could mean running two different image recognition models on the same camera stream, and therefore extracting richer meaning from a single input.

    Deployments

    The Deployment resource provides a mechanism for updating applications. Deployments can perform rolling updates on pods with a single command. During an update, the Deployment deletes pods running the old version of the application, and a ReplicaSet subsequently launches pods running the updated version. This process repeats until all pods with the old application have been replaced by pods running the updated one.

    Availability during updates is especially important in mission-critical IoT installations, because downtime translates directly into lost revenue. Deployments enable zero-downtime updates for mission-critical applications.

    Implementation

    To implement rolling updates, one needs to create a Deployment resource through a manifest. The label selector and pod template are specified in the manifest, along with a field stating the chosen deployment strategy.

    To update a Deployment, one only needs to modify the pod template. Kubernetes automatically takes all the steps necessary to change the system state to what is declared in the resource.
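
    A minimal sketch of such a manifest might look as follows (names and image are hypothetical); changing the image tag and re-applying the manifest is enough to trigger a rolling update:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: edge-app
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate               # replace pods gradually, keeping the app available
      selector:
        matchLabels:
          app: edge-app
      template:
        metadata:
          labels:
            app: edge-app
        spec:
          containers:
          - name: edge-app
            image: example/edge-app:1.1   # bump this tag to roll out a new version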

    Possible use cases

    • Maintenance operations: Deployments simplify otherwise tedious and costly software updates on remotely installed IoT devices. Maintenance actions are carried out with simple commands, rather than manual intervention by skilled operators. Rolling updates are executed in a non-disruptive way, with no impact on device availability.

    Conclusion

    The Kubernetes resources discussed above can improve the availability, reliability, maintainability, and performance of IoT installations. However, to take advantage of these capabilities, the right software stack needs to be chosen. This stack could be built upon Ubuntu Core and MicroK8s, a lightweight Kubernetes distribution suitable for IoT.

    07 October, 2019 04:31PM

    hackergotchi for Tails

    Tails

    Launching our donation campaign for 2020

    The mission of Tails is to empower people worldwide by giving out an operating system that protects from surveillance and censorship.

    We build liberating technology to put people in control of their digital lives, keeping in mind that the most vulnerable and oppressed people are also the most in need of privacy and security:

    • Journalists and whistleblowers use Tails to denounce the wrongdoings of governments and corporations.
    • Activists use Tails to avoid surveillance and organize their struggles for liberatory social change.
    • Human-rights defenders use Tails to avoid censorship and report human-rights violations.
    • Domestic violence survivors use Tails to escape surveillance in their homes.
    • Privacy-concerned citizens use Tails to avoid online tracking.

    And we give out Tails for free because nobody should have to pay to be safe while using computers.

    However, Tails needs funds to keep up the fight and we know that people who need Tails the most cannot always donate: because they would get in trouble for giving to an anti-surveillance tool or simply because they don't have the money.

    Why supporting Tails is more important than ever

    The 2019 World Press Freedom Index compiled by Reporters Without Borders (RSF) shows how hatred of journalists has degenerated into violence, contributing to an increase in fear. The number of countries regarded as safe, where journalists can work in complete security, continues to decline, while authoritarian regimes continue to tighten their grip on the media.

    According to the 2019 Report by United Nations' Special Rapporteur David Kaye, surveillance of individuals – often journalists, activists, opposition figures, critics and others exercising their right to freedom of expression – thrives because of weak controls on exports and transfers of surveillance technology to repressive governments. This surveillance is known to lead to arbitrary detention, sometimes to torture and possibly to extrajudicial killings.

    These technologies include targeted malware, online interception of network communications, and deep packet inspection; Tails is one of the best tools to protect against all of them while reducing the risk of dangerous mistakes.

    A legal framework to regulate this surveillance industry, as recommended by David Kaye, might be useful for the future. But digital freedom tools like Tails are more needed than ever right now, in an act of empowerment and self-defense.

    You are our best guarantee

    We often hear complaints about software projects that are meant to fight surveillance, like Tor and Tails, getting funds from the US government. We share this concern and we will never be at ease as long as the well-being of our project depends on such funding.

    This is why it's so important to be sustained by users like you: our independence depends on you.

    We are extremely proud that our primary source of funding in the last years has been donations from passionate people like you. Let's keep it this way!

    In 2017–2019, our money came from:

    • Passionate people like you (36%)
    • Foundations & NGOs (30%), like Mozilla or the Handshake Foundation
    • Entities related to the US government (25%), like the Open Technology Fund or the ISC Project
    • Private companies (9%), like DuckDuckGo or Lush

    New anonymous ways to donate

    Because we know that being able to donate anonymously is very important to some of you, this year we are adding three new ways to donate to Tails anonymously.

    You can send us:

    • Monero:

      4B93hjotwmMQeaZ799D84XTxhqUGqqjfveUTB1GeduAKNeH47WDyn5eb8P2mtScErGbsbL5X3J6vUPAVPrw8j5pMFh6dAwY

    • Zcash:

      zs1ayrt0wckfpkddxqsqv9af6n7vtuspnv3t59w7e9mvykyznmcr3h9vep8emte2lgak8d5s0q65q2

    • Cash by post:

      Weber
      Merseburger Strasse 95
      04177 Leipzig
      Germany

    Please take a minute to donate to Tails today!

    07 October, 2019 03:00PM

    hackergotchi for Freedombone

    Freedombone

    Improving Notifications

    The notifications system within Freedombone has been updated: it can now send alerts for Epicyon DMs or replies, and it will also work with the Matrix app.

    If you have Matrix installed, notifications will appear under a section called System Alerts within Riot. There is also a bug in Synapse: if you close the Server Notices room, it won't be re-created when a new notification happens, and the only current way to fix that is to restart the Matrix daemon or reboot the Freedombone server. So if you are using the Riot app (web or Android), remember not to close that room.
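
    If you do hit the bug, restarting the daemon from a shell should allow the room to be re-created on the next notification. This assumes Synapse runs under the conventional matrix-synapse systemd unit; the unit name on your system may differ:

    # Restart the Matrix homeserver so the Server Notices room can be re-created
    sudo systemctl restart matrix-synapse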

    07 October, 2019 01:53PM

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Add Seafile, Wekan and Zammad to Your corporate IT easily via the App Center

    We are pleased to announce a prominent addition to our App Center. Since the beginning of October, the German Dropbox alternative Seafile, the practical Kanban board Wekan (a Trello competitor), and the German Zendesk competitor Zammad have been available in the App Center. This adds three popular and practical business applications to our Univention Corporate Server (UCS) offering, which you can add to your company’s IT with a simple click.

    The Three Apps at a Glance:

    Seafile Professional Server – Save, Synchronize and Share Data

    Share data with your colleagues in just a few clicks and edit it together – whether in centralized or decentralized teams. Seafile also offers a file-based wiki that simplifies and structures the exchange of information within workgroups. Seafile Professional can be operated on your own systems and can be used for free by up to three users.

    Seafile in the App Center Catalog

     

    Wekan – Manage Tasks Clearly

    Wekan is a web-based Kanban board solution comparable to Trello. The main difference is that you can operate Wekan in your own infrastructure, while the Trello boards are in the cloud of the provider Atlassian. The concept is of course the same. The Univention app consists of the official Wekan Docker image. The connection to the UCS Identity Management is already set up and users can be conveniently activated for Wekan by the administrator in the UCS Management System.

    Wekan in the App Center Catalog

     

    Zammad – Modern Helpdesk and Support System

    What’s special: thanks to the intuitive user interface, the solution is easy to use and requires no expensive training. A modern dashboard informs you and your team in real time about the current situation, showing existing tickets, their processing status, and the latest actions. An integrated full-text search lets you scan even large databases in seconds for the information you need. Customers can also view the status of their requests at any time. Zammad is open source software and can be operated on your own systems.

    Zammad in the App Center Catalog

    The post Add Seafile, Wekan and Zammad to Your corporate IT easily via the App Center appeared first on Univention.

    07 October, 2019 11:15AM by Philip Seufert

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky 5.9

    SparkyLinux 5.9 “Nibiru” is out. This is a quarterly update of live/install media of the stable line, which is based on Debian 10 “Buster”.

    The base system has been upgraded from the Debian stable repos as of October 4, 2019.
    It runs on Linux kernel 4.19.67 LTS.
    As usual, the new iso/img images also provide small bug fixes and improvements.

    The Sparky project page and forums got new skins; no big changes to the colours, but they are much more mobile-friendly now.

    Nemomen finished translating the Sparky tools into Hungarian (thanks a lot!), but many of the translations are still waiting to be added to packages.

    Note that our forums have been moved to a subdomain: https://forum.sparkylinux.org

    System reinstallation is not required; if you have Sparky 5.8 installed, simply make a full system upgrade:
    sudo apt update
    sudo apt full-upgrade

    or via the System Upgrade tool.

    Sparky 5 is available in the following flavors:
    – amd64 & i686: LXQt, Xfce, MinimalGUI (Openbox) & MinimalCLI (text mode)
    – armhf: Openbox & CLI (text mode)

    New stable iso/img images can be downloaded from the download/stable page.

    07 October, 2019 10:19AM by pavroo

    October 06, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubucon Europe 2019: Sintra Parks and Palaces Cultural Visit Program

    Welcome to Sintra at UbuconEU19 (Pre Ubucon activities)

    If you registered for any or all of the visits, please check the details below:

    Before reading further:

    • some visits were changed, edited, or removed; this is the final plan;
    • When you arrive, you should always look for someone (Jaime Pereira) holding an Ubucon Europe 2019 poster;
    • You will receive an ID at the beginning of each visit; at the end, don’t forget to return it to Jaime;
    • Water, some food and comfortable shoes are almost mandatory;
    • Any issue please contact Jaime:

    October 7th – National Palace of Sintra
    The meeting spot is at 9:00 AM; we will be waiting for you at the main door of the National Palace, also known as the “Palace of the Village” (above the staircase).
    https://www.openstreetmap.org/#map=19/38.79771/-9.39086

    October 7th – Quinta da Regaleira
    The meeting spot is at 2:45 PM; we will be waiting for you at the main gate of “Quinta da Regaleira”.
    https://www.openstreetmap.org/#map=19/38.79643/-9.39613

    October 8th – Park and National Palace of Pena
    The meeting spot is at 9:00 AM; we will take bus 434 in front of Sintra train station.
    https://www.openstreetmap.org/#map=19/38.79987/-9.38512

    October 8th – Countess of Edla Chalet
    After visiting the “Pena Palace” and continuing through the park, we will visit the “Countess of Edla’s Chalet”.
    If you only registered to visit the Chalet:
    The meeting spot is at 11:30 AM in front of the chalet.
    https://www.openstreetmap.org/#map=19/38.78513/-9.39918

    October 9th – Monserrate Park and Palace
    The meeting spot is at 9:15 AM; we will take bus 435 in front of Sintra train station.
    https://www.openstreetmap.org/#map=19/38.79987/-9.38512

    Thank you and see you there.

    06 October, 2019 01:54PM

    Ubucon Europe 2019: Ubucon Europe is around the corner!

    Ubucon Europe is already next week and we want to ask you to register for our events!

    If you are still undecided about coming, have a look at our schedule: we have a ton of cool talks and workshops covering various topics, from community management to security to developing applications on UBports.

    Besides the talks and workshops, we also have social activities in the evenings of the conference days. Make sure to register for the social events if you want to get a taste of Portugal, get to know each other better, and discuss those nifty patches coming in for your projects. You can view more specific information for the various social events here:

    Register as soon as possible to pay a cheaper price and have a place guaranteed at the venues that require registration.

    See you all very soon!

    06 October, 2019 11:20AM

    October 05, 2019

    hackergotchi for SparkyLinux

    SparkyLinux

    Payment day coming (2019)

    Donate Like every year Sparky needs YOUR help now!

    As you know, SparkyLinux is a non-profit project, so it does not earn money.
    And probably some of you know that we have to pay the bills for the hosting server (VPS), domains, power (electricity), broadband (internet connection), etc., from our personal, home budget.

    The time to pay for our server is coming quickly again, so help us by sending a donation now!

    This year we need a little more than last year: 1500 PLN for a newer and faster VPS, which has to be paid by November 9, 2019. Because the present one is often overloaded, it has to be replaced with a newer and slightly faster one. I have already made some improvements to make it work faster, but a new one is required anyway. There is also a new skin on the project and forum pages to make them much more mobile-friendly.

    And as every month, we also need 500 PLN for other bills to cover most of our needs.

    So all together, this month, we need your donations for about 2000 PLN (~500 EUR / ~540 USD), please.

    We also asked our communities at Linuxiarze.pl and ArchiveOS.org for donations. Our virtual server hosts a few web pages, all IT / Open Source / Linux related: SparkyLinux.org, Linuxiarze.pl, ArchiveOS.org and SoftMania.org.

    So please donate now to keep Sparky alive, any donation will be very helpful.

    You are the best community in the Universe so I really believe we CAN do it!

    Visit the donation page to find how to send out money.
    Aneta & Paweł

     

    05 October, 2019 09:14PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubucon Europe 2019: We welcome a new Gold sponsor – Ângulo Sólido!

    Ângulo Sólido is a Portuguese IT consulting company focusing on the deployment and maintenance of Linux-based IT systems.

    Some of the services that they offer are:

    • Professional Systems Administration.
    • Desktops and VDI on Linux.
    • Networking and Internet access.
    • Web Development.

    If you appreciate well-planned IT systems with low failure rates and a dedicated partner to help you with your needs, make sure to check out their website.

    05 October, 2019 07:13PM

    hackergotchi for Xanadu developers

    Xanadu developers

    List of sites for downloading Wii games

    “Wii is a video game console produced by Nintendo, released on November 19, 2006 in North America and on December 8 of the same year in Europe. Belonging to the seventh generation of consoles, it is the direct successor of the Nintendo GameCube and competed with Microsoft’s Xbox 360 and Sony’s PlayStation 3. Nintendo stated that Wii was aimed at a broader audience than the other two consoles. From its debut, the console outsold its competitors and, in December 2009, it broke the record as the best-selling console in a single month in the United States.

    The console’s most distinctive feature is its wireless controller, the Wii Remote, which can be used as a handheld pointing device and can detect movement in three dimensions. Another of its peculiarities was the WiiConnect24 service, which allowed it to receive messages and updates over the Internet while in standby mode. Additionally, the console can be synchronised with the portable Nintendo DS, which lets the Wii use the Nintendo DS touchscreen as an alternative controller.”

    After that bit of history, here is a small list of the best sites for downloading the ROMs:

    If you find a broken or fraudulent link, or want to suggest a site, don’t hesitate to mention it in the comments. Additionally, links that are unavailable or contain malware are temporarily struck through, and if the problem is not fixed they will be removed in the next update of this post.

    Cheers…

    05 October, 2019 06:25PM by sinfallas

    October 04, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Blog: Robotics security: What is SROS 2?

    We at Canonical have been hard at work on the security features of version 2 of the Robot Operating System (ROS 2). However, if we lift our collective heads up out of the weeds it’s easy to see folks completely misunderstanding how security works today in ROS 2. We’ve written some design articles to help distill all the moving pieces into something comprehensible, but I wanted to do the same here in a slightly less formal way.

    ROS 2 uses DDS, an existing protocol with its own specs. It’s out of scope to explain DDS, but there are two important points to make for the purpose of this post:

    1. DDS is just a spec; there are various DDS vendors that create both open source and commercial DDS implementations.
    2. DDS supports various security features by way of its DDS-Security spec.

    Both of those specs are just as much fun to read as any other spec, so let’s paint a simplified picture of how DDS-Security works.

    What is DDS-Security?

    DDS-Security is, again, just a spec, and an optional one at that (not all DDS implementations support it). It expands upon DDS by defining a set of plugins with specific purposes and APIs, as well as defining a set of built-in plugin implementations that must be supported by a given DDS implementation in order to be considered in compliance with the spec. It defines five plugins:

    1. Authentication: Verify the identity of a given domain participant
    2. Access control: Enforce restrictions on the DDS-related operations that can be performed by a participant
    3. Cryptography: Handle all required encryption/signing/hashing
    4. Logging: Provide ability to audit DDS-Security-related events
    5. Data tagging: Provide the ability to add tags to data samples

    ROS 2’s security features only utilize the first three of these plugins. Why? Because those last two aren’t required in order to be compliant with the spec, and thus not all DDS implementations support them. We need to cater to the lowest common denominator, here. Let’s talk a little more about those first three plugins.

    Authentication

    The Authentication plugin is central to the entire plugin architecture defined by DDS-Security. It provides the concept of a confirmed identity without which further enforcement would be impossible (e.g. it would be awfully hard to make sure a given ROS node could only access specific topics if it was impossible to securely determine which node it was).

    The builtin plugin (which is what ROS 2 uses) uses the proven Public Key Infrastructure (PKI). It requires a public and private key per domain participant, as well as an x.509 certificate that binds the participant’s public key to a specific name. Each x.509 certificate must be signed by (or have a signature chain to) a specific Certificate Authority (CA) that the plugin is configured to trust.

    Access control

    The Access control plugin deals with defining and enforcing restrictions on the DDS-related capabilities of a given domain participant. It’s not apparmor or cgroups, it’s really about limiting a particular participant to a specific DDS domain, or only allowing the participant to read from or write to specific DDS topics, etc.

    The builtin plugin (which is again what ROS 2 uses) also uses PKI. It requires two files per domain participant:

    • Governance file: A signed XML document specifying how the domain should be secured
    • Permissions file: A signed XML document containing the permissions of the domain participant, bound to the name of the participant as defined by the authentication plugin

    Both of these files must be signed by a CA which the plugin is configured to trust. This may be the same CA as the Authentication plugin trusts, but that isn’t required.

    Cryptographic

    The Cryptographic plugin is where all the cryptography-related operations are handled: encryption, decryption, signing, hashing, etc. Both the Authentication and Access control plugins utilize this plugin to verify signatures, etc. This is also where the functionality to encrypt DDS topic communication resides.

    So what is SROS 2? It’s ROS 2’s integration with DDS-Security

    SROS 2 stands for “Secure Robot Operating System 2.” Understandably, this tends to make folks assume it’s some sort of ROS 2 fork that is somehow secure, and that’s not actually the case. The name is historical: SROS was an effort back in ROS 1 to lock it down, and it was essentially a fork. In ROS 2, the term “SROS 2” is given to the collective set of features and tools in various parts of ROS 2 that are used to enable integration with DDS-Security. So what is that “set of features and tools”?

    ROS client library (RCL)

    Most of the user-facing runtime support for SROS 2 is contained within the ROS Client Library, a core component of ROS 2. It’s responsible for coordinating the enablement of DDS-Security for each DDS implementation that supports it. It supports three main features:

    • Security files for each domain participant
    • Permissive and strict enforcement options
    • A master on/off switch for all security features
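
    In the releases current at the time of writing, these map onto environment variables read by RCL; the variable names below reflect that era and may change in later releases, so treat this as a sketch:

    # Point RCL at the directory tree holding each participant's security files
    export ROS_SECURITY_ROOT_DIRECTORY=~/sros2_keystore
    # Choose between permissive and strict enforcement
    export ROS_SECURITY_STRATEGY=Enforce
    # The master on/off switch for all security features
    export ROS_SECURITY_ENABLE=true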

    For more details on each of these please read the ROS 2 DDS-Security article.

    SROS 2 CLI

    Configuring a ROS 2 system to be secure in RCL involves a lot of technology that may be unfamiliar to the average roboticist (PKI, DDS governance and permissions files and their syntax, etc.). The SROS2 CLI includes the ros2 security tool to help set up security in a way that RCL can use. It includes the following features:

    • Create CAs for the Authentication and Access control plugins
    • Create a directory tree containing all security files (keypairs, governance and permissions files, etc.)
    • Create new identity for a given node instance, generating a keypair and signing its x.509 certificate
    • Create a governance file that will encrypt all DDS traffic by default
    • Support specifying permissions in familiar ROS terms which are then automatically converted into low-level DDS permissions
    • Support automatically discovering required permissions from a running ROS system
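
    As a rough sketch of that workflow (the sub-command names reflect the tooling at the time of writing, and the keystore path, node name, and policy file are hypothetical):

    # Create a keystore containing the CAs and a default governance file
    ros2 security create_keystore ~/sros2_keystore
    # Generate a keypair and a signed certificate for a node identity
    ros2 security create_key ~/sros2_keystore /talker
    # Generate a signed permissions file for that identity from a ROS-level policy
    ros2 security create_permission ~/sros2_keystore /talker policy.xml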

    Conclusion

    I hope that helps clear up any misunderstandings you’ve had about what SROS 2 is and how it works today. We’re always here to help if you have any questions!

    This article originally appeared on Kyle Fazzari’s blog.

    04 October, 2019 10:28PM

    hackergotchi for VyOS

    VyOS

    Software and Support Subscriptions Update

    Finally, we have finished updating our website (there’s still ongoing work to improve it, though, and your suggestions are welcome!), and it’s time to update the pricing for our services and subscriptions and also tell you a bit more about both.

    04 October, 2019 02:26PM by Daniil Baturin (daniil@sentrium.io)

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    The Decision has been made: UCS 5.0 is coming!

    While we were planning the upcoming UCS development stage, we decided to start working on the next major version: UCS 5.0 is planned for next year. In this article I would like to let you take a look behind the scenes and share some of our plans with you.

    It’s been almost 5 years since we released UCS 4.0. During this time, UCS has evolved a lot. At the same time, we’ve continued to maintain the old version’s features. While most of them are popular with our users, others are not. There are also some things we would do differently if we had to do them again. By jumping to the next major version, we would like to get rid of some relics and implement several new features at the same time. We’re still at the very beginning, so not all decisions are final yet – but true to the motto “be open” I would like to share some of our ideas and plans in this blog post.

    Changing to Debian 10 “Buster”

    The new release will see an update of the base system: we will upgrade to Debian 10 (codename “Buster”), released in July 2019. In addition to all the upgraded packages, we will continue to reduce the differences between Debian and UCS in the main distribution. For example, Debian now has working UEFI Secure Boot (partly thanks to our support), so UCS no longer needs to make any adjustments.

    Migration to Python 3

    Most of our software at Univention is developed in the Python scripting language. UCS 4.x uses the Python 2 runtime environment, although Python 3 has been around for quite some time. Python 3 is going to be the standard for implementations in UCS 5.0, so that we and our partners can benefit from the possibilities of the new version. You can see the first steps of this transition in UCS 4.4, as some of the packages have already been converted. Integrations or projects that make use of Python-based UCS interfaces, such as UDM hooks or listener modules, should be checked for Python 3 compatibility before the release of UCS 5.0.
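
    A minimal first pass at such a check could use the standard Python tooling, assuming a hypothetical listener module in listener.py:

    # Compile under Python 3 to catch Python-2-only syntax
    python3 -m py_compile listener.py
    # Print a diff of the changes the automatic converter would suggest
    2to3 listener.py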




    64-bit only

    For some years now, we haven’t been offering 32-bit installation images for UCS (“i386 packages”), although we still provide updates for UCS 4.x. With UCS 5.0, this support is coming to an end. To migrate 32-bit systems in your UCS domain, you should replace them one by one with new 64-bit systems; administrators should complete this transition before upgrading to UCS 5.0. For all 64-bit environments, we will continue to support the upgrade of existing installations.

    Bye-bye, unused Features!

    Some of the UCS features are hardly used or not used at all. So we decided to take a closer look and only keep those features we definitely want to support in UCS 5.0. We’ve already made up our mind about those two features – they need to go:

    • Access to EC2 instances via UVMM: UCS 4.x includes an extension for the UCS Virtual Machine Manager (UVMM) to manage instances in Amazon AWS-compliant IaaS environments. We will remove this integration.
    • Support for NT-compatible Domains: UCS 4.x contains integrations, packages, and scripts for Windows NT domains in combination with Samba. We will not adopt these for UCS 5.

    What is really used?

    We will certainly remove more things from our distribution, because we believe we shouldn’t be spending our time maintaining unused features. So, can we please have your opinion and some answers to the following question:

    Does a server system like UCS really need the KDE desktop environment?

    Our product managers and developers appreciate your feedback. Thanks!

    New Features

    The new major release will not just see fundamental changes, but will also contain innovative new features for the management system. Parts of these are going to be published for UCS 4.4 as well; stay tuned for upgraded apps and errata updates. We will report back in future blog posts.

    Outlook and Roadmap

    We’ve only just started working on UCS 5.0, so we can’t really announce a release date yet. One thing is for sure: it won’t happen before the next Univention Summit (January 23 and 24 2020, Bremen). Until then, we’re going to publish lots of exciting new features for UCS 4.4 and, of course, more announcements for UCS 5.0.

    The post The Decision has been made: UCS 5.0 is coming! appeared first on Univention.

    04 October, 2019 02:06PM by Ingo Steuwer

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Blog: The smart dump plugin

    As you probably already know, snapcraft supports a range of plugins, designed to aid developers in making their snaps in an easier, faster, more transparent fashion. Plugins work with different programming languages and build tools, like Python, Java, Rust, Cmake, and others. By making complex things simpler, they can accelerate your projects.

    One of the available plugins is the dump plugin, which “just dumps” contents from a specified source into your snap. Sounds quite simple, but this plugin can be extremely useful when you want to quickly package and test your application. Let’s see.

    The dump plugin supports numerous formats

    A typical use case where the dump plugin brings value is for developers who already have created and packaged code in the past. For instance, if you have already built your application as a Debian package, you may be dismayed by the notion of having to start all over again when building snaps. But this does not have to be so.

    You can specify existing files available for different distributions as the source in the plugin declaration in the snapcraft.yaml file, and the contents will be automatically unpacked for the particular part. You may want to do this if you require a particular library that is not available in the distribution repository archives, or you need an old library that is no longer supported – a practical requirement for legacy applications.

    Moreover, you may have a standalone application that bundles all its dependencies inside a single archive, and you just want to see whether it will work when packaged as a snap. Here, the dump plugin can be quite useful, as it lets you create and test a snap within minutes. It’s not only Debian packages, though. The dump plugin works with a range of sources, including rpms as well as zip and tar archives!

    Practical example

    A good test case would be the old WYSIWYG HTML/CSS editor called KompoZer. Active development of this application ceased around 2008, and it was available in the Ubuntu archives until Precise Pangolin (Ubuntu 12.04). Users who still want to install this tool have to manually satisfy multiple dependencies and then install several Debian packages from Launchpad. These packages could conflict with the libraries on their systems.

    The alternative is to package KompoZer as a snap. Without going into full details on how to package KompoZer, the dump plugin lets us quickly grab and extract the archives. In the snapcraft.yaml:

    • Define a part (e.g. kompozer-data).
    • Specify source for this particular part.
    • Specify any stage packages that this part will require – the runtime libraries that the application depends upon (typically the long list of dependencies that you would have to manually satisfy).
    • Provide any pull or build overrides – in some cases, applications may be built with hardcoded paths that would require breaking out of the snap confinement. You can manually modify these, as we will see in a moment.

    Here’s a code sample:

    parts:
      kompozer-main:
        plugin: dump
        source: https://launchpad.net/ubuntu/+archive/primary/+files/kompozer_0.8~b3.dfsg.1-0.1ubuntu2_amd64.deb
        stage-packages:
          - libatk1.0-0
          - libc6
          - libcairo2
          - libfontconfig1

      kompozer-data:
        plugin: dump
        source: https://launchpad.net/ubuntu/+archive/primary/+files/kompozer-data_0.8~b3.dfsg.1-0.1ubuntu2_all.deb
        override-pull: |
          snapcraftctl pull
          ln -sf ../../../../etc/kompozer/profile usr/share/kompozer/defaults/profile
          ln -sf ../../../../etc/kompozer/pref usr/share/kompozer/defaults/syspref

    What do we have here?

    First, we have the kompozer-main part. We extract the contents with the dump plugin, and then list all the different stage packages. If the application depends on libraries that are not available in the distribution archives, you can specify them too as separate parts, and grab them using the dump plugin.

    For the second part, kompozer-data, we have a slightly different declaration. Originally, this package came with two hardcoded symbolic links pointing to /etc, which is not possible in a strictly confined snap.

    Failed to copy '/root/parts/kompozer-data/build/usr/share/kompozer/defaults/syspref': it's a symlink pointing outside the snap.

    So we override the pull process and manually re-link these – the override declaration is just a list of shell commands.

        source: https://launchpad.net/ubuntu/+archive/primary/+files/kompozer-data_0.8~b3.dfsg.1-0.1ubuntu2_all.deb
        override-pull: |
          snapcraftctl pull
          ln -sf ../../../../etc/kompozer/profile usr/share/kompozer/defaults/profile
          ln -sf ../../../../etc/kompozer/pref usr/share/kompozer/defaults/syspref

    There could be some trial and error during the snap build process, and if you need help with how to go through these more quickly, please consult our tutorial on making development faster and then the follow-up guide with additional tips and tricks on this topic.

    Architectures

    Furthermore, you can also specify different sources to match the system architecture, like amd64 and i386, which means a single snap will work for users on these different platforms. This is quite handy, especially for legacy software.

        source:
          - on amd64: https://link/library_amd64.deb
          - on i386: https://link/library_i386.deb

    Conclusion

    The dump plugin may look like a simple helper tool, but it gives developers quite a bit of flexibility in how they package and test their software. If you already have compiled code, you can use it for quick & dirty tests, to assess compatibility, to retrieve libraries and legacy components that are not available in standard software channels, or even build custom applications that may contain data samples, tutorials or similar.

    As always, we value feedback and comments, so we can make snapcraft even more extensible and useful to developers. If you’d like to share your ideas or suggestions, please join our forum for a discussion.

    Photo by Christopher Burns on Unsplash.

    04 October, 2019 09:10AM

    hackergotchi for Maemo developers

    Maemo developers

    Bussator: implementing webmentions as comments

    Recently I've grown an interest in the indieweb: as big corporations are trying to dictate the way we live our digital lives, I'm feeling the need to take a break from at least some of them and to somehow get more control over the technologies I use.

    Some projects have been born which are very helpful with that (one above all: NextCloud), but there are also many older technologies which enable us to live the internet as a free distributed network with no owners: I'm referring here to protocols such as HTTP, IMAP, RSS, which I perceive to be under threat of being pushed aside in favor of newer, more convenient, but also more oppressive solutions.

    Anyway. The indieweb community is promoting the empowerment of users by teaching them how to regain control of their online presence: this pivots around having one's own domain and using self-hosted or federated solutions as much as possible.

    One of the lesser known technologies (yet widely used in the indieweb community) is webmentions: in simple terms, it's a way to reply to other people's blog posts by writing a reply in your own blog and having it shown on the original article you are replying to as well. The protocol behind this feature is a recommendation approved by the W3C, and it's actually one of the simplest protocols to implement. So, why not give it a try?
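
    At its core, the sending side simply POSTs two form-encoded URLs to the webmention endpoint advertised by the target page; here is a sketch with hypothetical URLs:

    # Tell the original post that my reply links to it
    curl -i -d source=https://myblog.example/my-reply \
         -d target=https://yourblog.example/original-post \
         https://yourblog.example/webmention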

    I already added support for comments in my blog (statically generated with Nikola) by deploying Isso, a self-hosted commenting system which can even run as a FastCGI application (hence, it can be deployed on shared hosting with no support for long-running processes) — so I was looking for a solution to somehow convert webmentions into comments, in order not to have to deal with two different commenting systems.

    As expected, there was no ready solution for this; so I sat down and hacked up Bussator, a WSGI application which implements a webmention receiver and publishes the reply posts as Isso comments. The project is extensible, and Isso is only one of the possible commenting systems; sure, at the moment it's indeed the only one available, but there's no reason why a plugin for Staticman, Commento, Remark or others couldn't be written. I'll happily accept merge requests, don't be shy — or I can write it myself, if you convince me to (a nice Lego box would make me do anything).

    04 October, 2019 08:36AM by Alberto Mardegan (mardy@users.sourceforge.net)

    October 03, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubucon Europe 2019: Ubucon Europe welcomes a new Gold sponsor – Libretrend!

    We warmly welcome Libretrend, the Portuguese company behind the Librebox, a thin client that has the Intel Management Engine removed and boots with coreboot. All the hardware has been selected to be as powerful and blob-free as possible.

    You can visit their booth at Ubucon to have a look at the hardware that they are selling, with some surprises to be displayed there as well as on stage during the event.

    03 October, 2019 05:20PM

    Ubuntu Blog: ROSCon Japan 2019!

    ROSCon Japan 2019 was a resounding success. We took in the keynote speech from Ryan Gariepy, Co-founder and CTO of Clearpath Robotics. We demoed the first iteration of a Robotics arm from Niryo. Our own Ted Kern gave a lightning talk on type-checked Python in ROS2, and we spoke to lots of individuals in the Japanese robotics community. Let’s talk about it.

    Keynote Speech

    The keynote speech was very interesting. Ryan discussed the growing challenge of building autonomous vehicles and robots safely. He gave everyone eight steps to creating safe and human-friendly robots, proposed intermediary safety standards for people working in the newest areas of robotics, and discussed how engineers can proactively design for safety. Statistics and risk assessments were given as reference points for justification and as useful examples of documentation. Effectively, he said that when you design robots that work alongside humans, they should be undeniably safer than a human doing the same job. You can hear it yourself in full on the ROSCon JP YouTube channel.

    Our Robotic Arm Snap Demo

    We assembled the robotic arm we were demoing from 3D-printed pieces around a Raspberry Pi (and Dynamixel motors). It was ported to run with Ubuntu Core and snaps, and it did its job of moving dominoes as instructed. What we showcased was how easy and straightforward it is to update and fix your robot with snaps. It was a great first iteration and we got the message across. You can look forward to more demos in the future: you’ll be able to catch our second iteration at the next ROSCon in Macau, where we will be sporting something more interactive and Ubuntufull.

    ROS2 Lightning Talk

    Later in the day Ted, a software developer for Ubuntu Robotics, gave his lightning talk titled “ROS 2: Type-Checked Python, Static Typing in the Test Toolchain.” It only lasted a few minutes but sparked conversation. We’ve discussed on the blog before how to use the `ament_mypy` package that we have contributed. Ted focused on motivating the adoption of type hinting as a good Python practice, and on the need to be aware of “optional” language features and external tools now that ROS 2 supports languages beyond ROS 1’s Python 2 and C++.

    The ROSCon Japan Community

    Beyond all of these things, we were there to talk to the community. We wanted to know about their projects and what they are working on with ROS. We’re very thankful that our colleagues in Japan were there to help with interpretation, because it meant we got to speak to almost everyone at the conference. Most were already users of Ubuntu and ROS/ROS 2 and were more interested in our roadmaps and plans for ROS 2 development. But there were a surprising number of users who didn’t know about our blog, Twitter, or website content, which can be found in Japanese now, by the way. We told them to take a look. Each platform talks about our role in ROS development and in maintaining the best robotics security out there.

    If you’ve read this far and you’re interested in what’s going on but don’t know what ROSCon Japan is, let me tell you. ROSCon JP, the ROS convention in Japan, is what they call a “developer meeting” for people in the ROS community. It was a great opportunity for ROS developers across the country, from beginners to experts, to learn about the latest topics and network with the broader ROS community. You can read more about the whole event on their website, ROSCon JP. This event was in Tokyo, but coming up soon is the international ROS developer conference, ROSCon 2019. This one will be in Macau from October 31 to November 1.

    03 October, 2019 02:01PM

    Ubuntu Podcast from the UK LoCo: S12E26 – Interstate ’76

    This week we’ve been tourists in our home town, we review the Dell Precision 3540 Developer Edition laptop, bring you some command line love and go over all your wonderful feedback.

    It’s Season 12 Episode 26 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

    03 October, 2019 02:00PM

    Stephen Michael Kellat: Situational Clarification

    Yes, that was me reappearing after an absence of just over five years on IRC. Yes, I did in fact utter an old-time TV catch phrase that should be recognizable.

    No, I’m certainly not answering the phone now. That is to say, I am not answering the phone on behalf of others now. It is not as if it is a calamity, but a change had to come about. Goodbyes were said, though they were quieter than what others might engage in.

    Change begins. It seems a bit late to contribute to the Eoan cycle. As to the F cycle, I am back in a position I haven’t been in for a few years. This could be interesting, I suppose.

    I do have some film editing I eventually have to get finished too, I suppose…

    03 October, 2019 03:47AM