May 25, 2018

Cumulus Linux

NetDevOpEd: Disaggregation is key to software vulnerability management

One critical decision that executives need to make when assessing their data center architecture is their approach to software vulnerability management across all network components. Vulnerability management primarily revolves around selecting an efficient and modern software management strategy. There are several ways to execute on a software management strategy, and I believe disaggregation is a critical first step in doing it right.

In this post, I want to take a minute to first share my thoughts on the vulnerability management trends I’ve noticed. I will argue that a) you need to prioritize the network in how you manage vulnerabilities and b) disaggregation is the only way to do it properly. We’ll also take a look at the reasons why I think we never had the right framework to manage software delivery, making vulnerability management a challenge on platforms that are closed in nature.

Operations at the core of vulnerability management

Three weeks ago, I joined 40,000 security professionals in San Francisco to attend the biggest gathering of security-conscious professionals — RSA Conference. While several presentations and moments from the event stood out, one that caught my eye was a presentation that discussed the industry’s challenges in managing security vulnerabilities (I highly recommend checking out the entire presentation to get a better view of issues in the industry). The speaker defined vulnerability management as a maintenance and management task that begins and ends with operations. So how can a business ensure state-of-the-art operations to battle vulnerabilities?

I have been advocating infrastructure security in the world of networking for a while because it has always been an afterthought, reserved for topics like DDoS, firewalls and network admission, and rarely for securing the network infrastructure itself. When it comes to infrastructure security, the key to success is selecting an architecture that aims to simplify operations by leveraging state-of-the-art methods for software management and automation. And that’s where the network comes in.

Recently, we have seen a staggering 600% increase in attacks targeting routers, switches and IoT devices. In fact, last month, the Department of Homeland Security and the FBI issued an official alert about state-sponsored cyber actors targeting network infrastructure devices. It’s time for executives to start focusing on the security of their networks.

Servers and disaggregation: what happened?

The server world is a parallel universe to networking, but it moves much faster. Historically, servers quickly changed gears towards offering more choice in how hardware and software are bundled; the server world took the step to disaggregate and simplify its software a long time ago. Instead of disaggregating the platform, re-using existing innovations, and adopting solutions from the server and application world into their platform, legacy hardware vendors decided to build their own operating system.

Refusing to move to modern platforms, they left themselves and their consumers in a tricky situation, with a big, bloated set of problems — problems that are deeply embedded in the delivery mechanism of their application: the operating system. In that way, legacy networking platforms became the lowest common denominator among servers, storage, security, voice, video and all sorts of technologies that an enterprise adopts around networking.

Since everything connects to the network and all facets of IT move together, having a legacy network architecture present in an enterprise is a subtle constraint that forces operations to build everything around the limitations and pace of the lowest common denominator — the network.

Dependency on proprietary patches is a dangerous game

Security vulnerabilities are usually software bugs, and it is the nature of software to have bugs. But on those occasions when we have a batch announcement of 34 vulnerabilities that need to be patched right away, C-level executives need to reconsider their software management strategy for the platforms they operate.

In my opinion, legacy networking vendors lack innovation in different areas — a lot of development took place in routing and switching protocols, but not as much in software operation and management. Software development and innovation on an operating system built in the nineties will naturally incur some technical debt, and as with any debt, it can accumulate ‘interest’, making it harder to innovate later on. A small example that is relevant to all networking folks is the challenge of managing software, configuration and operational data on legacy platforms.

Why does this matter? Well, let’s say you’re dealing with a scenario like the one previously mentioned: you’ve got 34 vulnerabilities in your network and you need a patch ASAP. With legacy vendors, you’re dealing with a waiting game. You’re completely dependent on your proprietary vendor knowing about the issues, detecting the cause of the issues, fixing the issues and then sending you a new package containing the fix. With absolutely zero visibility into the closed network, there’s not even a way for you to solve the issue on your own. If you’re looking to minimize security vulnerabilities, you need a more immediate solution than what proprietary support offers.

Thanks to the disaggregated nature of Linux networking, you don’t have to worry about putting your network’s security in the hands of someone else. In addition to having the ability to look into the network and fix issues immediately, you are also able to leverage the entire Linux community. The most widely cited benefit of having a community of 50,000 behind you is security. Hundreds, maybe thousands of engineers are looking for a way to remediate the issue. Within hours, a glitch can be found, diagnosed and patched.

Automating software deployment is critical for vulnerability management

Proprietary vendors’ need for their model to rely on vendor lock-in means that customers don’t have the flexibility they need to leverage automation tooling, such as Ansible, Puppet and Chef, as they see fit. If automation is a critical part of securing your infrastructure, it follows that being restricted in how you leverage automation could pose some serious risks. Fortunately, you don’t have to worry about that with disaggregation. The freedom, customization and ability to leverage existing tooling that come with separating software from hardware let you build your automation solution as you see fit — no need to depend on proprietary solutions that don’t cater to your specific needs.

The process of managing software on modern platforms becomes efficient when the operating system in place is built with standard application packaging and automation tooling in mind from the beginning. Using older platforms that never had a chance to incorporate modern architecture innovations makes it difficult to deploy fixes, which is why choosing anything other than an open network poses security risks. If you’re worried that your legacy platform isn’t doing the job (and it probably isn’t), consider these important risk assessment questions:

1. Is your platform running a modular, multi-user operating system with security natively built into it?
2. Are patches frequently released, and how easy is it to install a patch?
3. If the vendor delays issuing patches, are we capable of patching software on our own?
4. Is there a standard package format for these patches? (See the sketch after this list.)
5. Can my automation and security auditing frameworks work with these packages of patches?
6. Upgrading software on my network requires planning and validation — do we have tools to do that?
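
To make questions 2 and 4 concrete: on a disaggregated, Debian-based network OS, a security fix arrives as an ordinary package, so patching looks just like it does on your Linux servers. A minimal sketch of that workflow (the package name is chosen purely for illustration):

$ sudo apt-get update                            # refresh package metadata from the repositories
$ apt list --upgradable                          # see which installed packages have pending fixes
$ sudo apt-get install --only-upgrade openssl    # apply the fix to a single package (name illustrative)

The same packages can be driven by whatever automation or auditing framework you already use, which is exactly what question 5 is probing.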

Legacy networks aren’t built for automation the way open alternatives are. So if you ask yourself these six questions and realize your platform isn’t optimized for automation, you may be seriously risking your infrastructure security.

In conclusion

From my perspective, it’s pretty clear that it’s time to start taking infrastructure security more seriously. And if automation, software management and timely patches are the keys to effective software vulnerability management, then there’s no question that a disaggregated solution is best optimized for network security. Disaggregation and vulnerability management go hand-in-hand — so get on board with open solutions if you want to keep your infrastructure safe.

Still curious about what open versus closed networking looks like? You should definitely check out our networking how-to video series. Watch as our web-scale networking experts show you side-by-side how configuring with a traditional NOS compares to configuring with an open NOS like Cumulus Linux.

The post NetDevOpEd: Disaggregation is key to software vulnerability management appeared first on Cumulus Networks Blog.

25 May, 2018 10:03PM by Ahmed Elbornou


Emmabuntüs Debian Edition

On May 21st 2018, Emma DE2 1.02 makes Linux available to everyone!

On May 21st 2018, the Emmabuntüs Collective is happy to announce the release of the new Emmabuntüs Debian Edition 2 1.02 (32 and 64 bits), based on Debian 9.4 stretch distribution and featuring the XFCE desktop environment. This distribution was originally designed to facilitate the reconditioning of computers donated to humanitarian organizations, starting with the [...]

25 May, 2018 07:36PM by shihtzu

On May 21st 2018, Emma DE2 1.02 puts Linux within everyone's reach!

The Emmabuntüs Collective is happy to announce the release, on May 21st 2018, of the new Emmabuntüs Debian Edition 2 1.02 (32 and 64 bit), based on Debian 9.4 Stretch and XFCE. This distribution was designed to facilitate the reconditioning of computers donated to humanitarian organizations, originally in particular to the Emmaüs communities (hence [...]

25 May, 2018 07:03PM by shihtzu


Univention Corporate Server

Automated Maintenance of Linux Desktop Clients in the UCS Domain with opsi

The well-known open source client management system opsi can deal not only with Microsoft Windows clients but also with Linux. As Univention announced the discontinuation of Univention Corporate Client (UCC), I want to present opsi as an alternative for the fully automated installation, maintenance and inventory of Linux desktop clients in your domain. In addition, opsi can also do the same for complete Linux and UCS systems.

In the following, I will briefly explain how this works.

Fully automatic installation of an Ubuntu client with opsi

The opsi product “ubuntu16-04” allows the fully automatic installation of an Ubuntu client. This includes the installation of the opsi-linux-client-agent, which is responsible for the further maintenance and configuration of the Linux system. Of course, the opsi-linux-client-agent can also be installed on already existing Linux systems later on.

Screenshot of opsi with Ubuntu 16-04

The installation of the opsi package l-ubu-ucs-domjoin can now be requested via the opsi management interface. This uses the ‘Univention Domain Join Assistant’, which has recently been introduced in the article New Domain Join Assistant Allows Foolproof Integration of Ubuntu Clients into UCS Domains.

The opsi product installs it on the Ubuntu system and then invokes the command line version ‘univention-domain-join-cli’. The necessary command line parameters can be set via the opsi management interface.

Screenshot about installation of Domain Join Assistant for UCS in opsi

After a successful join, the product reports ‘success’. The curious among you can see the details of the process in the log window of the opsi management interface.

Screenshot of the log window in opsi's management interface

In case of doubt, I recommend taking a look at the documentation of the Univention Domain Join Assistant at:
https://github.com/univention/univention-domain-join/blob/ubuntu18.04/README.md

As opsi combines the automatic installation of an Ubuntu system and its automated join into the UCS domain, opsi represents an interesting replacement for the discontinued Univention Corporate Client.

In addition, being a general client management tool, opsi can centrally take care of the entire configuration and maintenance of Linux clients, Linux server / UCS systems as well as Windows clients and servers.

The ‘l-ubu-ucs-domjoin’ package can be downloaded at:
https://download.uib.de/opsi4.0/products/contribute/full-package/l-ubu-ucs-domjoin_4.1.0.0-1.opsi

For further information on opsi, you can also visit the Univention App Catalog.

opsi in the Univention App Catalog

The post Automated Maintenance of Linux Desktop Clients in the UCS Domain with opsi appeared first on Univention.

25 May, 2018 01:03PM by Maren Abatielos

May 24, 2018


Ubuntu developers

Ubuntu Podcast from the UK LoCo: S11E12 – Twelve Years a Slave - Ubuntu Podcast

This week we make an Ubuntu Core laptop, discuss whether Linux on the desktop is rubbish, bring you a virtual private love and go over your feedback.

It’s Season 11 Episode 12 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We discuss whether desktop Linux is rubbish and has failed.

  • We share a Virtual Private Lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image credit: Mike Wilson

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

24 May, 2018 02:15PM

LiMux

Munich as a Smart City – the app now also available for Android

The München SmartCity App is now available for download on all Android devices as well. On behalf of the City of Munich, the portal company behind the official city portal muenchen.de and the Münchner Verkehrsgesellschaft (MVG) are developing the existing muenchen.de apps into … Continue reading

The post Munich as a Smart City – the app now also available for Android appeared first on the Münchner IT-Blog.

24 May, 2018 10:32AM by Stefan Döring


Ubuntu developers

Matthew Helmke: Ubuntu Unleashed 2019 and other books presale discount

Starting Thursday, May 24th, the about-to-be-released 2019 edition of my book, Ubuntu Unleashed, will be listed in InformIT’s Summer Coming Soon sale, which runs through May 29th. The discount is 40% off print and 45% off eBooks; no discount code will be required. Here’s the link: InformIT Summer Sale.

24 May, 2018 04:59AM


Qubes

QSB #40: Information leaks due to processor speculative store bypass (XSA-263)

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #40: Information leaks due to processor speculative store bypass (XSA-263). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #40 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-040-2018.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View XSA-263 in the XSA Tracker:

https://www.qubes-os.org/security/xsa/#263



             ---===[ Qubes Security Bulletin #40 ]===---

                             2018-05-24


  Information leaks due to processor speculative store bypass (XSA-263)

Summary
========

On 2018-05-21, the Xen Security Team published Xen Security Advisory
263 (CVE-2018-3639 / XSA-263) [1] with the following description:

| Contemporary high performance processors may use a technique commonly
| known as Memory Disambiguation, whereby speculative execution may
| proceed past unresolved stores.  This opens a speculative sidechannel
| in which loads from an address which have had a recent store can
| observe and operate on the older, stale, value.

Please note that this issue was neither predisclosed nor embargoed.
Consequently, the Qubes Security Team has not had time to analyze it in
advance of issuing this bulletin.

Impact
=======

According to XSA-263, the impact of this issue is as follows:

| An attacker who can locate or create a suitable code gadget in a
| different privilege context may be able to infer the content of
| arbitrary memory accessible to that other privilege context.
| 
| At the time of writing, there are no known vulnerable gadgets in the
| compiled hypervisor code.  Xen has no interfaces which allow JIT code
| to be provided.  Therefore we believe that the hypervisor itself is
| not vulnerable.  Additionally, we do not think there is a viable
| information leak by one Xen guest against another non-cooperating
| guest.
| 
| However, in most configurations, within-guest information leak is
| possible.  Mitigation for this generally depends on guest changes
| (for which you must consult your OS vendor) *and* on hypervisor
| support, provided in this advisory.

In light of this, XSA-263 appears to be less severe than the related
Spectre and Meltdown vulnerabilities we discussed in QSB #37 [2].

Patching
=========

The specific packages that resolve the problems discussed in this
bulletin are as follows:

  For Qubes 3.2:
  - Xen packages, version 4.6.6-41

  For Qubes 4.0:
  - Xen packages, version 4.8.3-8

The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:

  For updates from the stable repository (not immediately available):
  $ sudo qubes-dom0-update

  For updates from the security-testing repository:
  $ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.

In addition, Intel Corporation has announced that microcode updates
will be available soon [3]:

| Variant 3a is mitigated in the same processor microcode updates as
| Variant 4, and Intel has released these updates in beta form to OEM
| system manufacturers and system software vendors. They are being
| readied for production release, and will be delivered to consumers
| and IT Professionals in the coming weeks.

This bulletin will be updated once the Intel microcode updates are
available. No microcode update is necessary for AMD processors.

Credits
========

See the original Xen Security Advisory.

References
===========

[1] https://xenbits.xen.org/xsa/advisory-263.html
[2] https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-037-2018.txt
[3] https://www.intel.com/content/www/us/en/architecture-and-technology/facts-about-side-channel-analysis-and-intel-products.html

--
The Qubes Security Team
https://www.qubes-os.org/security/

24 May, 2018 12:00AM

Partnering with the Freedom of the Press Foundation

We’re pleased to announce that the Freedom of the Press Foundation (FPF) has become a Qubes Partner. We look forward to continuing to work with the FPF on an integrated SecureDrop Workstation based on Qubes OS. For more about what this collaboration entails and our next steps together, please see today’s announcement on the SecureDrop blog.

24 May, 2018 12:00AM

May 23, 2018

Cumulus Linux

Automating Cumulus Linux with Ansible

Automating your network can seem like a daunting task. But the truth is that automating Cumulus Linux with Ansible can be easier than many of the things you’re probably already automating.

In this post, I’ll show you how to get started on your network automation journey using a simple, four-step process:

  1. Pick one small network task to automate
  2. Configure it manually on a small scale
  3. Mimic the manual configuration in Ansible
  4. Expand the automation to additional network devices

To illustrate, I’ll be using the following simple, bare-bones topology based on the Cumulus Reference topology. You can follow along by spinning up your own virtual data center for free using Cumulus in the Cloud.

Pick one network task to automate

The first step is to pick one thing to automate. Just one! The only caveat is that it needs to be something you understand and are comfortable with. Trying to automate a feature you’ve never used is sure to scare you away from automation forever, unless of course you have someone guiding you through the process.

Preferably, pick something that’s quick and simple when done manually. Configuring the OSPF routing protocol between two switches falls into this category. When done manually, it’s literally only three lines of configuration on each device. Let’s start by manually creating an OSPF adjacency between the switches spine01 and leaf01.

First, we’ll issue the following commands on the spine01 switch:

cumulus@spine01:~$ net add ospf router-id 192.168.0.21
cumulus@spine01:~$ net add ospf network 192.168.0.0/16 area 0.0.0.0
cumulus@spine01:~$ net commit

The net add ospf router-id 192.168.0.21 command assigns spine01 the router ID (RID) of 192.168.0.21, which matches its unique management IP. OSPF uniquely identifies each device by its RID, so by setting the RID manually, we can easily identify this switch later on.

The command net add ospf network 192.168.0.0/16 area 0.0.0.0 enables OSPF on the management interface (eth0, which is in the 192.168.0.0/16 subnet), and places it in area 0.0.0.0, which is the OSPF backbone area. To keep things clean and simple, we’ll place everything in the backbone area.

Now let’s do the same thing for leaf01:

cumulus@leaf01:~$ net add ospf router-id 192.168.0.11
cumulus@leaf01:~$ net add ospf network 192.168.0.0/16 area 0.0.0.0
cumulus@leaf01:~$ net commit

If both switches are configured correctly, they should form an adjacency. We can verify this by using the net show command from leaf01:

cumulus@leaf01:~$ net show ospf neighbor
 
Neighbor ID   Pri State  Dead Time Address     Interface         RXmtL RqstL DBsmL
192.168.0.21  1 Full/DR  32.278s 192.168.0.21  eth0:192.168.0.11     0     0     0

The output shows an adjacency with 192.168.0.21 (spine01), which means everything is working as expected! Spine01 and leaf01 are now OSPF neighbors.

It’s fine to validate a manual configuration by issuing a net show command on each device. But when we get around to automating this and adding more switches, we’ll need a way to check everything in one fell swoop using a single command. This is where NetQ comes in.

Validating your configuration with NetQ

NetQ is invaluable for validating any Cumulus Linux configuration, whether manual or automated. NetQ keeps a record of every configuration and state change that occurs on every device.

Let’s go back to the management server and use NetQ to view information on the current OSPF topology:

cumulus@oob-mgmt-server:~$ netq show ospf

cumulus@oob-mgmt-server:~$ netq check ospf
Total Sessions: 2, Failed Sessions: 0

NetQ gives you a real-time view of the OSPF states of leaf01 and spine01. At a glance, you can see that leaf01 and spine01 are both in OSPF area 0.0.0.0, and have a full adjacency. The netq show command gives you the same data you’d get by running a net show command on each switch, but quicker and with a lot less typing!

Now that we’ve manually gotten OSPF up and running between two switches, let’s look at how to automate this process using Ansible.

Automating OSPF using Ansible

We’re going to create an Ansible Playbook to automate the exact configuration we just performed manually. When it comes to automation platforms in general, and particularly Ansible, there are many ways to achieve the same result. I’m going to show you a simple and straightforward way, but understand that it’s not the only way.

The first step is to create the folder structure on the management server to store the Playbook. You can do this with one command:

cumulus@oob-mgmt-server:~$ mkdir -p cumulus-ospf-ansible/roles/ospf/tasks

This will create the following folders:

cumulus-ospf-ansible
cumulus-ospf-ansible/roles
cumulus-ospf-ansible/roles/ospf
cumulus-ospf-ansible/roles/ospf/tasks

Specifying the switches to automate

Next, in the cumulus-ospf-ansible directory, we’ll create the hosts file to indicate which switches we want Ansible to configure. For now, spine01 and leaf01 are the only ones we want to automate. Incidentally, I prefer the nano text editor to edit the hosts file, but you can use a different one if you’d like.

cumulus@oob-mgmt-server:~$ cd cumulus-ospf-ansible
cumulus@oob-mgmt-server:~$ nano hosts

[switches]
spine01 rid=192.168.0.21
leaf01 rid=192.168.0.11

This places both switches into a group called switches. The rid= after each name indicates the unique OSPF router ID for the switch. Ansible will use this value when executing the actual configuration task, which we’ll set up next.
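
Before creating any tasks, you can optionally confirm that Ansible can actually reach both switches over SSH. A quick sanity check, assuming SSH access to the switches is already set up:

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ ansible -i hosts switches -m ping

If both hosts come back with "pong", you're ready to move on to the task itself.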

Creating the task

In the cumulus-ospf-ansible/roles/ospf/tasks folder, create another file named main.yaml:

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ cd roles/ospf/tasks
cumulus@oob-mgmt-server:~/cumulus-ospf-ansible/roles/ospf/tasks$ nano main.yaml

---
- name: Enable OSPF
  nclu:
    commands:
      - add ospf router-id {{ rid }}
      - add ospf network {{ item.prefix }} area {{ item.area }}
    atomic: true
    description: "Enable OSPF"
  loop:
    - { prefix: 192.168.0.0/16, area: 0.0.0.0 }

The task named Enable OSPF uses the Network Command Line Utility (NCLU) module that ships with Ansible 2.3 and later. Let’s walk through this.

Look at the two lines directly under commands:

commands:
- add ospf router-id {{ rid }}
- add ospf network {{ item.prefix }} area {{ item.area }}

These commands look familiar! They’re very similar to the ones we issued manually, but with a few key differences.

First, the net command is missing from the beginning because the NCLU module adds it implicitly.

Second, instead of static values for the router ID, subnet prefix, and OSPF area, the variable names are surrounded by double braces. The rid variable comes from the hosts file, while the other two variables (item.prefix and item.area) come from the main.yaml file itself, under the loop section.

The atomic: true statement flushes anything in the commit buffer on the switch before executing the commands. This ensures that no other pending, manual changes inadvertently get committed when you run the Playbook.

Speaking of the Playbook, we have only one step left before we’re ready to run it!

Creating the play

In the cumulus-ospf-ansible folder, create a file named setup.yaml which will contain the play:

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible/roles/ospf/tasks$ cd ../../..
cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ nano setup.yaml

- hosts: switches
  roles:
  - ospf

This file instructs Ansible to run the configuration directives in the cumulus-ospf-ansible/roles/ospf/tasks/main.yaml file against the devices in the switches group. All that’s left to do now is run the Playbook!
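
If you want to preview what the play would do before touching the switches, Ansible's built-in check mode can be used as a dry run (how faithfully the nclu module reports changes in check mode may vary by Ansible version, so treat this as an optional extra):

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ ansible-playbook -i hosts setup.yaml --check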

Running the Playbook

Now for the moment you’ve been waiting for! Issue the following command to run the Playbook:

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ ansible-playbook -i hosts setup.yaml

PLAY [switches] *******************************************************************************

TASK [Gathering Facts] ************************************************************************
 ok: [spine01]
 ok: [leaf01]

TASK [ospf : Enable OSPF] *********************************************************************
 ok: [spine01] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 ok: [leaf01] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})

PLAY RECAP ************************************************************************************
 leaf01 : ok=2 changed=0 unreachable=0 failed=0
 spine01 : ok=2 changed=0 unreachable=0 failed=0

The last two lines say changed=0 because the automated configuration is identical to what we configured manually. Hence, there’s nothing to change. This is a good indication that the Playbook works as expected, and we can safely add more switches to the automation process.

Expanding the automation

Next, let’s add the rest of the switches and corresponding RIDs to the hosts file:

[switches]
 spine01 rid=192.168.0.21
 leaf01 rid=192.168.0.11
 leaf02 rid=192.168.0.12
 leaf03 rid=192.168.0.13
 leaf04 rid=192.168.0.14
 spine02 rid=192.168.0.22

Now run the playbook again:

cumulus@oob-mgmt-server:~/cumulus-ospf-ansible$ ansible-playbook -i hosts setup.yaml

PLAY [switches] **********************************************************************

TASK [Gathering Facts] ***************************************************************
 ok: [leaf02]
 ok: [leaf04]
 ok: [leaf03]
 ok: [spine01]
 ok: [leaf01]
 ok: [spine02]

TASK [ospf : Enable OSPF] ************************************************************
 ok: [spine01] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 ok: [leaf01] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 changed: [leaf02] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 changed: [leaf04] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 changed: [leaf03] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})
 changed: [spine02] => (item={u'prefix': u'192.168.0.0/16', u'area': u'0.0.0.0'})

PLAY RECAP *****************************************************************************
 leaf01 : ok=2 changed=0 unreachable=0 failed=0
 leaf02 : ok=2 changed=1 unreachable=0 failed=0
 leaf03 : ok=2 changed=1 unreachable=0 failed=0
 leaf04 : ok=2 changed=1 unreachable=0 failed=0
 spine01 : ok=2 changed=0 unreachable=0 failed=0
 spine02 : ok=2 changed=1 unreachable=0 failed=0

All of the switches except leaf01 and spine01 have changed. Let’s use NetQ to validate those changes.

cumulus@oob-mgmt-server:~$ netq check ospf
Total Sessions: 6, Failed Sessions: 0

This shows all the OSPF state changes for each switch. Although the output is a little mixed up, you can see that each switch is listed five times; one time for each adjacency. Based on this, we can tell that the automated configuration worked for all of the switches!

But to get a clearer picture, let’s look just at the changes on leaf01 within the last five minutes. Of course, NetQ can do this as well.

NetQ shows leaf01 becoming fully adjacent with four other switches. Not coincidentally, this is the number of switches whose configurations changed! Because the adjacency with spine01 was made more than five minutes ago and hasn’t changed, it doesn’t show up in the output.

Give it a shot!

Cumulus Linux, NetQ and Ansible work together seamlessly to give you a complete automation solution. Everything you need is already there! Start by automating a simple task you’re comfortable with, and only on a handful of devices. From there, you can move on to more complex tasks, adding more devices as you go.

Although automating something as critical as your network can be intimidating, it’s well worth it. Automation can dramatically reduce the number of accidental configurations, fat-finger mistakes, and unauthorized changes. When you combine network automation with NetQ, you’ll almost never need to log into a switch to manually check its configuration or status. Not only that, with NetQ you get a detailed log of every change that occurs on your network devices – whether it’s a configuration change or a state change such as an interface going down or a route flapping. And when an improper configuration does occur, you can just rerun the appropriate Playbook to put everything back in order.

The post Automating Cumulus Linux with Ansible appeared first on Cumulus Networks Blog.

23 May, 2018 07:59PM by Ben Piper


Ubuntu developers

The Fridge: Call for nominations for the Technical Board

The current 2-year term of the Technical Board is over, and it’s time to elect a new one. For the next two weeks (until 6 June 2018) we are collecting nominations, then our SABDFL will shortlist the candidates and confirm their candidacy with them, and finally the shortlist will be put to a vote by ~ubuntu-dev.

Anyone from the Ubuntu community can nominate someone.

Please send nominations (of yourself or someone else) to Mark Shuttleworth <mark.shuttleworth at ubuntu.com> and CC: the nominee. You can optionally CC: the Technical Board mailing list, but as this is public, you *must* get the agreement of the nominated person before you CC: the list.

The current board can be seen at ~techboard.

Originally posted to the ubuntu-devel-announce mailing list on Wed May 23 18:19:18 UTC 2018 by Walter Lapchynski on behalf of the Ubuntu Community Council.

23 May, 2018 06:54PM

Xubuntu: New Wiki pages for Testers

During the last few weeks of the 18.04 (Bionic Beaver) cycle, we had 2 people drop by in our development channel trying to respond to the call for testers from the Development and QA Teams.

It quickly became apparent to me that I was having to repeat myself in order to make it “basic” enough for someone who had never tested for us to understand what I was trying to put across.

After pointing to the various resources we have, and that other flavours use, it transpired that they both would have preferred something a bit easier to start with.

So I asked them to write it for us all.

Rather than belabour my point here, I’ve asked both of them to write a few words about what they needed and what they have achieved for everyone.

Before they get that chance, I would just like to thank them both for the hours of work they have put in drafting, tweaking and getting the pages into a position where we can tell you all about their existence.

You can see the fruits of their labour at our updated web page for Testers and the new pages we have at the New Tester wiki.

Kev
On behalf of the Xubuntu Development and QA Teams.

“I see the whole idea of OS software and communities helping themselves as a breath of fresh air in an ever more profit obsessed world (yes, I am a cynical old git).

I really wanted to help, but just didn’t think that I had any of the skills required, and the guides always seemed to assume a level of knowledge that I just didn’t have.

So, when I was asked to help write a ‘New Testers’ guide for my beloved Xubuntu I absolutely jumped at the chance, knowing that my ignorance was my greatest asset.

I hope what resulted from our work will help those like me (people who can easily learn but need to be told pretty much everything from the bottom up) to start testing and enjoy the warm, satisfied glow of contributing to their community.
Most of all, I really enjoyed collaborating with some very nice people indeed.”
Leigh Sutherland

“I marvel at how we live in an age in which we can collaborate and share with people all over the world – as such I really like the ideas of free and open source. A long time happy Xubuntu user, I felt the time to be involved, to go from user-only to contributor was long overdue – Xubuntu is a community effort after all. So, when the call for testing came last March, I dove in. At first testing seemed daunting, complicated and very technical. But, with leaps and bounds, and the endless patience and kindness of the Xubuntu-bunch over at Xubuntu-development, I got going. I felt I was at last “paying back”. When flocculant asked if I would help him and Leigh to write some pages to make the information about testing more accessible for users like me, with limited technical skills and knowledge, I really liked the idea. And that started a collaboration I really enjoyed.

It’s my hope that with these pages we’ve been able to get across the information needed by someone like I was when I started -technical newby, noob- to simply get set up to get testing.

It’s also my hope people like you will tell us where and how these pages can be improved, with the aim to make the first forays into testing as gentle and easy as possible. Because without testing we as a community can not make xubuntu as good as we’d want it to be.”
Willem Hobers

23 May, 2018 04:49PM

Benjamin Mako Hill: Natural experiment showing how “wide walls” can support engagement and learning

Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have “wide walls” in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In a new paper, Sayamindu Dasgupta and I attempted to provide an empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.

Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the Scratch programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, Sayamindu was guided by the “wide walls” principle when he designed and implemented the Scratch cloud variables system in 2011-2012.

While designing the system, Sayamindu hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding persistence and shared-ness. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, we saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.

Example of Scratch code that uses a cloud variable to keep track of high-scores among all players of a game.

Although these examples reflected powerful anecdotal evidence, we were also interested in using quantitative data to reflect the causal effect of the system. Understanding the causal effect of a new design in real world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.

Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.

We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not only an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used it—were more likely to use “plain-old” data-structures in their projects as well.

The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect that 33% of projects by a prototypical “average” Scratch user would use data structures if the user in question had never used cloud variables, but that we would expect that 60% of projects by a similar user would if they had used the system.

Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)

It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (this is a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.

Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context and that it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background in detail in our paper, which is now available for download as an open-access piece from the ACM digital library.


This blog post, and the open access paper that it describes, is a collaborative project with Sayamindu Dasgupta. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.

23 May, 2018 04:17PM


ZEVENET

Looking Forward to Zevenet EE 5.2: IPv6 Support

The transition to IPv6 is inevitably close; organizations around the world are preparing their systems, operations and procedures in order to be ready when the time comes. Some of them already consider IPv6 a must-have capability for any solution to be integrated into their data centers. At Zevenet, IPv6 was a pending subject to address, and for that reason, after some source code refactoring and functions...

Source

23 May, 2018 10:33AM by Zevenet


VyOS

VyOS 1.2.0 status update

While VyOS 1.2.0 nightly builds have been fairly usable for a while already, there are still some things to be done before we can make a named release candidate from it. These are the things that have been done lately:

EC2 AMI scripts retargeting and clean up

The original AMI build scripts had been virtually unchanged since their original implementation in 2014, and by this time they've had Ansible warnings at every other step, which prompted us to question everything they do, and we did. This resulted in a big spring cleanup of those scripts, and now they are way shorter, faster, and more robust.

Other than the fact that they now work with VyOS 1.2.0 properly, one of the biggest improvements from the user point of view is that it's now easy to build an AMI with a custom config file simply by editing the file at playbooks/templates/config.boot.default.ec2

The primary motivation for it was to replace cumbersome in-place editing of the config.boot.default from the image with a single template, but in the end it's a win-win solution for both developers and users.

The original scripts were also notorious for their long execution time and fragility. What's worse is that when they failed (and it's usually "when" rather than "if"), they would leave behind a lot of garbage they couldn't automatically clean up, since they used to create a temporary VPC complete with an internet gateway, subnet, and route table, all just for a single build instance. They also used a t2.medium instance that was clearly oversized for the task and could be expensive to leave running if clean-up failed.

Now they create the build instance in the first available subnet of the default VPC, so even if they fail, you only need to delete a t2.micro instance, a key pair, and a security group.

It is no longer possible to build VyOS 1.1.x images with those scripts from the baseline code, but I've created a tag named 1.1.x from the last commit where it was possible, so you can do it if you want to — without these recent improvements of course.

Package upgrades and new drivers

We've upgraded StrongSWAN to 5.6.2, which hopefully will fix a few longstanding issues. Some enthusiastic testers are already testing it, but everyone is invited to test it as well.

SR-IOV is now basically a requirement for high performance virtualized networking, and it needs appropriate drivers. Recent nightly builds include a newer version of Intel's ixgbe and Mellanox OFED drivers, so the support for recent 10gig cards and SR-IOV in particular has improved.

A step towards using the master branch again

The "current" git branch we use throughout the project where everyone else uses "master" was never intended to be a permanent setup: it always was a workaround for the master branch in packages inherited from Vyatta Core being messed up beyond any repair. It will take quite some time to get rid of the "current" branch completely and we'll only be able to do it when we finally consolidate all the code under vyos-1x, but we've made jenkins builds correctly put the packages built from the "master" branch in our development repository, so we'll be able to do it for packages that do not include any legacy code at least.

IPv6 VRRP status

This is the most interesting part. IPv6 VRRP is perhaps the single most awaited feature. Originally it was blocked by lack of support for it in keepalived. Now keepalived supports it, but integrating it will need some backwards-incompatible changes.

Originally, keepalived allowed mixing IPv4 and IPv6 in the same group, but it no longer allows it (curiously, the protocol standard does allow IPv4 advertisements over IPv6 transport, but I can see why they may want to keep these separate). This means that to take advantage of the improvements it made, we also have to disallow it, thus breaking the configs of people who attempted to use it. We've been thinking about keeping the old syntax while generating different configs from it, or automated migration, but it's not clear if automated migration is really feasible.

An incompatible syntax change is definitely needed because, for example, if we want to support setting hello source address or unicast VRRP peer address for both IPv4 and IPv6, we obviously need separate options.

Soon IPv6 addresses in IPv4 VRRP groups will be disallowed and syntax for IPv6-only VRRP groups will be added alongside the old vrrp-group syntax. If you have ideas for the new syntax, the possible automated migration, or generally how to make the transition smooth, please comment on the relevant task.

PowerDNS recursor instead of dnsmasq

The old dnsmasq (which I, frankly, always viewed as something of a spork, with its limited DHCP server functionality built into what's mainly a caching DNS resolver), has been replaced with PowerDNS recursor, a much cleaner implementation.

23 May, 2018 06:37AM by Daniil Baturin


Qubes

Fedora 26 and Debian 8 approaching EOL

Fedora 26 will reach EOL (end-of-life) on 2018-06-01, and Debian 8 (“Jessie” full, not LTS) will reach EOL on 2018-06-06. We strongly recommend that all Qubes users upgrade their Fedora 26 TemplateVMs and StandaloneVMs to Fedora 27 or higher by 2018-06-01. Debian 8 users may choose to rely on Debian 8 LTS updates until approximately 2020-06-06 or upgrade to Debian 9 at their discretion. We provide step-by-step upgrade instructions for upgrading from Fedora 26 to 27, Fedora 27 to 28, and Debian 8 to 9. For a complete list of TemplateVM versions supported for your specific version of Qubes, see Supported TemplateVM Versions.

We also provide fresh Fedora 27, Fedora 28, and Debian 9 TemplateVM packages through the official Qubes repositories, which you can install in dom0 by following the standard installation instructions for Fedora and Debian TemplateVMs.
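
For reference, installing one of the fresh TemplateVM packages in dom0 generally follows the same pattern as other dom0 updates; for example, for the Fedora 28 and Debian 9 templates:

  $ sudo qubes-dom0-update qubes-template-fedora-28
  $ sudo qubes-dom0-update qubes-template-debian-9

See the linked installation instructions for the exact package names and any version-specific notes.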

After upgrading your TemplateVMs, please remember to set all qubes that were using the old template to use the new one. The instructions to do this can be found in the upgrade instructions for Fedora 26 to 27, Fedora 27 to 28, and Debian 8 to 9.

Please note that no user action is required regarding the OS version in dom0. If you’re using Qubes 3.2 or 4.0, there is no dom0 OS upgrade available, since none is currently required. For details, please see our Note on dom0 and EOL.

If you’re using an older version of Qubes than 3.2, we strongly recommend that you upgrade to 3.2, as older versions are no longer supported.

23 May, 2018 12:00AM

May 22, 2018


VyOS

Things the new style configuration mode definitions intentionally do not support

I've made three important changes to the design of the configuration command definitions, and later I realized that I never wrote down a complete explanation of the changes and the motivation behind them.

So, let's make it clear: these changes are intentional and they shouldn't be reintroduced. Here's the details:

The "type" option

In the old definitions, you can see the "type:" option in the node.def files very often. In the new style XML definitions, there's no equivalent of it, and the type is always set to "txt" in autogenerated node.def's for tag and leaf nodes (which means "anything" to the configuration backend).

I always felt that the "type" option suffers from two problems: scope creep and redundancy.

The scope creep is in the fact that "type" was used for both value validation and generating completion help in the "val_help:" option. Also, the "u32" type (32-bit unsigned integer) has a little-known undocumented feature: it could be used for range validation in the form of "type: u32:$lower-$upper" (e.g. u32:0-65535). It has never been used consistently even by the original Vyatta Core authors; plenty of node.def's use additional validation statements instead.

Now to the redundancy: there are two parallel mechanisms for validations in the old style definitions. Or three, depending on the way you count them. There are "syntax:expression:" statements that are used for validating values at set time, and "commit:expression:" that are checked at commit time.

My feeling from working with the system for a scary amount of time was that the "type" option alone is almost never sufficient, and thus useless, since actual, detailed validation is almost always done elsewhere, in those "syntax/commit:expression:" statements or in the configuration scripts. Sometimes a "commit:expression:" is used where "syntax:expression:" would be more appropriate (i.e. validation is delayed), but let's focus on set-time validation only.

But without data to back it up, a feeling is just a feeling, so I made up a quick and dirty script to do some analysis. You can repeat what I've done easily with "find /opt/vyatta/share/vyatta-cfg/templates/ -type f -maxdepth 100 -name 'node.def' | nodecheck.py".

On VyOS 1.1.8 (which doesn't include any rewritten code) the output is:
Has type: 2737
Has type txt: 1387
Has type other than txt: 1350
Has commit or syntax expression: 1700
Has commit or syntax expression and type txt: 740
Has commit or syntax expression and type other than txt: 960

(While irrelevant to the problem at hand, the total count of node.def's is 4293.) In other words, of all nodes that have the type option, 50% have it set to "txt". Some of them are genuinely "anything goes" nodes such as "description" options, but most use it as a placeholder.

68% of all nodes that have a type are also using either "syntax:expression:" or "commit:expression:". Of all nodes that have a type more specific than "txt", 73% have additional validation. This means that even for supposedly specific types, type alone is enough only in 23% cases. This raises the question whether we need types at all.

Sure, we could introduce more types and add support for something of a sum type, but is it worth the trouble if validation can be easily delegated to external scripts? Besides, right now types are built in the config backend, which means adding a new one requires modifying it starting from the node.def parser.

In the new style definitions, I felt like the only special case that is special enough is regular expression. This is how value constraints checked at set time are defined:

<leafNode name="foo">
  <properties>
    <constraint>
      <regex>(bar|baz)</regex>
      <validator>quux</validator>
    </constraint>
  </properties>
</leafNode>

Here the "validator" tag contains a reference to a script stored in /usr/libexec/vyos/validators/. Since adding a new validator is easy, there's no reason to hesitate to add new ones for common (and even not so common) cases. Note that the "regex" option is automatically wrapped in ^$, so there's no need to do it by hand every time.
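
For illustration, a validator such as the hypothetical "quux" referenced above is just a small standalone executable. The calling convention shown here (value passed as the first argument, non-zero exit on failure) is an assumption for the sake of the sketch; check the existing scripts in /usr/libexec/vyos/validators/ for the actual contract:

#!/bin/sh
# /usr/libexec/vyos/validators/quux -- hypothetical example validator.
# Assumption: the value under validation is passed as the first argument
# and a non-zero exit status means "invalid".
value="$1"
case "$value" in
    bar|baz)
        exit 0 ;;                               # value accepted
    *)
        echo "'$value' is not a valid value" >&2
        exit 1 ;;                               # value rejected
esac

Because validators run as separate programs, they can be tested and debugged on their own, which is part of the point of moving validation out of the config backend.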

Default values

The old definitions used to support a "default:" option for setting default values for nodes. It looks innocuous on the surface, but things get complicated when you look deeper into its behaviour.

You may think a node either exists, or it does not. What is the value of a node that doesn't exist? Sounds rather like a Zen koan, but here's cheap enlightenment for you: it depends on whether it has a default value or not.

Thus, nodes effectively have three states: "doesn't exist", "exists", and "default". As you can already guess, it's hard to tell the latter two apart. It's also very hard to see if a node was deleted from a config or just reset to a default value. It also means that every node lookup cannot operate on the running config tree alone and has to consult the command definitions as well, which is very bad if you plan to use the same code for the CLI and for standalone config handling programs such as migration scripts.

Last time people tried to introduce rollback without reboot, the difficulties of handling the third virtual "default" state was one of the biggest problems, and it's still one of the reasons we don't have a real rollback. VyConf has no support for default values for this reason, so we should eliminate them as we rewrite the code.

Defaults should be handled by config scripts now. Sure, we lose "show -all" and the ability to view defaults, but the complications that come with it hardly make it worth the trouble. There are also many implicit defaults that come from underlying software options anyway.
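
As a sketch of what "handled by config scripts" can look like in practice (this is an illustration only, not the actual VyOS implementation; the option, the value and the use of cli-shell-api here are assumptions):

#!/bin/sh
# Hypothetical fragment of a configuration script: the default lives in the
# script rather than in a "default:" statement in node.def.
mtu=$(cli-shell-api returnValue interfaces ethernet eth0 mtu)
if [ -z "$mtu" ]; then
    mtu=1500    # fall back to a default when the user has not set the node
fi

The config tree then only ever contains what the user explicitly set, so tools that read it do not need to consult the command definitions to know what a node's value is.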

Embedded shell scripts

That's just a big "no". Have you ever tried to debug code that is spread across multiple node.def's in nested directories and that cannot be executed separately or stepped through?

While it's tempting to allow that for "trivial" scripts, the code tends to grow and things get ugly. Look at the implementation of PPPoE or tunnel interfaces in VyOS 1.1.8.

If it's more than one command, make it an external script, and you'll never regret the decision when it begins to grow.

22 May, 2018 06:45PM by Daniil Baturin


Ubuntu developers

Costales: UbuCon Europe 2018: Analysing a dream [English|Spanish]

UbuCon Europe 2018: Analysing a dream (English)

The idea of organising the UbuCon in Xixón, Asturies was born two years ago, while participating in the European UbuCon in Essen (Germany). Then the Paris UbuCon took place, and in those days we understood that there was a large enough group of people with the capacity and the will to hold a European congress for Ubuntu lovers. We had learnt a lot from our German and French colleagues thanks to their respective amazing organizations; at the same time, our handicap was the lack of a consolidated group in Spain.


Asturies



The first task was to bring together a group of people big enough and motivated to work together, both in the preparation and in the running of activities during the three main days of the UbuCon. Eleven volunteers responded to the call of Marcos Costales, creating a Telegram group where two main decisions were taken:
  • Chosen city, Xixon
  • Date, coinciding with the release of Ubuntu 18.04




A remarkable building was selected for the Ubucon. The "Antiguo Instituto Jovellanos" had everything we needed: a perfect location in the city center, a big conference room for 100 people, an inner courtyard, and several extra rooms available in case we got more and more speakers.



We made our move and offered Spain as a potential host for the next UbuCon Europe. We knew that the idea was floating in the minds of our Portuguese and British colleagues, but somehow we had the feeling that it was our moment to take the risk, and we did. Considering that there is no European organization for Ubuntu (although it is getting close), we tacitly received the honor of being in charge of the European Ubucon 2018. Then the long process of making a dream come true began.


The organization was simple and became even simpler. With Marcos as the main coordinator, we organized several groups in Telegram: communication, organization... and we began to publicize the event.
Attendees registered through the website http://ubucon.eu. After a first press release, spread by some important Spanish-language Ubuntu blogs, we received an avalanche of registrations (more than 100 in the first days) that made us fear for our reception capacity and, on the other hand, allowed us to take the pulse of the interest aroused.

We created a GMail account to manage communications, reused existing European UbuCon accounts on Twitter and Google+ and created a Facebook account managed by our friend Diogo Constantino from Portugal, to give information to everyone on social networks.
We used Telegram to create an information channel (205 members) and two groups, one for attendees (68 members) and one for speakers (31 members).
Suddenly it seemed to us that the creature was growing and growing, even above our expectations. We had to ask for institutional support and we got it.

The Municipal Foundation of Culture and Education of Xixón gave us the Old Jovellanos Institute. 
The Gijón Congress Office provided us with contacts and discounts on bus and train transport (ALSA and RENFE).
Canonical helped us financially by paying the insurance that covered our possible accident liabilities and the small costs of auxiliary material. 
Nathan Haines, Marius Quabeck and Ubuntu FR provided us with tablecloths for the tables.
Slimbook provided us with laptops for each of the conference rooms and for the reception of attendees.


At the time, with our dream growing in the wishing oven like a huge cake, it seemed that we needed legal protection in the form of an association, and we tried. We live in a big old country that is not agile for these things, and besides the difficulty of bringing together people from Alicante, Andalusia, Asturias, the Balearic Islands, Barcelona, León and Madrid, the administration got in our way and it was not possible.
Speaking of the dispersion of the organizers: how did we coordinate?
Telegram was the main axis. EtherPad was used to build documents collaboratively, Mumble (hosted on a Raspberry Pi) for coordination meetings, Drive for documents and records of attendees and speakers, and MailChimp for bulk mailings.



So it was time to call for speakers, and then what was already overwhelming became a real luxury. We began to receive requests for talks from individuals and businesses, and there were weeks when we had to meet every other day to decide on approvals. Finally we had 37 talks, conferences and workshops, 6 stands, and 3 podcasts broadcasting in Spanish, Portuguese and German. On Saturday the 28th we had to provide up to 4 spaces at a time to accommodate everyone.


An Ubucon Europe must achieve at least three objectives:
  1. Share knowledge
  2. Bring Europeans together around Ubuntu and strengthen bonds of friendship
  3. Have fun


To achieve the third objective we had the best possible host. Marcos was in his hometown and had all the ingredients to make the hours of socializing and having fun unique. We knew it was very important that the social events be close to each other. Xixon is a city with an ideal size for this, and that is how they were organized. Centered on Saturday's espicha, which was attended by 87 people, the rest of the days we had a full program that allowed those who did not know Spain, and Asturias in particular, to touch the sky of a tremendously welcoming land with their fingers. Live music in the Trisquel, drinks and disco by the sea, cider and Asturian gastronomy on Poniente beach and cultural visits... Could you ask for more?




And the dream came true: on Friday 27 April, Ubucon Europe 2018 was inaugurated. All the scheduled events ran on time, and we, a staff of 8 people, handled the reception of the event as best we could, as well as the logistics of the up to 4 simultaneous rooms that we needed at some points. Without incident, 140 attendees were able to listen to some of the 37 talks, more than 350 messages were published on Twitter, hundreds of posts appeared on Google+ and Facebook, and we spent 474 € on the small expenses of the organization, which possibly provided the city with some 40,000 € of revenue across hotels, restaurants, transport... The social events were a success and the group of participants and speakers stayed together for the three days.
We are proud. We can't always make our dreams come true.


And that's all. See you next time!






UbuCon Europe 2018 | Made with ❤ by:
  • Fco. Javier Teruelo de Luis
  • Sergi Quiles Pérez
  • Francisco Molinero
  • Santiago Moreira
  • Antonio Fernandes
  • Paul Hodgetts
  • Joan CiberSheep
  • Fernando Lanero
  • Manu Cogolludo
  • Marcos Costales


    Text written by Paco Molinero. Translation by Santiago Moreira.


    22 May, 2018 09:32AM by Marcos Costales (noreply@blogger.com)

    May 21, 2018


    VyOS

    Using the "policy route" and packet marking for custom QoS matches

    There is only so much you can do in QoS rules to describe the traffic you want them to match. There's DSCP, source/destination, and protocol, and that's enough to cover most of the use cases. Most, but not all. Fortunately, they can also match packet marks, and that's what enables creating custom matches.

    Packet marks are numeric values set by Netfilter rules that are local to the router and can be used as match criteria in other Netfilter rules and many other components of the Linux kernel (ip, tc, and so on).
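
    Purely as an illustration, outside of VyOS the same idea looks roughly like the following Netfilter and tc commands (the addresses, device name, and class IDs are made up, and this is not the exact ruleset VyOS generates):

        # mark traffic from one of the phones in the mangle table
        iptables -t mangle -A PREROUTING -s 10.4.5.10 -j MARK --set-mark 100
        # then match that mark in a tc filter and direct it to a class
        tc filter add dev eth1 parent 1: protocol ip handle 100 fw flowid 1:7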

    Suppose you have a few phones in the office and you want to prioritize their VoIP traffic. You could create a QoS match for each of them, but that's quite a bit of config duplication, which will only get worse when you add more phones. Wouldn't it be nice if you could group those addresses in one match? Sadly, there's no such option in QoS. The firewall can use address groups though, so we can make the QoS rule match a packet mark (e.g. 100) and set that mark on traffic from the phones.

    # show traffic-policy 
     priority-queue VoIP {
         class 7 {
             match SIP {
                 mark 100
             }
             queue-type drop-tail
         }
         default {
             queue-type fair-queue
         }
     }
    
    

    Now the confusing bit. Where do we set the mark? Around Vyatta 6.5, an unfortunate design decision was made: "firewall modify" was moved under the overly narrow and not so obvious "policy route". Sadly, we are stuck with it for the time being, because it's not easy to automatically convert the syntax on upgrade. But, its odd name notwithstanding, it still does the job.

    Let's create an address group and a "policy route" instance that sets the mark 100:

    # show firewall group 
     address-group Phones {
         address 10.4.5.10
         address 10.4.5.11
         address 10.4.5.12
     }
    [edit]
    # show policy route 
     route VoIP {
         rule 10 {
             set {
                 mark 100
             }
             source {
                 group {
                     address-group Phones
                 }
             }
         }
     }
    

    Now we need to assign the QoS ruleset to our WAN and the "policy route" instance to our LAN interface:

    set interfaces ethernet eth0 policy route VoIP
    set interfaces ethernet eth1 traffic-policy out VoIP
    

    You can also take advantage of the "policy route" ruleset options for time-based filtering or matching related connections. Besides, you can use it to set DSCP values in case your QoS setup is on a different router:

    set policy route Foo rule 10 set dscp 46
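
    For example, the time-based and connection-state options mentioned above might look something like this (the exact option names are assumptions modelled on the firewall rule syntax, so check the completion help on your version):

        set policy route Foo rule 20 time starttime 09:00:00
        set policy route Foo rule 20 time stoptime 17:00:00
        set policy route Foo rule 30 state related enable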
    

    21 May, 2018 09:14PM by Daniil Baturin


    Ubuntu developers

    Daniel Pocock: OSCAL'18 Debian, Ham, SDR and GSoC activities

    Over the weekend I've been in Tirana, Albania for OSCAL 2018.

    Crowdfunding report

    The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00, there were Paypal fees of GBP 6.48 and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

    The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

    Debian and Ham radio booth

    Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

    The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

    I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

    Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

    A versatile venue and the dictator's revenge

    It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

    The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

    Meeting with Debian's Google Summer of Code students

    Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

    We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

    Workshops and talks

    On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

    21 May, 2018 08:44PM

    The Fridge: Ubuntu Weekly Newsletter Issue 528

    Welcome to the Ubuntu Weekly Newsletter, Issue 528 for the week of May 13 – 19, 2018. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Wild Man
    • Chris Guiver
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    21 May, 2018 08:07PM

    Kubuntu General News: Plasma 5.12.5 bugfix update for Kubuntu 18.04 LTS – Testing help required

    Are you using Kubuntu 18.04, our current LTS release?

    We currently have the Plasma 5.12.5 LTS bugfix release available in our Updates PPA, but we would like to provide the important fixes and translations in this release to all users via updates in the main Ubuntu archive. This would also mean these updates would be provided by default with the 18.04.1 point release ISO expected in late July.

    The Stable Release Update tracking bug can be found here: https://bugs.launchpad.net/ubuntu/+source/plasma-desktop/+bug/1768245

    A launchpad.net account is required to post testing feedback as bug comments.

    The Plasma 5.12.5 changelog can be found at: https://www.kde.org/announcements/plasma-5.12.4-5.12.5-changelog.php

    [Test Case]

    * General tests:
    – Does plasma desktop start as normal with no apparent regressions over 5.12.4?
    – General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.

    * Specific tests:
    – Check the changelog:
    – Identify items with front/user facing changes capable of specific testing. e.g. “weather plasmoid fetches BBC weather data.”
    – Test the ‘fixed’ functionality.

    Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

    Details on how to enable the proposed repository can be found at: https://wiki.ubuntu.com/Testing/EnableProposed.

    Unfortunately that page illustrates Xenial and Ubuntu Unity rather than Bionic in Kubuntu. Using Discover or Muon, use Settings > More, enter your password, and ensure that Pre-release updates (bionic-proposed) is ticked in the Updates tab.

    Or from the commandline, you can modify the software sources manually by adding the following line to /etc/apt/sources.list:

    deb http://archive.ubuntu.com/ubuntu/ bionic-proposed restricted main multiverse universe

    It is not advisable to upgrade all available packages from proposed, as many will be unrelated to this testing and may NOT have been sufficiently verified to be assumed safe. So the safest, though slightly more involved, method is to use Muon (or even Synaptic!) to select each upgradeable package with a version containing 5.12.5-0ubuntu0.1 (5.12.5.1-0ubuntu0.1 for plasma-discover, due to an additional update).
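
    From the command line, one possible way to do the same selective upgrade is to target the proposed pocket explicitly (the package name here is only an example; pick the packages matching the versions above):

        sudo apt update
        # show which versions are available and where they come from
        apt-cache policy plasma-desktop
        # install just this one package from bionic-proposed
        sudo apt install plasma-desktop/bionic-proposed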

    Please report your findings on the bug report. If you need some guidance on how to structure your report, please see https://wiki.ubuntu.com/QATeam/PerformingSRUVerification. Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

    We need your help to get this important bug-fix release out the door to all of our users.

    Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

    21 May, 2018 03:36PM


    Purism PureOS

    Last Call for Librem 5 Dev Kit: order yours before June 1st 2018

    Purism has finalized the specifications for the Librem 5 development kit and will be placing the order for all component parts, and the fabrication run, in the first week of June 2018. If you want to have early access to the hardware that will serve as the platform for the Librem 5 phone, you must place your dev kit order before June 1st, 2018. The price for the development kit is now $399, up from the early-bird pricing that was in effect during the campaign and until today. The dev kit is a small batch, “limited edition” product. After this batch, we are not planning a second run (as the production of the phone itself will replace the dev kit in 2019).

    Improved specifications

    We decided to wait for the latest i.MX 8M System On Module (SOM), rather than utilizing the older i.MX 6 SOM, so that the dev kit aligns nicely with the final phone hardware specifications. This means the dev kits will begin delivery in the latter part of August for the earliest orders, while the remaining dev kits will be fulfilled in September. Choosing to wait for the i.MX 8M SOM also means our hardware design for the Librem 5 phone is still on target for January 2019, because we are pooling efforts rather than separating them into two distinct projects. Our dev kit choices and advancements benefit the Librem 5 phone investment and timeline.

    The current dev kit specification is (subject to minor changes during purchasing):

    • i.MX 8M system on module (SOM) including at least 2GB LPDDR4 RAM and 16GB eMMC (NOTE: The Librem 5 phone will have greater RAM and storage)
    • M.2 low power WiFi+Bluetooth card
    • M.2 cellular baseband card for 3G and 4G networks
    • 5.7″ LCD touchscreen with an 18:9 (2:1) 720×1440 resolution
    • 1 camera module
    • 1 USB-C cable
    • Librem 5 dev kit PCB
      • Inertial 9-axis IMU sensor (accel, gyro, magnetometer)
      • GNSS (aka “GPS”)
      • Ethernet (for debugging and data transfer)
      • Mini-HDMI connector (for second screen)
      • Integrated mini speaker and microphone
      • 3.5mm audio jack with stereo output and microphone input
      • Vibration motor
      • Ambient light sensor
      • Proximity sensor
      • Slot for microSD
      • Slot for SIM card
      • Slot for smartcard
      • USB-C connector for USB data (host and client) and power supply
      • Radio and camera/mic hardware killswitches
      • Holder for optional 18650 Li-poly rechargeable battery with charging from mainboard (battery not required and not included!)

    The dev kit will be the raw PCB without any outer case (in other words, don’t expect to use it as a phone to carry in your pocket!), but the physical setup will be stable enough so that it can be used by developers. As we finalize the designs and renders we will publish images.

    21 May, 2018 03:00PM by Nicole Faerber


    SolydXK

    New SolydXK Community site

    Today the last SolydXK domain, solydxk.com, was migrated to Europe and we celebrate this with a new site. The .com domain is the base site for our community. Here you can download the SolydXK ISOs and get support in our forum.

    The design hasn't changed much to keep everything recognizable for our users. There are still some things left to do. The mirror download has not been implemented yet and if you find anything else wrong with the site, please use the form to contact me: https://solydxk.com/about-us/contact

    Also note that from 25 May 2018, the GDPR is in effect. Please read our privacy policy here: https://solydxk.com/privacy-policy

    21 May, 2018 02:13PM


    Parrot Security

    Parrot 4.0 release notes

    Parrot 4.0 is now available for download. The development process of this version required a lot of time, and many important updates make this release an important milestone in the history of our project. This release includes all the updated packages and bug fixes released since the last version (3.11), and it marks the end […]

    21 May, 2018 01:14PM by palinuro

    BunsenLabs Linux

    [Important] Forums privacy policy update – please review by May 24th

    On May 25th 2018, the European Union's new rules on general data protection (GDPR) come into force. As BL is hosted within the borders of the EU, and has a lot of EU users, we have expanded the forum rules with statements regarding how exactly the personal information you have provided when signing up is being used.

    This is relevant because, aside from posts, you have provided us with (at least) an email address and a name or handle, which constitute personally identifiable information under the GDPR. This makes it a legal obligation for us to inform you about exactly how this information is used.

    Essentially, nothing has changed about how we run the forums when compared to before now – we safeguard our database and will not share it with a third party, ever.

    However, we need you to review the new forum rules ASAP. You should contact us if you do not consent to the new policy, in which case we will remove your forum account. Consent is a precondition for using our forum service.

    If you do not object by May 24th 2018, 21:00 UTC, we will regard this as an expression of consent.

    Since this is a sensitive issue, we will send a copy of this announcement to all signed-up users by email shortly.

    Thanks for bothering with this
    The BL team.

    21 May, 2018 12:00AM

    May 19, 2018


    VyOS

    New-style operational mode command definitions are here

    We've had a converter from the new-style configuration command definitions in XML to the old-style "templates" in VyOS 1.2.0 for a while. As I predicted, a number of issues were only discovered and fixed as we rewrote more old scripts, but by now they should be fully functional. However, until very recently, there was no equivalent of it for the operational mode commands. Now there is.

    The new style

    In case you've missed our earlier posts, here's a quick review. The configuration backend currently used by VyOS keeps command definitions in a very cumbersome format with a lot of nested directories, where a directory represents a nesting level (e.g. "system options reboot-on-panic" is in "templates/system/options/reboot-on-panic"), a file with command properties must be present in every directory, and command definition files are allowed to have embedded shell scripts.

    This makes command definitions very hard to read, and even harder to write. We cannot easily change that format until a new configuration backend is complete, but we can abstract it away, and that's what we did.

    The new command definitions are in XML, and a RelaxNG schema is provided, so everyone can see its complete grammar and automatically check if their definitions are correctly written. Automated validation is already integrated in the build process.
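
    If you want to run that check by hand, a definition can be validated against a RelaxNG schema with xmllint from libxml2 (the file names below are placeholders rather than the actual paths in the repository):

        xmllint --noout --relaxng interface-definition.rng my-command.xml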

    Rewriting the command definitions goes along with rewriting the code they call. New style code goes to the vyos-1x package.

    19 May, 2018 11:48PM by Daniil Baturin

    May 18, 2018


    Ubuntu developers

    Sina Mashek: Check if external ip has changed

    Sometimes we are on connections that have a dynamic IP. This script will keep track of your current external IP in ~/.external-ip.

    Each time the script is run, it will query an OpenDNS resolver with dig to grab your external IP. If it is different from what is in ~/.external-ip, it will echo the new IP. Otherwise it will return nothing.

    #!/bin/sh
    # Check external IP for change
    # Ideal for use in a cron job
    #
    # Usage: sh check-ext-ip.sh
    #
    # Returns: Nothing if the IP is same, or the new IP address
    #          First run always returns current address
    #
    # Requires dig:
    #    Debian/Ubuntu: apt install dnsutils
    #    Solus: eopkg install bind-utils
    #    CentOS/Fedora: yum install bind-utils
    #
    # by Sina Mashek <sina@sinacutie.stream>
    # Released under CC0 or Public Domain, whichever is supported
    
    # Where we will store the external IP
    EXT_IP="$HOME/.external-ip"
    
    # Check if dig is installed
    if [ "$(command -v dig)" = "" ]; then
        echo "This script requires 'dig' to run"
    
        # Load distribution release information
        . /etc/os-release
    
        # Check for supported release; set proper package manager and package name
        if [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then
            MGR="apt"
            PKG="dnsutils"
        elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then
            MGR="yum"
            PKG="bind-utils"
        elif [ "$ID" = "solus" ]; then
            MGR="eopkg"
            PKG="bind-utils"
        else
            echo "Please consult your package manager for the correct package"
            exit 1
        fi
    
        # Will run if one of the above supported distributions was found
        echo "Installing $PKG ..."
        sudo "$MGR" install "$PKG"
    fi
    
    # We check our external IP directly from a DNS request
    GET_IP="$(dig +short myip.opendns.com @resolver1.opendns.com)"
    
    # If ~/.external-ip exists and already contains the current IP, do nothing
    if [ -f "$EXT_IP" ] && [ "$(cat "$EXT_IP")" = "$GET_IP" ]; then
        exit 0
    fi
    
    # First run, or the IP changed: print the new IP and save it
    echo "$GET_IP"
    echo "$GET_IP" > "$EXT_IP"
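
    If you want to run it from cron as suggested, an entry like the following would do (the script path is just an example; cron will mail you any output, i.e. only when the IP changes):

        */15 * * * * sh $HOME/bin/check-ext-ip.sh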
    

    18 May, 2018 09:00PM



    VyOS

    How to setup an IPsec connection between two NATed peers: using id's and RSA keys

    In the previous post from this series, we've discussed setting up an IPsec tunnel from a NATed router to a non-NATed one. The key point is that in the presence of NAT, the non-NATed side cannot identify the NATed peer by its public address, so a manually configured id is required.

    What if both peers are NATed, though? Suppose you are setting up a tunnel between two EC2 instances. They are both NATed, and this creates its own unique challenges: neither of them knows its public address, nor can it identify its peer by a public address. So, we need to solve two problems.
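
    As a very rough sketch of where this is going (treat the exact option names as assumptions to be checked against your VyOS version), identifying a peer by an id rather than an address looks something like:

        set vpn ipsec site-to-site peer 203.0.113.10 authentication id @west
        set vpn ipsec site-to-site peer 203.0.113.10 authentication remote-id @east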

    In this post, we'll setup a tunnel between two routers, let's call them "east" and "west". The "east" router will be the initiator, and "west" will be the responder.

    18 May, 2018 07:45PM by Daniil Baturin


    Ubuntu developers

    Marco Trevisan (Treviño): Hello Planet GNOME!

    Hey guys, although I’ve been around for a while hidden in the patches, some months ago (already!?!) I submitted my application to join the GNOME Foundation, and a few days later – thanks to some anonymous votes – I got approved :), and thus I’m officially part of the family!

    So, thanks again, and sorry for my late “hello” 🙂

    18 May, 2018 03:46PM


    Purism PureOS

    Introducing Calls on the Librem 5

    Introduction

    Arguably the most critical functionality in a phone is the ability to make and receive calls through the Public Switched Telephone Network (PSTN), that is, normal cellular calls using phone numbers. While at Purism we are eager to implement communication systems that enable much greater privacy and security than one can expect from PSTN calls, the PSTN is still the most ubiquitous network, and for the time being we can’t very well go around selling a phone that isn’t able to make PSTN calls.

    My task has been to develop a dialer and call handler for PSTN calls. Like all of our work on the Librem 5, this is intended to make use of existing code wherever possible and also target the GNOME platform which our PureOS defaults to. There is currently no GNOME PSTN dialer so we intend to contribute our program to the GNOME project.

    Initial ideas

    After some research, the initial goal was to use the Telepathy framework, the idea being that we could write a Telepathy PSTN dialer and get a SIP dialer for free because Telepathy has both PSTN and SIP connection managers. What’s more, the PSTN connection manager, telepathy-ring, is used in shipped phones. And, while it has its issues, in my opinion Telepathy is pretty awesome 🙂

    Furthermore, my colleague François Téchené wrote a blog post describing a “feature”-based approach rather than an application-based approach to the phone UX. Telepathy could provide the technical underpinnings of such an approach.

    It’s worth noting however, that Telepathy is a contentious framework. There are a number of voices within the GNOME project who would seemingly have it die a fiery death. Telepathy is a complex system and is notorious for the difficulty of making changes to the framework itself. To do so, one must synchronise changes to formal D-Bus API specifications and a multitude of distinct software components. A long discussion of Telepathy and possible replacements took place on GNOME’s desktop-devel mailing list in August and September 2017.

    Wider discussions

    After starting to work on some preliminary Telepathy code, given that our goal is for the dialer to be GNOME’s dialer, and the intention to use the contentious Telepathy framework, I checked in with the GNOME desktop-devel mailing list again to see what they thought.

    Discussion ensued of both Telepathy and general issues around consolidating different communication systems. The main takeaway from this discussion was that creating a consolidated system like our “feature”-based approach is difficult, to say the least. Like the previous discussion in 2017, a Telepathy-NG was touted. This is future work for us to take on once the basic phone functionality is in place. For now though, there was no major push-back against the idea of creating a PSTN dialer using Telepathy.

    I also spoke to Robert McQueen, one of the original guys behind Telepathy, on IRC. The telepathy-ring connection manager makes use of a mobile telephony framework called oFono. Given the complexity of writing a Telepathy client, Robert suggested that a good approach might be to create a UI with a thin abstraction layer, first implementing a simple oFono backend and then afterwards implementing a more complex Telepathy backend. We’ve taken on Robert’s suggestion and our dialer program has been built using this approach.

    Introducing Calls

    Our program is named Calls. It has a GTK+ 3 user interface and makes use of oFono through a thin abstraction layer. We also make use of our libhandy for the dialpad widget.

    “Can it make a phone call?!”

    Yes, it can! 🙂

    Internals

    The following diagram shows a UMLish representation of the abstraction layer underlying the architecture of Calls:

    The classes are actually GInterfaces. To give a better understanding of the semantics behind each interface, here is a table of objects that possible implementations could make use of:

    Interface   Example implementation objects
    Provider    oFono Manager, Telepathy Account Manager
    Origin      oFono Modem/VoiceCallManager, Telepathy Account
    Call        oFono VoiceCall, Telepathy Channel

    The name “Origin” was chosen because it is an object which “originates” a call.

    The MessageSource super-interface is used to issue messages to the user. The abstraction layer is intended as a very thin layer to the user interface, so implementations are expected to report information, including error messages, warnings and so on, in a manner suitable for presentation to the user. Hence, methods usually do not return error information but instead rely on the implementation issuing appropriate message signals.

    The source code is available in our community group in GNOME’s gitlab.

    Modems, oFono and ModemManager

    The demo above is using a SIM7100E modem from SIMCom which you may be able to see mounted on the prototype board, with the red top border, to the bottom right of the display. Like many cellular modems, this modem supports both AT commands and QMI.

    When the SIM7100E was first plugged in, oFono didn’t recognise it. However, there is a different mobile telephony framework, ModemManager, which did recognise the modem and could make calls, send SMS messages and make data connections out of the box. We considered using ModemManager instead of oFono, but unfortunately ModemManager’s voice call support is rudimentary and it has no support for supplementary call services like call waiting or conference calling.

    Meanwhile, QMI is preferable to AT commands but oFono has no support for voice calls using QMI. Hence, to get voice calls working, we needed a new driver for the SIM7100E using AT commands. This driver has been upstreamed.

    Where next

    We’ve done a decent amount of work so far but there’s still some way to go before we have a dialer that you can stick in your pocket and use every day. Here are some of the things we have to work on:

    1. Add ringtones. At present, the program doesn’t play any sound when there’s an incoming call. It would also be good to play DTMF noises to the user when they press digits in the dial pad.
    2. Implement call history and integration with GNOME Contacts. At present Calls makes no records of any kind so we need suitable record storage and a UI for it. Similarly, we need to be able to search for contacts from within Calls and add phone numbers from call records to contacts.
    3. The UI is currently basic but functional. It is a far cry from the polished beauty our designers have envisioned. A lot of effort will be needed to rework and polish the UI.
    4. Implement phone settings in GNOME Settings. We need a new page for phone settings like selecting the mobile network to connect to and so on and so forth.
    5. Deal with multiple SIMs and bringing the modem online. At present, Calls is a pretty dumb frontend on top of oFono D-Bus objects and only makes use of modems that are already in a usable state. There needs to be some mechanism to configure which modems Calls should make use of and to bring them online automatically when the device starts. Similarly, there need to be mechanisms for configuring and selecting between multiple SIMs.
    6. Implement the Telepathy backend so we can get SIP calls and calls with whatever else supports Telepathy.
    7. The final choice of modems has not been made yet so we’re not investing too much effort in developing support for the SIM7100E; just enough to test Calls as it is. Assuming we do choose the SIM7100E, we could implement QMI voice call support in oFono. In fact, as I write this post I see there is a discussion on the ofono mailing list about doing just that so QMI voice call support may be done for us. Alternatively, we could implement support for supplementary services in ModemManager, which is more closely aligned with the GNOME platform.
    8. Add support for supplementary services and complex call operations. Just as ModemManager has rudimentary support, so does Calls in its present state. We want to ensure that our dialer has complete support for mobile telephony standards and call control operations.

    That’s all for now, stay tuned for further updates! 🙂

    ⁰ There was a company who shall remain nameless; they sold a GNU/Linux-based phone that wasn’t able to make PSTN calls when it shipped. Some five or so years later, I acquired one of these phones and took it to my local LUG. And of course, what was the first question asked: “can it make a phone call?! haha!” Such was the reputation garnered from shipping a phone that couldn’t make phone calls!

    18 May, 2018 03:12PM by Bob Ham


    Univention Corporate Server

    Setup of an Efficient IT Infrastructure with Central Identity Management in 261 schools in Cologne

    A large-scale project is currently under way in Cologne, Germany: the setup of a standardized, centralized identity infrastructure for all schools. This is set to include considerable simplification of the software distribution and the administration via the education authority over the coming years and measures to ensure that the schools in Cologne are ready for the digitalization of education.

    To illustrate the size of the task: There are 261 schools in Cologne with around 10,000 members of staff teaching approximately 135,000 pupils. To this end, there are around 17,000 PCs available in the schools along with approximately 3,500 mobile devices at present, complemented by private devices. This makes the education authority in Cologne the third-largest in Germany behind Munich and Berlin. To put that into perspective: BMW, Lufthansa, and Bayer AG each have around 120,000 employees in total all around the world. The city realized early on that it is the schools’ responsibility to prepare youngsters for a world which is evolving quicker than ever before as a result of digitalization.

    Educational infrastructure in Cologne – 135,000 students

    Am I online already? Cologne’s first schools have had Internet access since 1997

    With this in mind, Cologne’s municipal authorities began connecting the first schools to the Internet with the support of NetCologne as far back as 1997. NetCologne supports the schools with the maintenance of networks, clients, and servers via its SchulSupport department, which now counts around 45 members of staff, through a hotline, problem management service, and field services. In 2016, for example, around 35,000 hours of support were provided. The standardized in-house cabling of all schools was initiated in 2000, and 2014 saw the launch of the “Ganzheitliche Kölner Schul-IT” concept, which describes the services, standards, and strategies for IT in Cologne schools in detail.

    Last year, all secondary schools, and with them more than half of all pupils, had access to wireless Internet. WiFi availability in the remaining schools is also a top priority. The rest of the schools should be equipped with a fiber-optic connection by the end of this year, allowing bandwidths of up to 1 Gbit/s to suit their requirements.

    Like all other school equipment, the IT infrastructure is a teaching instrument. It should not be assumed, however, that all staff members are IT experts, so their skills must also be kept up to date. The IT must be low-threshold; it must simply work. Due to the existing technical basis, this has become increasingly difficult in recent years, and the freedom for further modernization more limited.

    Administration of decentralized school servers results in considerably higher support efforts

    There are a range of different server environments currently in use at the various schools. Different school servers cannot be managed in a standardized fashion, with the result that current maintenance efforts for the systems are high. In addition, it is not possible to offer centralized services with standardized user names. As such, we saw a simpler and centralized administration concept for the whole IT infrastructure via centralized systems as a basic requirement for satisfying the schools’ increasing IT requirements.

    SINN = School Internal Network of NetCologne

    The future requirements don’t stop at equipping each school with fiberoptic networks either. The initiated introduction of WiFi into schools has the aim of making it possible to integrate tablets into teaching. The concept designed for Cologne expressly includes a bring-your-own-devices element for both pupils and teaching staff. The private devices complement the approx. 3,500 tablets financed by the local education authority. In addition, the task also comprises the setup of centrally available services such as an e-mail address for each pupil, learning management systems, web space, and cloud services for the schools.

    All these projects can only be realized through the development of centralized services which can be controlled by the education authority. The technical basis for this is an identity management system with a single sign-on function, a mobile device management concept, e-mail hosting, and a secure and data privacy-compliant school cloud. The schools should have to operate and supervise as few services as possible independently. The use of Office 365 and a school app is planned and almost ready for implementation. Attention must be paid to compliance with youth and data protection legislation; encryption is essential. In addition, the server hardware also requires modernization. We, as the education authority, are receiving news of different requests, topics, and requirements almost every day.

    As such, the challenge consisted in finding a system which is open and can be flexibly expanded as necessary. In order to get the range of requirements and solutions under control, it was necessary to implement technical standards alongside organizational methods such as the establishment of specifications and regulations.

    Graphic about UCS' role in Cologne's school IT
    When searching for a suitable solution, we came across UCS, which has a central identity management (IDM) system in which a wide range of services can be mounted, irrespective of whether they are run on premises or on the cloud and are Windows- or Linux-based.

    UCS’ centralized concept allowed us to pursue the following goals:

    • standardization,
    • quality optimization,
    • specialization of the remote technicians and
    • user maintenance from the school administration software SchiLDZentral via LDAP authentication

    In addition to the central administration, we would also like to automate the software distribution. We decided on the OPSI tool from uib in Mainz, Germany, which allows package-based software distribution. Its advantages include improved reaction times, lower time and maintenance requirements, and simplification through the associated standardization.

    Graphic about Cologne's school IT infrastructure

    Another important measure was the introduction and further development of cloud services, which should also be accessible to all users via a centralized identity management system. This brings with it the advantages of one-off user maintenance and simplified control of rights for administrators. It is also simpler for users as they can access all the systems available to them with just one user name and password.

    Increased efficiency thanks to centralized administration and reduction of IT services in schools

    As the result of the planned and to some extent already implemented measures, the City of Cologne will be equipped with a centrally administrated IT infrastructure for its schools, the administrative efforts for which will be considerably lower compared with those of the past. The schools themselves will only operate decentralized school servers with UCS and OPSI, a caching server, and an Internet access point to which the school computers and the teachers’ and pupils’ mobile devices can connect. This largely relieves the teaching staff of administrative responsibilities.

    In future, the NetCologne data center will run UCS centrally for the identity management of applications such as the groupware Open-Xchange, the mobile device management solution Jamf, and the learning management system Moodle. This is complemented by cloud services including Office 365.

    SchildZentral = Cologne’s school administration portal

    Challenge – Rollout of New IT at 260 Schools

    The migration of each individual school demands a considerable amount of time and effort. In each case, the requirements include the migration of the existing data and the creation of the OPSI packages with the teaching software for the different subjects, etc. In addition, almost every school has individual requirements for its IT. With our more than 260 schools, we truly have a mammoth task ahead of us. In parallel to the rollout in the schools, we will also still have to keep up with our core task: school support. As part of a pilot phase, we have already successfully connected the first schools to UCS, including an operational vocational college and five schools with pilot systems. The next step in the plan is the mounting of Office 365 in UCS and its provision to the schools.

    Christian Lemke

    Chris Lemke is Head of Managed ICT at NetCologne Gesellschaft für Telekommunikation mbH, Cologne.

    The post Setup of an Efficient IT Infrastructure with Central Identity Management in 261 schools in Cologne first appeared on Univention.

    18 May, 2018 11:59AM by Hendrik Petter


    Ubuntu developers

    Ubuntu Podcast from the UK LoCo: S11E11 – Station Eleven - Ubuntu Podcast

    This week we reconstruct a bathroom and join the wireless gaming revolution. We discuss the Steam Link app for Android and iOS, the accessible Microsoft Xbox controller, Linux applications coming to ChromeOS and round up the community news.

    It’s Season 11 Episode 11 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    18 May, 2018 11:54AM

    LiMux

    Competencies, Data and Communication in Digitalization – Ideas from BarCamp #MucGov18

    On 5 May it was that time again: after last year's great success, numerous citizens and employees of the City of Munich once again came together at BarCamp #MucGov18! The BarCamp #MucGov18 was held under the … Continue reading

    The post Competencies, Data and Communication in Digitalization – Ideas from BarCamp #MucGov18 first appeared on the Münchner IT-Blog.

    18 May, 2018 09:09AM by Lisa Zech

    May 17, 2018

    Cumulus Linux

    5 Network automation tips and tricks for NetOps

    Despite what some people say, automation is not for the lazy. This opinion probably comes from the fact that the whole point of automation is to reduce repetitive tasks and make your life easier. Indeed automation can do just that, as well as giving you back hours each week for other tasks.

    But getting your automation off the ground to begin with can be a challenge. It’s not as if you just decide, “Hey, we’re going to automate our network now!” and then you follow a foolproof, well-defined process to implement network automation across the board. You have to make many decisions that require long discussions, and necessitate ambitious and careful thinking about how you’re going to automate.

    Just as with anything else in the IT world, there are no one-size-fits-all solutions, and no “best practices” that apply to every situation. But there are some common principles and crucial decision points that do apply to all automation endeavors.

    In this post, I’ll give you five network automation tips and tricks to get clarity around your automation decisions and reduce any friction that may be inhibiting (further) adoption of network automation.

    1. Choose whether you want flexibility or simplicity

    Automating your network requires treating your network as code. It’s literally programming your network, and when it comes to programming, there are several ways to accomplish the same objective. Unlike a traditional CLI, where there may be at most two ways to enable OSPF on an interface, there may be six or more ways with automation.

    Think of flexibility and simplicity as sitting at opposite ends of a spectrum. At the simplicity end of the spectrum, you can automate a task in a way that’s quick and simple, but not very scalable. At the flexibility end of the spectrum, you can automate a task in a way that’s initially difficult and requires a lot of careful thought and testing, but is massively scalable. Whether you go for flexibility or simplicity depends on how comfortable you are with automation and programming in general.

    Simplicity

    If you’re just starting out, sticking close to “the way we’ve always done things” will be easier. You can take existing CLI commands and port them with few changes into your automation infrastructure. Take the following CLI configuration, for example:

    router ospf 1
     network 0.0.0.0 area 0.0.0.0

    If you’re using Ansible for automation, you could port the preceding configuration to declarative code like this:

    nclu:
     commands:
     - add ospf network 0.0.0.0 area 0.0.0.0

    There’s almost no difference, and it’s clear what the code does, even to someone who isn’t familiar with Ansible. I call this the copy-and-paste approach. You can take lines from your existing configuration and paste them into a template to create declarative code. It doesn’t get much simpler than this!
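
    To make the fragment above concrete, here is a minimal sketch of a complete playbook around it. The play name, host group and file name are assumptions rather than anything from the original example; the nclu module parameters used are commands and commit:

    # ospf.yml -- hypothetical playbook wrapping the copied CLI lines
    - name: Enable OSPF on all switches
      hosts: switches
      tasks:
        - name: Add OSPF network statement via NCLU
          nclu:
            commands:
              - add ospf network 0.0.0.0 area 0.0.0.0
            commit: true

    You would run it with something like ansible-playbook -i inventory ospf.yml, adding --check --diff first to preview what would change.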

    At first, this approach may seem pointless. Why bother with automation at all if you’re just going to essentially port a CLI configuration to a different format? This doesn’t take advantage of the many powerful benefits of programming languages, such as iterative loops and conditionals. Even in this case, a big benefit of automation is improved stability by enforcing predictable (and working) network configurations. If someone surreptitiously makes an ad-hoc change on one switch and brings down part of the network, it can be difficult to determine both what changed and where. But with automation, you can push a button and safely put everything back to the way it was.

    There’s another consideration, though: who’s going to be looking at the code? In a small environment, there’s a good chance a non-network person will need to analyze the code to diagnose a problem (often when the only full-time network person is out to lunch). Simple, straightforward code makes it easier for them to understand what’s going on.

    However, the simple approach isn’t all roses. If you build your automation by copying and pasting, you have a lot of duplication, and this doesn’t scale. For example, suppose you have a few switches, each with its own unique configuration. You can create a single Ansible Playbook that contains what amounts to a verbatim copy of each of these configurations, and this works fine.

    But this gets unwieldy when you want to add more switches or make a sweeping network-wide change, such as switching your IGP from OSPF to EIGRP, or implementing BGP for the first time. As your network grows and changes, you’ll end up having to refactor your code, which means potentially breaking things that work today. This is risky and requires you to go and retest everything.

    Flexibility

    In a larger environment where you may have a full-time network team, flexibility is more important. It’s also more complicated. You have to think less like a network engineer and more like a programmer. That means separating the logic of your code from the data that’s unique to each individual device. All automation platforms do this natively, although again, there are many ways to go about it. Regardless of the platform you use, you generally break your code across multiple files.

    One file contains a list of devices, usually grouped by role. For example, you may have all of your spine switches in one group and all of your leaf switches in another. Then you may have a supergroup containing both the spine and leaf groups.

    Another file contains variables that are unique per device or device groups. Some examples include IP addresses, ASNs, network statements, access lists, prefix lists, SNMP settings and so on.

    Finally, another file contains the logic and the configuration directives. Unlike a traditional CLI configuration that may contain repetitive commands (like ip prefix-list), this file can contain iterative logic to loop through one section of code many times. This makes it less obvious what the code does, but the tradeoff is that it’s much more scalable.
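
    As a sketch of that separation (file names, group names and variable names here are illustrative, not taken from the post), the three kinds of files might look like this:

    # inventory -- devices grouped by role
    [spines]
    spine01
    spine02

    [leafs]
    leaf01
    leaf02

    [switches:children]
    spines
    leafs

    # group_vars/spines.yml -- data unique to the spine group
    ospf_networks:
      - 10.0.0.0/24
      - 10.0.1.0/24

    # roles/ospf/tasks/main.yml -- logic that loops over the data
    - name: Add OSPF network statements via NCLU
      nclu:
        commands:
          - "add ospf network {{ item }} area 0.0.0.0"
        commit: true
      loop: "{{ ospf_networks }}"

    The loop hides the repetition that a hand-written configuration would spell out, which is exactly the tradeoff described above: more scalable, but less obvious at a glance.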

    Flexibility and simplicity?

    Can you settle somewhere between flexibility and simplicity, perhaps enjoying a little bit of both? The short answer is no. It’s not that you can’t, but it’s not a good idea. Although combining the ease of copy-and-paste with powerful programming logic gives you the best of both worlds, it also gives you the worst. It becomes much more difficult to understand and predict how your automation platform will actually configure your devices. Will those copied-and-pasted BGP configuration statements that apply just to spine01 get overwritten by that loop that applies to all switches? Mixing and matching approaches is more trouble than it’s worth.

    2. Build one-offs into your automation

    One of the biggest barriers to network automation is the inevitable presence of ad-hoc or one-off configurations. You know, things like that one access list entry on that one switch that someone put there to satisfy an IT auditor way back when. Rather than trying to eliminate these, embrace them and make them a part of your automation solution. Adopt the mindset that if it’s not in the automation code, it doesn’t exist in the running configuration.
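
    One way to keep such a one-off under the automation umbrella, sketched here with hypothetical file names, variable names and a placeholder command, is to park it in a host-specific variable and apply it only where that variable is defined:

    # host_vars/switch07.yml -- the single switch that needs the exception
    oneoff_commands:
      # placeholder -- put the real one-off command(s) for this switch here
      - add interface swp10 alias reserved-for-audit

    # task in the common role: does nothing on hosts without oneoff_commands
    - name: Apply documented one-off commands
      nclu:
        commands: "{{ oneoff_commands }}"
        commit: true
      when: oneoff_commands is defined

    The one-off now lives in the repository next to everything else, so it cannot be silently overwritten or forgotten.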

    Going to the trouble of automating a single statement on one device does take time and effort that may seem wasted; but it’s actually quite the opposite. Failing to ultimately adopt the one-offs into your automation family will inevitably result in a broken network. You’ll eventually encounter a situation where either the automation platform has overwritten your one-off, or your one-off has created a conflict with some new configuration you pushed out via automation.

    Such an ugly event has to happen only once before management declares that automation is off the table, and that all changes must be done manually (after going through rigorous change control, of course). Investing extra time up front to automate your one-offs is far preferable to continuing to do everything manually.

    Note that this does not mean that you have to automate everything right away. It’s wise to start by automating small, simple tasks. But don’t stop there. The goal is to get everything under the automation umbrella and do away with manual configurations once and for all!

    3. Use a single automation platform

    There’s no getting around it. Automation requires treating infrastructure as code, and every automation platform has its own chosen language. Ansible is written in Python and uses YAML for its playbooks, while Puppet and Chef are built on Ruby. Therefore, it’s important that when choosing your platform, everyone who will use the platform agrees on a common language.

    This doesn’t mean that everyone (or anyone) has to know the language starting out. You and others may have to learn it, but the important thing is picking one automation platform and running with it.

    If you have developers or a DevOps team that already uses automation, ask them for recommendations. Find out how they’re using it. If they’re automating only a handful of servers using ad-hoc configurations, they may not be in the best position to advise you on how to automate the network.

    Also, be cautious about choosing a platform just because it’s someone’s favorite. The people who are going to use it must like it. I call this the ice cream test. The developers may all like vanilla, but if you and your colleagues prefer chocolate, you should choose chocolate, even if the developers have some technical arguments against it. At the end of the day, if you don’t like the automation platform, you’re not going to use it.

    Realistically, if you’re on a network team, you might not necessarily see eye-to-eye with your colleagues when it comes to favorite programming languages or automation platforms. But you must decide on a single platform for automating your network.

    4. Use version control

    All automated device configurations should be kept in a centralized repository using a version control system such as Git. This has a couple of advantages.

    First, the repository is the authoritative source for all configurations. Although it takes a while to get to this point, ultimately the goal is that if the configuration is not in the repository, it doesn’t exist on any device. This is an ideal rather than a rule, because if you’re introducing automation bit by bit, what’s in your repo will at first be only a small portion of the actual device configurations.

    The other advantage is that version control lets you keep a record of changes so you can roll back easily. If you add one too many spaces (something Ansible is not forgiving of) or inadvertently delete a line of code, a version control system can tell you exactly what changed. Better yet, correcting the change doesn’t require manually fixing the code. You simply revert to a previous version, and everything is back to the way it was before the mistake.
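
    As a small illustration (the file name below is hypothetical), the day-to-day rollback workflow is ordinary Git:

    git diff HEAD~1 -- group_vars/spines.yml    # see exactly what changed in the last commit
    git revert HEAD                             # undo that commit with a new, tracked commit
    git log --oneline -- group_vars/spines.yml  # review the history of a single file

    Because the revert is itself a commit, the rollback is recorded just like any other change.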

    The key to effective version control is to track all of your changes, even the little ones. If you make a change and suddenly the network gets slow, version control can help you prove that your change wasn’t the culprit. But there’s one thing that version control can’t track: the state of the network.

    5. Validate and monitor your network using Cumulus NetQ

    Regardless of where you are on your automation journey, it’s a smart idea to make sure NetQ is up and running early on.

    Whereas version control tracks changes to your network configurations, NetQ tracks changes to the state of the network itself. In other words, NetQ can tell you when the state of the network has changed and why it changed.

    Even if your network is only partially automated, NetQ can still track every state change – even the manual ones. This eliminates the blind spot left by partial automation. NetQ can also help you validate that your changes had the expected effect.

    For example, if you implement BGP across your network for the first time, NetQ can give you an instant, real-time view of the BGP status of every device. This saves you the time and trouble of logging into and checking each device manually.

    Get more hours back

    Implementing automation is a manual process that requires careful thought and planning. It’s not just a matter of choosing Ansible or Puppet and learning it. There’s a learning curve, but it’s well worth it. If done correctly, you’ll end up with a more stable and predictable network. And in the long run, you’ll get back hours that you can use to devote to other things. After all, the whole point of automation is to let a machine do the work so you don’t have to!

    Want to know even more about leveraging automation in your network? Good news — we’ve got just the videos for you! Watch our how-to video series on automation and follow along with our networking experts as they take you through the steps.

    The post 5 Network automation tips and tricks for NetOps appeared first on Cumulus Networks Blog.

    17 May, 2018 04:55PM by Ben Piper

    Purism PureOS

    Purism and Nitrokey Partner to Build Purekey for Purism’s Librem Laptops

    San Francisco (May 17, 2018) – Purism, the social purpose corporation that designs and produces security-focused hardware and software, announced today that it is partnering with Nitrokey, maker of Free Software and Open Hardware USB OpenPGP security tokens and Hardware Security Modules (HSMs), to create Purekey, Purism’s own OpenPGP security token designed to integrate with its hardware and software. Purekey embodies Purism’s mission to make security and cryptography accessible where its customers hold the keys to their own security, and follows on the heels of the announcement of a partnership with cryptography pioneer and GnuPG maintainer Werner Koch.

    Purism customers will be able to purchase a Purekey by itself or as an add-on with a laptop order. For add-on orders, Purism can pre-configure the Purekey at the factory to act as an easy-to-use disk decryption key and ship laptops that are pre-encrypted. Customers will be able to insert their Purekey at boot and decrypt their drive automatically without having to type in a long passphrase. Customers will also be able to replace the factory-generated keys with their own at any time.

    Purekey will also be a critical component in Purism’s tamper-evident boot protection. Purism will tightly integrate Purekey into their tamper-evident boot software so that customers will be able to detect tampering on their hardware from the moment it leaves the factory.

    Enterprise customers have long used security tokens for easy and secure key management from everything from email encryption to code signing and multi-factor authentication. With Purekey, IT departments will have an integrated solution out of the box for disk and email encryption, authentication, and tamper-evident boot security that’s easy to use.

    “Often security comes at the expense of convenience but Purekey provides a rare exception. By keeping your encryption keys on a Purekey instead of on a hard drive, your keys never leave the tamper-proof hardware. This not only makes your keys more secure from attackers, it makes using your keys on multiple devices more convenient. When your system needs to encrypt,  decrypt, or sign something, just insert your Purekey; when you are done, remove it and put it back in your pocket.” — Purism CSO Kyle Rankin

    “We’re pleased to be working with the Purism team, who are very aligned with our commitment to open hardware and free software. The possibilities of this partnership are exciting, especially given the growing importance of secure key storage on hardware smart cards and Purism’s important work on tamper-evident protection.” — Nitrokey CEO Jan Suhr

    “We are long-time fans of Nitrokey as they are the only smart card vendor that shares our commitment to open hardware and free software. Their company and security products are a perfect complement to Purism’s belief that ethical computing means privacy and security without sacrificing personal control over your devices.” — Purism CEO Todd Weaver

    About Nitrokey UG

    Founded as an open source project in 2008 and turned into a full corporate entity in 2015, Nitrokey develops and produces highly secure open-source hardware and software USB keys that provide cryptographic functions for protecting emails, files, hard drives, server certificates, online accounts and data at rest, guarding against identity theft and data loss.

    About Purism

    Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

    17 May, 2018 04:42PM by Kyle Rankin

    Tanglu developers

    Creating RESTful applications with Qt and Cutelyst

    This mini tutorial aims to show you the fundamentals of creating a RESTful application with Qt, as a client and as a server with the help of Cutelyst.

    Services with REST APIs have become very popular in recent years, and interacting with them may be necessary to integrate with third-party services and keep your application relevant; it may also be interesting to replace your own custom protocol with a REST implementation.

    REST is strongly associated with JSON; however, JSON is not required for a service to be RESTful. The data format is chosen by whoever defines the API, i.e. it is possible to have a REST service exchanging messages in XML or any other format. We will use JSON for its popularity, its simplicity, and because the QJsonDocument class is present in the Qt Core module.

    A REST service is mainly characterized by making use of otherwise little-used HTTP methods and headers. Browsers basically use GET to fetch data and POST to send form and file data, whereas REST clients will also use methods like DELETE, PUT and HEAD. As for headers, many APIs define custom headers for authentication; for example, X-Application-Token can contain a key generated only for the application of a given user, so that a request without the correct value in this header will not get access to the data.
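
    For illustration, on the Qt client side such a custom header is set with QNetworkRequest::setRawHeader(); the header name and token below are only an example and not part of the API we are about to define:

    QNetworkRequest request(QUrl("https://api.example.com/v1/resource"));
    // attach the application token expected by the hypothetical service
    request.setRawHeader("X-Application-Token", "s3cr3t-t0k3n");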

    Let’s start by defining the server API:

    • /api/v1/users
      • GET – Gets the list of users
        • Answer: ["uuid1", "uuid2"]
      • POST – Register new user
        • Send: {"name": "someone", "age": 32}
        • Answer: {"status": "ok / error", "uuid": "new user uuid", "error": "msg in case of error"}
    • /api/v1/users/UUID – where UUID should be replaced by the user's UUID
      • GET – Gets user information
        • Answer: {"name": "someone", "age": 32}
      • PUT – Update user information
        • Send: {"name": "someone", "age": 57}
        • Answer: {"status": "ok / error", "error": "msg in case of error"}
      • DELETE – Delete user
        • Answer: {"status": "ok / error", "error": "msg in case of error"}

    For the sake of simplicity we will store the data using QSettings; we do not recommend it for real applications, but SQL or something similar is beyond the scope of this tutorial. We also assume that you already have Qt and Cutelyst installed. The code is available at https://github.com/ceciletti/example-qt-cutelystrest

    Part 1 – RESTful Server with C++, Cutelyst and Qt

    First we create the server application:

    $ cutelyst2 --create-app ServerREST

    And then we will create the Controller that will have the API methods:

    $ cutelyst2 --controller ApiV1

    Once the new class has been created, instantiate it in serverrest.cpp, inside the init() method, with:

    #include "apiv1.h"
    
    bool ServerREST::init()
    {
        new ApiV1 (this);
        ...

    Add the following methods to the file “apiv1.h”

    C_ATTR(users, :Local :AutoArgs :ActionClass(REST))
    void users(Context *c);
    
    C_ATTR(users_GET, :Private)
    void users_GET(Context *c);
    
    C_ATTR(users_POST, :Private)
    void users_POST(Context *c);
    
    C_ATTR(users_uuid, :Path('users') :AutoArgs :ActionClass(REST))
    void users_uuid(Context *c, const QString &uuid);
    
    C_ATTR(users_uuid_GET, :Private)
    void users_uuid_GET(Context *c, const QString &uuid);
    
    C_ATTR(users_uuid_PUT, :Private)
    void users_uuid_PUT(Context *c, const QString &uuid);
    
    C_ATTR(users_uuid_DELETE, :Private)
    void users_uuid_DELETE(Context *c, const QString &uuid);

    The C_ATTR macro is used to add metadata about the class that the MOC will generate, so Cutelyst knows how to map the URLs to those functions.

    • :Local – Map method name to URL by generating /api/v1/users
    • :AutoArgs – Automatically checks the number of arguments after the Context *, in users_uuid we have only one, so the method will be called if the URL is /api/v1/users/any-thing
    • :ActionClass(REST) – Will load the REST plugin, which creates an Action class to take care of this method; ActionREST will call the other methods depending on the HTTP method used
    • :Private – Registers the action as private in Cutelyst, so that it is not directly accessible via URL

    This is enough to get automatic mapping to each function depending on the HTTP method. It is important to note that the first function (without _METHOD) is always executed; for more information see the API of ActionREST.

    For brevity I will show only the GET code for users; the rest can be seen on GitHub:

    void ApiV1::users_GET(Context *c)
    {
        QSettings s;
        const QStringList uuids = s.childGroups();
    
        c->response()->setJsonArrayBody(QJsonArray::fromStringList(uuids));
    }
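
    As a rough sketch of how the corresponding POST handler could look (the authoritative version is in the repository linked above; the use of QUuid and of QSettings groups is extrapolated from the GET handler, and setJsonObjectBody() is assumed to be available alongside setJsonArrayBody()):

    // requires #include <QUuid> in addition to the existing includes
    void ApiV1::users_POST(Context *c)
    {
        const QJsonDocument doc = QJsonDocument::fromJson(c->request()->body()->readAll());
        const QJsonObject in = doc.object();

        QJsonObject out;
        if (in.contains("name") && in.contains("age")) {
            // store the new user under a freshly generated UUID
            // (toString() keeps the braces; the real code may strip them)
            const QString uuid = QUuid::createUuid().toString();
            QSettings s;
            s.beginGroup(uuid);
            s.setValue("name", in.value("name").toString());
            s.setValue("age", in.value("age").toInt());
            s.endGroup();

            out.insert("status", "ok");
            out.insert("uuid", uuid);
        } else {
            out.insert("status", "error");
            out.insert("error", "name and age are required");
        }
        c->response()->setJsonObjectBody(out);
    }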

    After implementing all the methods, start the server:

    cutelyst2 -r --server --app-file path_to_it

    To test the API, you can send a POST with curl:

    curl -H "Content-Type: application/json" -X POST -d '{"name": "someone", "age": 32}' http://localhost:3000/api/v1/users

    Okay, now you have a REST server application, made with Qt, with one of the fastest answers in the old west 🙂

    No, it’s serious, check out the benchmarks.

    Now let’s go to part 2, which is to create the client application that will consume this API.

    Part 2 – REST Client Application

    First create a QWidgets project with a QMainWindow. The goal here is just to see how to create REST requests from Qt code, so we assume that you are already familiar with creating graphical interfaces with it.

    Our interface will be composed of:

    • 1 – QComboBox where we will list users’ UUIDs
    • 1 – QLineEdit to enter and display the user name
    • 1 – QSpinBox to enter and view user age
    • 2 – QPushButton
      • To create or update a user’s record
      • To delete the user record

    Once the interface is designed, our QMainWindow subclass needs to have a pointer to a QNetworkAccessManager, the class responsible for handling communication with network services such as HTTP and FTP. This class works asynchronously and behaves much like a browser: it will create up to 6 simultaneous connections to the same server, and if you make more requests at the same time it will put them in a queue (or pipeline them if configured to do so).

    Then create a QNetworkAccessManager *m_nam; as a member of your class so we can reuse it. Our request to obtain the list of users will be quite simple:

    QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));
    
    QNetworkReply *reply = m_nam->get(request);
    connect(reply, &QNetworkReply::finished, this, [this, reply] {
        reply->deleteLater();
        const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
        const QJsonArray array = doc.array();
    
        for (const QJsonValue &value : array) {
            ui->uuidCB->addItem(value.toString());
        }
    });

    This fills our QComboBox with the data obtained from the server via GET. Now let’s look at the registration code, which is a little more complex:

    QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
    
    QJsonObject obj {
        {"name", ui->nameLE->text()},
        ("age", ui->ageSP->value()}
    };
    
    QNetworkReply *reply = m_nam->post(request, QJsonDocument(obj).toJson());
    connect(reply, &QNetworkReply::finished, this, [this, reply] {
        reply->deleteLater();
        const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
        const QJsonObject obj = doc.object();
    
        if (obj.value("status").toString() == "ok") {
            ui->uuidCB->addItem(obj.value("uuid").toString());
        } else {
            qWarning() << "ERROR" << obj.value("error").toString();
        }
    });

    With the above code we send an HTTP request using the POST method, which, like PUT, accepts sending data to the server. It is important to tell the server what kind of data it will be dealing with, so the "Content-Type" header is set to "application/json"; Qt prints a warning on the terminal if the content type has not been defined. As soon as the server responds, we add the new UUID to the combo box so that it stays up to date without having to fetch all the UUIDs again.
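
    For completeness, and again only as a sketch (the full client is in the repository, the widget names are reused from the snippets above, and the finished handlers are omitted), updating and deleting a user follow the same pattern with QNetworkAccessManager::put() and deleteResource():

    // update the currently selected user
    QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users/" + ui->uuidCB->currentText()));
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");

    const QJsonObject obj {
        {"name", ui->nameLE->text()},
        {"age", ui->ageSP->value()}
    };
    QNetworkReply *putReply = m_nam->put(request, QJsonDocument(obj).toJson());

    // delete the currently selected user
    QNetworkReply *delReply = m_nam->deleteResource(
        QNetworkRequest(QUrl("http://localhost:3000/api/v1/users/" + ui->uuidCB->currentText())));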

    As demonstrated, QNetworkAccessManager already has methods ready for the most common REST actions; however, if you want to send a request of a type such as OPTIONS, you will have to send a custom request:

    m_nam->sendCustomRequest(request, "OPTIONS");

    Did you like the article? Help by giving a star to Cutelyst and/or supporting me on Patreon

    https://github.com/cutelyst/cutelyst

    17 May, 2018 02:20PM by dantti

    Univention Corporate Server

    ‘Vote for Apps’ End of Round 3: Did Your Favorite Make It?

    The third round of the voting series “Vote for Apps” in the Univention App Catalog is over. It ran from April 13 to May 13.

    Thank you guys very much for participating.

    This time, the apps Metasfresh, an open source ERP system, and PaperCut MF, a system for reducing printing costs by tracking and avoiding unnecessary printouts, were in the game. Metasfresh received the most votes!

    The following ranking is the result of all votes from the three rounds:

    1. SOGo
    2. Metasfresh
    3. Zammad
    4. GitLab
    5. PaperCut MF
    6. Wekan
    7. Cozy
    8. Mailman 3
    9. Dropbox Connector

    The voting results help us in prioritizing and communicating with the vendors. However, they do not guarantee the appearance of any app in the App Center. The goal of our surveys is to find out which apps you would like to use. We want to extend the App Center portfolio regularly with solutions of value to you.

    If you want to have a look at the second round results, read this article ‘Vote for Apps’ End of 2. Round: Did Your Favorite Make It?.

    Are you missing an app that you would love to have available in the App Center? Please let us know via this link:

    Suggest new app

    The post ‘Vote for Apps’ End of Round 3: Did Your Favorite Make It? appeared first on Univention.

    17 May, 2018 01:13PM by Nico Gulden

    May 16, 2018

    Ubuntu developers

    Mathieu Trudel: Building a local testing lab with Ubuntu, MAAS and netplan

    Overview

    I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible, and keeping costs to a minimum. For one thing, systems needed to be mobile if possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

    One of the core aspects of this lab is its use of the network. I have former experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that) and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall and have an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even when not supporting VLANs (which means VLAN 1 is set to be the native VLAN and to be untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room). This means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything comes from cheap eBay listings or salvaged, reused equipment.

    The Cisco hardware was specifically selected because I had prior experience with it, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware would do -- VLANs aren't absolutely required, but with many network ports on a single switch, they avoid the need for multiple switches.

    My main DNS / DHCP / boot server is a raspberry pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks: using home.example.com or staging.example.com, or even maas.example.com as a domain name following the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

    The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

    Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

    Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

    Setting up the network hardware

    I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.

    For simplicity, the ASA is always .1 in any subnet, and the switch is .2 where it is required (it was also made accessible over a serial cable from the MAAS server). The raspberrypi is always .5, and the MAAS server is always .25. DHCP ranges were designed to reserve anything .25 and below for static assignments on the staging and sandbox networks; since I use a /23 subnet for home, half of it is for static assignments and the other half is for DHCP.

    MAAS server hardware setup

    Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

    network:
        ethernets:
            enp0s25:
                addresses: []
                dhcp4: true
                optional: true
        bridges:
            maasbr0:
                addresses: [ 10.3.99.25/24 ]
                dhcp4: no
                dhcp6: no
                interfaces: [ vlan15 ]
            staging:
                addresses: [ 10.3.98.25/24 ]
                dhcp4: no
                dhcp6: no
                interfaces: [ vlan10 ]
        vlans:
            vlan15:
                dhcp4: no
                dhcp6: no
                accept-ra: no
                id: 15
                link: enp0s25
            vlan10:
                dhcp4: no
                dhcp6: no
                accept-ra: no
                id: 10
                link: enp0s25
        version: 2

    Both VLANs are behind bridges so as to allow attaching virtual machines to any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):

    <network>
      <name>maasbr0</name>
      <forward mode="bridge"/>
      <bridge name="maasbr0"/>
    </network>

    Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set:

    libvirtd_opts="-l"

    in /etc/default/libvirtd, then restart the libvirtd service.
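
    To quickly confirm that libvirt is reachable over TCP before handing it to MAAS, you can list domains remotely from another system on the sandbox VLAN (the URI simply reuses the MAAS server address from above):

    virsh -c qemu+tcp://10.3.99.25/system list --all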


    dnsmasq server

    The raspberrypi has similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is in multiple files; but here are some of the important parts:
    dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
    dhcp-hostsdir=/depot/dnsmasq/reservations
    dhcp-authoritative
    dhcp-fqdn
    # copied from maas, specify boot files per-arch.
    dhcp-boot=tag:x86_64-efi,bootx64.efi
    dhcp-boot=tag:i386-pc,pxelinux
    dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
    dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
    # pass search domains everywhere, it's easier to type short names
    dhcp-option=119,home.example.com,staging.example.com,maas.example.com
    domain=example.com
    no-hosts
    addn-hosts=/depot/dnsmasq/dns/
    domain-needed
    expand-hosts
    no-resolv
    # home network
    domain=home.example.com,10.3.0.0/23
    auth-zone=home.example.com,10.3.0.0/23
    dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
    # specify the default gw / next router
    dhcp-option=tag:home,3,10.3.0.1
    # define the tftp server
    dhcp-option=tag:home,66,10.3.0.5
    # staging is configured as above, but on 10.3.98.0/24.
    # maas.example.com: "isolated" maas network.
    # send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
    server=/maas.example.com/10.3.99.25
    # very basic tftp config
    enable-tftp
    tftp-root=/depot/tftp
    tftp-no-fail
    # set some "upstream" nameservers for general name resolution.
    server=8.8.8.8
    server=8.8.4.4


    DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:

    de:ad:be:ef:ca:fe,10.3.0.21

    I did put one per file, with meaningful filenames. This helps with debugging and making changes when network cards are changed, etc. The names used for the files do not match DNS names, but instead are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.

    Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:

    10.3.0.21 izanagi

    Again, this is used so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.


    Installing MAAS

    At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.

    The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.

    Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.

    This is necessary if you do not want MAAS to fully manage all of the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network since I would keep the server offline if I do not need it (and the home network needs to keep working if I'm away).

    Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that enabled by default, with the archive pinned to avoid accidentally installing these packages. Given that I do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:


    late_commands:
    [...]
      pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
      pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
      pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
    apt:
      sources:
        proposed.list:
          source: deb $MIRROR {{release}}-proposed main universe
    write_files:
      userconfig:
        path: /etc/cloud/cloud.cfg.d/99-users.cfg
        content: |
          system_info:
            default_user:
              lock_passwd: False
              plain_text_passwd: [REDACTED]
          users:
            - default
            - name: mtrudel
              groups: sudo
              gecos: Matt
              shell: /bin/bash
              lock-passwd: False
              passwd: [REDACTED]


    The pinning_ entries are simply added to the end of the "late_commands" section.

    For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:

    maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
    maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"

    The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.
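
    Composing can also be scripted with the same CLI; roughly like the following, where the pod ID and the resource parameters are assumptions to be checked against the output of "maas local pods read" and the CLI help for your MAAS release:

    maas local pod compose 1 cores=2 memory=2048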


    Setting up the client

    Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying to use the ubuntu user (or another user you already configured):

    ssh ubuntu@vocal-toad.maas.example.com

    There is an inconvenience with using MAAS to control virtual machines like this: they are easy to reinstall, so their host key hashes will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here, I'm sharing the relevant parts of the configuration file I use:

    CanonicalDomains home.example.com
    CanonicalizeHostname yes
    CanonicalizeFallbackLocal no
    HashKnownHosts no
    UseRoaming no
    # canonicalize* options seem to break github for some reason
    # I haven't spent much time looking into it, so let's make sure it will go through the
    # DNS resolution logic in SSH correctly.
    Host github.com
      Hostname github.com.
    Host *.maas
      Hostname %h.example.com
    Host *.staging
      Hostname %h.example.com
    Host *.maas.example.com
      User ubuntu
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null

    Host *.staging.example.com
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
    Host *.lxd
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
      ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
    Host *.libvirt
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
      ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p

    As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.

    The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.

    Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:

    vocal-toad.maas
    And the system will be reachable, and you will not get any warning about known host hashes (but do note that this is specific to a sandbox environment; you definitely want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think).

    It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:

    #!/bin/sh
    exec ssh "$1.maas"

    I saved this as "sandbox" in ~/bin and made it executable.

    And with this, the lab is ready.

    Usage

    To connect to a deployed system, one can now do the following:


    $ sandbox vocal-toad
    Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
    [...]
    ubuntu@vocal-toad:~$
    ubuntu@vocal-toad:~$ id mtrudel
    uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)

    Mobility

    One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").

    All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form. The smaller switch is rated at 0.5 amps, and two laptops do not pull very much power.

    Next steps

    One of the issues that remains with this setup is that it is limited to either starting MAAS images or starting images that are custom built and hooked up to the raspberry pi, which leads to a high effort to integrate new images:
    • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
    • Automate customizing installed packages based on tags applied to the machines.
      • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
      • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
        • This could be done using chef or puppet, but will require setting up the infrastructure for it.
      • Integrate automatic installation of snaps.
    • Load new images into the raspberry pi automatically for netboot / preseeded installs
      • I have scripts for this, but they will take time to adapt
      • Space on such a device is at a premium, there must be some culling of old images

    16 May, 2018 10:47PM by Mathieu Trudel-Lapierre (noreply@blogger.com)

    VyOS

    On security of GRE/IPsec scenarios

    As we've already discussed, there are many ways to setup GRE (or something else) over IPsec and they all have their advantages and disadvantages. Recently an issue was brought to my attention: which ones are safe against unencrypted GRE traffic being sent?

    The reason this issue can appear at all is that GRE and IPsec are related to each other more like routing and NAT: in some setups their configuration has to be carefully coordinated, but in general they can easily be used without each other. Lack of tight coupling between features allows greater flexibility, but it may also create situations when the setup stops working as intended without a clear indication as to why it happened.

    Let's review the knowingly safe scenarios:

    VTI

    This one is least flexible, but also foolproof by design: the VTI interface (which is secretly simply IPIP) is brought up only when an IPsec tunnel associated with it is up, and goes down when the tunnel goes down. No traffic will ever be sent over a VTI interface until IKE succeeds.
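
    For reference, a minimal VTI configuration on VyOS looks roughly like the following; the addresses and the ESP group name are placeholders, and the peer is assumed to already have matching IKE/ESP groups configured:

    set interfaces vti vti0 address 10.255.12.1/30

    set vpn ipsec site-to-site peer 203.0.113.50 vti bind vti0
    set vpn ipsec site-to-site peer 203.0.113.50 vti esp-group MY-ESP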

    Tunnel sourced from a loopback address

    If you have missed it, the basic idea of this setup is the following:

    set interfaces dummy dum0 address 192.168.1.100/32
    
    set interfaces tunnel tun0 local-ip 192.168.1.100
    set interfaces tunnel tun0 remote-ip 192.168.1.101 # assigned to dum0 on the remote side
    
    set vpn ipsec site-to-site peer 203.0.113.50 tunnel 1 local prefix 192.168.1.100/32
    set vpn ipsec site-to-site peer 203.0.113.50 tunnel 1 remote prefix 192.168.1.101/32
    

    Most often it's used when the routers are behind NAT, or one side lacks a static address, which makes selecting traffic for encryptions by protocol alone impossible. However, it also introduces tight coupling between IPsec and GRE: since the remote end of the GRE tunnel can only be reached via an IPsec tunnel, no communication between the routers over GRE is possible unless the IPsec tunnel is up. If you fear that any packets may be sent via the default route, you can nullroute the IPsec tunnel network to be sure.

    The complicated case

    Now let's examine the simplest kind of setup:

    set interfaces tunnel tun0 local-ip 192.0.2.100 # WAN address
    set interfaces tunnel tun0 remote-ip 203.0.113.200
    
    set vpn ipsec site-to-site peer 203.0.113.200 tunnel 1 protocol gre
    

    In this case IPsec is set up to encrypt the GRE traffic to 203.0.113.200, but the GRE tunnel itself can work without IPsec. In fact, it will work without IPsec, just without encryption, and that is the concern for some people. If the IPsec tunnel goes down due to misconfiguration, the setup will fall back to plain, unencrypted GRE.

    What can you do about it?

    As a user, if your requirement is to prevent unencrypted traffic from ever being sent, you should use VTI or use loopback addresses for tunnel endpoints.

    For developers this question is more complicated.

    What should be done about it?

    The opinions are divided. I'll summarize the arguments here.

    Arguments for fixing it:

    • Cisco does it that way (attempts to detect that GRE and IPsec are related — at least in some implementations and at least when it's referenced as IPsec profile in the GRE tunnel)
    • The current behaviour is against user's intentions

    Arguments against fixing it:

    • Attempts to guess user's intentions are doomed to fail at least some of the time (for example, what if a user intentionally brings an IPsec tunnel down to isolate GRE setup issues?)
    • The only way to guarantee that unencrypted traffic is never sent is checking for a live SA matching protocol and source before forwarding every packet — and that's not good for performance.

    Practical considerations:

    • Since IKE is in the userspace, the kernel can't even know that an SA is supposed to exist until IKE succeeds: automatic detection would be a big change that is unlikely to be accepted in the mainline kernel.
    • Configuration changes required to avoid the issue are simple

    If you have any thoughts on the issue, please share with us!

    16 May, 2018 07:33PM by Daniil Baturin

    Ubuntu developers

    Jonathan Carter: Video Channel Updates

    Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

    I started out uploading to my YouTube channel which has been dormant for a really long time, and then last week, I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

    I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. I’m not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing Debian meeting videos like the DebConf videos.

    My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

    • Video Logs: Almost like a blog, in video format.
    • Howto: Howto videos.
    • Debian Package of the Day: Exploring packages in the Debian archive.
    • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
    • What’s the Difference: Just comparing 2 or more things.
    • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

    It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

    I also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

    You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

    Other than that I’m open to ideas, thanks for reading :)

    16 May, 2018 06:19PM

    May 15, 2018

    Sina Mashek: I miss contributing code

    Something I need to force myself to do again is make tools that are potentially useful for others. Or better yet, find tools that someone else made that do the same/similar thing, and contribute to them!

    At some point between my being steeped in the Ubuntu Beginners Team [1] and now, I stopped contributing heavily to existing projects, and got sucked into the thrill of building shit for just myself.

    The funniest part is that, having looked around lately, I have ditched almost everything I built myself because I found that someone else, or another team of people, had already built what I wanted – but better.

    Learning how to build those things was great and all, but I could have probably learned a little faster (and other non-programming skills) by contributing to an existing project.

    Basically, I think it’s time for me to start contributing to the tools I use again.

    [1] whah, the UBT hasn’t been around for such a long time now!

    15 May, 2018 09:00PM

    Cumulus Linux

    Cumulus content roundup: May

    Hope you brought your networking acronyms dictionary with you – this month’s Cumulus content roundup is going full tech-geek and we’re NOT ashamed! We’re brushing up on EVPN, ECMP, DWDM and TGIF (okay, not the last one. But did that make you LOL?) See a term that makes you go WTF? Don’t worry — we’ve got webinars, videos, blog posts and more to help you differentiate between BGP and OMG.

    From Cumulus Networks:

    EVPN content hub: Deploying EVPN enables you to enhance your layer 3 data center with benefits such as multitenancy, scalability, ARP suppression and more. Don’t know where to begin? Browse this EVPN resources page to learn more about how you can incorporate EVPN into your Cumulus network.

    Celebrating ECMP in Linux — part one: Equal Cost Multi-Path (ECMP) routes are a big component of all the super-trendy data center network designs that are en vogue right now. Read part one of this series about ECMP’s history, how it’s evolved and what Cumulus is doing to help.

    Networking how-to video — What is Voyager?: Voyager is a Dense Wavelength Division Multiplexing (DWDM) platform Facebook brought to the Telecom Infra Project (TIP), bringing the first open packet optical platform to the industry. Check out the video to hear technical expert, Diane Patton, explain the technology in more detail.

    Achieve actionable insight from the host to the switch: Cumulus NetQ reduces network operational headache by providing actionable insight into every trace and hop in the network. Watch this webinar to learn how you can simplify container management.

    NetQ + Kubernetes: bringing container visibility with the leading container orchestrator: If you want to take a deeper dive into NetQ 1.3’s Kubernetes support, look no further. Read this blog post to learn how NetQ’s integration with Kubernetes works and what you can do with it.

    Want more Cumulus content? Check out our learn center, resources page and solutions section for everything you need!

    From the World Wide Web:

    10 competitors Cisco just can’t kill off: Creating a short list of key Cisco competitors is no easy task, as the company now competes in multiple markets. In this case we tried to pick companies that have been around a while or firms that have developed key technologies that directly impacted the networking giant. Check out who made the list.

    Cycle Pedals Bare Metal Container Orchestrator as Kubernetes Alternative: Cycle.io is tacking its newly developed, bare-metal focused container orchestration platform to Packet’s bare metal compute, network, and storage resources. The combination targets organizations that want all the benefits of containers without having to wade into the growing morass of Kubernetes. Read more.

    Red Hat and Microsoft bring OpenShift to Azure: At Red Hat Summit in San Francisco, Red Hat and Microsoft announced they were bringing Red Hat OpenShift, Red Hat’s Kubernetes container orchestration platform, to Microsoft’s Azure, Microsoft’s public cloud. Find out more about the announcement.

    The post Cumulus content roundup: May appeared first on Cumulus Networks Blog.

    15 May, 2018 07:36PM by Madison Emery

    Ubuntu developers

    Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2018

    Like each month, here comes a report about the work of paid contributors to Debian LTS.

    Individual reports

    In April, about 183 work hours were dispatched among 13 paid contributors. Their reports are available:

    • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for May).
    • Antoine Beaupré did 12h.
    • Ben Hutchings did 17 hours (out of 15h allocated + 2 remaining hours).
    • Brian May did 10 hours.
    • Chris Lamb did 16.25 hours.
    • Emilio Pozuelo Monfort did 11.5 hours (out of 16.25 hours allocated + 5 remaining hours, thus keeping 9.75 extra hours for May).
    • Holger Levsen did nothing (out of 16.25 hours allocated + 16.5 hours remaining, thus keeping 32.75 extra hours for May). He did not get hours allocated for May and is expected to catch up.
    • Hugo Lefeuvre did 20.5 hours (out of 16.25 hours allocated + 4.25 remaining hours).
    • Markus Koschany did 16.25 hours.
    • Ola Lundqvist did 11 hours (out of 14 hours allocated + 9.5 remaining hours, thus keeping 12.5 extra hours for May).
    • Roberto C. Sanchez did 7 hours (out of 16.25 hours allocated + 15.75 hours remaining, but immediately gave back the 25 remaining hours).
    • Santiago Ruano Rincón did 8 hours.
    • Thorsten Alteholz did 16.25 hours.

    Evolution of the situation

    The number of sponsored hours did not change. But a few sponsors interested in having more than 5 years of support should join LTS next month, since this was a prerequisite to benefit from extended LTS support. I did update Freexian’s website to show this as a benefit offered to LTS sponsors.

    The security tracker currently lists 20 packages with a known CVE and the dla-needed.txt file lists 16. At two weeks from Wheezy’s end-of-life, the number of open issues is close to a historical low.

    Thanks to our sponsors

    New sponsors are in bold.


    15 May, 2018 03:32PM

    LiMux

    Apply for school transportation online now (Schülerbeförderung jetzt online beantragen)

    Following the Meldebescheinigung (registration certificate), a further online service is now available to the citizens of the City of Munich: the application for school transportation, also known as the „Antrag auf Kostenfreiheit des Schulweges“ (application for a cost-free route to school)! With the introduction of this online application, the project E- … Continue reading

    The post Schülerbeförderung jetzt online beantragen appeared first on Münchner IT-Blog.

    15 May, 2018 10:15AM by Lisa Zech

    May 14, 2018


    Ubuntu developers

    Jono Bacon: Video: How to Manage Failure and Poor Decisions – A Practical Guide

    I realized I haven’t been putting many videos online recently. As such, I have started recording some instructional and coaching videos and putting them online, which I hope are useful to you folks.

    To get started, I wanted to touch on the topic of handling failure and poor decisions in a way that helps you understand what went wrong and leads towards better outcomes. This video introduces the issue, delves into how to unpick and understand the components of failure, and offers some practical recommendations for concrete next steps after this assessment.

    Here is the video:


    The post Video: How to Manage Failure and Poor Decisions – A Practical Guide appeared first on Jono Bacon.

    14 May, 2018 09:12PM

    Daniel Pocock: A closer look at power and PowerPole

    The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

    The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable to ham radio; it can be used for any demo or exhibit involving electronics or electrical parts like motors.

    People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

    OSCAL organizer urgently looking for an Apple MacBook PSU

    In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

    If only Apple used PowerPole...

    Why batteries?

    The first question many people asked is why use batteries and not a power supply. There are two answers for this: portability and availability. Many hams like to operate their radios away from home. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

    Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

    Why PowerPole?

    Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

    WICEN, ARES / RACES and RAYNET-UK are some of the well-known groups in the world of emergency communications and they all recommend PowerPole.

    Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

    The pen is mightier than the sword, but what about the crimper?

    The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

    Here are some packets of PowerPoles in every size:

    Example cables

    It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

    Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

    Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

    Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

    Distributing power between multiple devices

    There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

    In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

    Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

    Need help crimping your cables?

    If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

    I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

    14 May, 2018 07:25PM


    Ubuntu

    Ubuntu Weekly Newsletter Issue 527

    Welcome to the Ubuntu Weekly Newsletter, Issue 527 for the week of May 6 – 12, 2018. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    14 May, 2018 07:10PM by krytarik


    Ubuntu developers

    Ubuntu Studio: Ubuntu Studio Development News – May 14, 2018

    Plans for Ubuntu Studio 18.10 – Cosmic Cuttlefish For Ubuntu 18.10, we have been starting to think outside-the-box. There is something to be said of remaining with what you have and refining it, but staying in one spot can lead quickly to stagnation. Coming up with new ideas and progressing forward with those ideas is […]

    14 May, 2018 06:44PM


    Tails

    Call for testing: Additional Software feature

    You can help Tails! The beta version for the Additional Software feature is ready for testing. We are very excited.

    What's new in the Additional Software feature Beta?

    We've designed and implemented a user interface to select additional software packages and make additional software persistent.

    Users can now decide, for each additional piece of software they install in Tails, whether it should be installed automatically in the future.

    How to test Tails Additional Software feature Beta?

    1. Download and install the ISO image on a USB stick, start from the stick and configure a persistent volume. Reboot and use the package manager to install a package currently not in Tails (example: Mumble).

    2. Configure if you want this package to be installed automatically on each boot.

    3. Restart Tails and use Mumble (or another program you've just installed).

    4. Imagine that over time you have installed several additional programs (Mumble, VLC, etc.) but don’t remember the exact list. How would you check your list of additional software?

    5. How would you stop installing Mumble every time you start Tails?

    We are interested in your feedback on bugs and usability of this feature.

    • Are there any notifications that are not clear?
    • Did you run into technical issues?
    • Were you able to modify the installation details as asked in the last question?
    • Do you have ideas on packages that we should propose to Tails users for installation?

    Please send feedback emails to tails-testers@boum.org.

    Get Tails Additional Software Feature Beta

    Tails Additional Software feature beta ISO image

    Known issues

    We've identified a list of known issues #15567 among which

    • Additional Software gets opened multiple times #15528
    • Remove and Cancel buttons don't work after escaping password prompt #15581

    We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

    14 May, 2018 02:00PM


    Ubuntu developers

    Lubuntu Blog: This Week in Lubuntu Development #5

    Here is the fifth issue of This Week in Lubuntu Development. You can read the previous issue here. Changes General Lubuntu 18.04 was released! Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine and Lubuntu Translations Team Lead Marcin Mikołajczak. You can see the commits they have made here. We need […]

    14 May, 2018 02:56AM

    May 13, 2018


    SparkyLinux

    Sparky 4.8

    New live/install iso images of SparkyLinux 4.8 “Tyche” are out.
    Sparky 4 is based on Debian stable line “Stretch” and built around the Openbox window manager.

    Sparky 4.8 offers a fully featured operating system with a lightweight LXDE desktop environment, plus minimal images of MinimalGUI (Openbox) and MinimalCLI (text mode) which let you install the base system with a desktop of your choice and a minimal set of applications, via the Sparky Advanced Installer.

    Sparky 4.8 armhf offers a fully featured operating system for single-board mini computers such as the Raspberry Pi, with the Openbox window manager as default, and a minimal, text mode CLI image to customize it as you like.

    All other existing desktops are fully supported and can be installed via the Minimal iso images or, after installing Sparky on a hard drive, via the APTus-> Desktop tool.

    Changelog:
    – full system upgrade from Debian stable repos as of May 11, 2018
    – Linux kernel 4.9.88 (PC)
    – Linux kernel 4.14.34 (ARM)
    – Calamares 3.1.12 with the possibility of installing the live system on an encrypted disk
    – added a new live system boot option which lets you choose your localization
    – Sparky tools which need root access now use pkexec instead of gksu/gksudo/kdesudo/etc.
    – added packages: xinit (provides startx command) and bleachbit (for cleaning the system)
    – sparky advanced installer features 6 localizations now: Brazilian, English, German, Italian, Polish and Portuguese
    – APTus 0.4.x has been extended with new, additional applications and tools that can be easily installed
    – removed packages: gksu, gdebi, reportbug, sparky-fontset

    There is no need to reinstall existing Sparky installations of 4.6, 4.7 and 4.8 RC; simply perform a full system upgrade.
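
    For reference, a minimal sketch of such a full system upgrade using the generic Debian apt route (run as a user with sudo rights; Sparky's own tools may offer the same):

    sudo apt update
    sudo apt full-upgrade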

    Sparky PC:
    user: live
    password: live
    root password is empty

    Sparky ARM:
    user: pi
    password: sparky
    root password: toor

    New iso images of the stable edition can be downloaded from the download/stable page.

    Known issues:
    – installing Openbox via the MinimalCLI image causes a problem with the ‘obmenu-generator’ package installation. To solve it, after booting Sparky from the hard drive, reinstall the package:
    sudo apt update
    sudo apt install --reinstall obmenu-generator

    and install the ‘sparky-desktop-openbox’ meta package as well.
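
    Presumably that is the same apt workflow, something like:

    sudo apt install sparky-desktop-openbox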

     

    13 May, 2018 06:08PM by pavroo

    May 12, 2018


    Ubuntu developers

    Andrea Corbellini: 11 years of Ubuntu membership

    It's been 11 years and 1 month since I was awarded with official Ubuntu membership. I will never forget that day: as a kid I had to write about myself on IRC, in front of the Community Council members and answer their questions in a language that was not my primary one. I must confess that I was a bit scared that evening, but once I made it, it felt so good. It felt good not just because of the award itself, but rather because that was the recognition that I did something that mattered. I did something useful that other people could benefit from. And for me, that meant a lot.

    So much time has passed since then. So many things have changed both in my life and around me, for better or worse. So many that I cannot even enumerate all of them. Nonetheless, deep inside of me, I still feel like that young kid: curious, always ready to experiment, full of hopes and uncertain (but never scared) about the future.

    Through the years I received the support of a bunch of people who believed in me, and I thank them all. But if today I feel so hopeful it's undoubtedly thanks to one person in particular, a person who holds a special place in my life. A big thank you goes to you.

    12 May, 2018 09:30PM

    May 11, 2018

    Ubuntu Podcast from the UK LoCo: S11E10 – Ten Little Ladybugs - Ubuntu Podcast

    This week we’ve been smashing up a bathroom like rock stars. We discuss the Ubuntu 18.04 (Bionic Beaver) LTS release, serve up some command line love and go over your feedback.

    It’s Season 11 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    • We discuss what we’ve been up to recently:
      • Mark has been smashing his bathroom.
    • We discuss the Ubuntu 18.04 (Bionic Beaver) LTS release.

    • We share a Command Line Lurve:

      • yes – repeatedly output “y” or specified string for piping into interactive programs
    yes | fsck /var
    
    • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • Image credit: Kirstyn Paynter

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    11 May, 2018 02:00PM

    Simos Xenitellis: A closer look at Chrome OS using LXD to run Linux GUI apps (Project Crostini)

    At Google I/O 2018, one of the presentations was What’s new in Android apps for Chrome OS (Google I/O ’18). The third and most exciting developer tool shown in the presentation was the ability to run graphical Linux apps on Chrome OS. Here is a screenshot of a native Linux terminal application, as shown in the presentation.

    They were so excited that the presenter said it would knock the socks off of the participants, and they had arranged a giveaway of socks: actual sock swag for those attending the presentation 8-).

    The way that they get the GUI apps from the LXD container to appear on the screen is similar to the one described in

    How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

    Project Crostini

    Project Crostini is the Chrome OS project to add support to run Linux GUI apps on Chrome OS.

    The components that facilitate Project Crostini can be found at https://github.com/lstoll/cros-crostini. That page has instructions for those who wanted to enable the running of Linux GUI apps on Chrome OS while Project Crostini was still under development. Lincoln Stoll dissected the source of Chrome OS and created a helpful list of the involved repositories.

    The basic component is the Chrome OS Virtual Machine Monitor (crosvm), which runs untrusted operating systems through Linux’s KVM interface. The Linux distribution would run in a VM. These repositories make reference to the X server, XWayland and Wayland. There is a repository called sommelier, which is a nested Wayland compositor with X11 forwarding support. It needs more searching to figure out where that source code ended up in the Chrome OS repository and what is actually being used.

    Update #1: Here are the vm_tools in Chrome OS. They include garcon, a service that gets added to the container and communicates with another service outside of the container (vm_concierge).

    What is important is that LXD runs in this VM and is configured to launch a machine container with a Linux distribution. We will go into this in depth.

    LXD in Project Crostini

    Here is the file that does the initial configuration of the LXD service. It preseeds LXD with the following configuration.

    1. It uses a storage pool with the btrfs filesystem.
    2. It sets up a private bridge for networking.
    3. It configures the default LXD profile with relevant settings that will be applied to the container when it gets created.
      1. The container will not autostart when the Chromebook is restarted. It will get started manually.
      2. There will be private networking.
      3. The directory /opt/google/cros-containers of the host gets shared into the container as both /opt/google/cros-containers and /opt/google/garcon.
      4. The container will be able to get the IP address of the host from the file /dev/.host_ip (inside the container).
      5. The Wayland socket of the VM is shared to the container. This means that GUI applications that run in the container can appear in the X server running in the VM of Chrome OS.
      6. The /dev/wl0 device file of the host is shared into the container as /dev/wl0, with permissions 0666. That’s the wireless interface.
    # Storage pools
    storage_pools:
    - name: default
      driver: btrfs
      config:
        source: /mnt/stateful/lxd/storage-pools/default
    
    # Network
    # IPv4 address is configured by the host.
    networks:
    - name: lxdbr0
      type: bridge
      config:
        ipv4.address: none
        ipv6.address: none
    
    # Profiles
    profiles:
    - name: default
      config:
        boot.autostart: false
      devices:
        root:
          path: /
          pool: default
          type: disk
        eth0:
          nictype: bridged
          parent: lxdbr0
          type: nic
        cros_containers:
          path: /opt/google/cros-containers
          source: /opt/google/cros-containers
          type: disk
        garcon:
          path: /opt/google/garcon
          source: /opt/google/cros-containers
          type: disk
        host-ip:
          path: /dev/.host_ip
          source: /run/host_ip
          type: disk
        wayland-sock:
          path: /dev/.wayland-0
          source: /run/wayland-0
          type: disk
        wl0:
          source: /dev/wl0
          type: unix-char
          mode: 0666
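
    Incidentally, a preseed like this can be fed to any recent LXD installation on standard input. If you wanted to experiment with a similar (trimmed) document on your own machine, it would look roughly as follows; note that the Chrome OS-specific device paths above do not exist on a normal desktop and would have to be removed first, and the file name here is just an example.

    $ cat my-preseed.yaml | sudo lxd init --preseed
    $ lxc profile show default    # verify the profile that the preseed configured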

    Here it creates the btrfs storage pool (termina-lxd-scripts/files/stateful_setup.sh),

     mkfs.btrfs /dev/vdb || true # The disk may already be formatted.
     mount -o user_subvol_rm_allowed /dev/vdb /mnt/stateful

    With the completed LXD configuration, let’s see how the container gets created. It is this file, termina-lxd-scripts/files/run_container.sh. Specifically,

    1. It configures an LXD remote URL, https://storage.googleapis.com/cros-containers, that has the container image. It is accessible through the simplestreams protocol.
    2. It launches the container image with
      lxc launch google:debian/stretch

    How to try google:debian/stretch on our own LXD installation

    Let’s delve deeper into the container image that is used in Chrome OS. For this, we add the LXD remote and then launch the container.

    First, let’s add the LXD remote.

    $ lxc remote add google https://storage.googleapis.com/cros-containers --protocol=simplestreams

    Let’s verify that it has been added.

    $ lxc remote list
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | NAME            | URL                                            | PROTOCOL      | AUTH TYPE | PUBLIC | STATIC |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | google          | https://storage.googleapis.com/cros-containers | simplestreams |           | YES    | NO     |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | images          | https://images.linuxcontainers.org             | simplestreams |           | YES    | NO     |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | local (default) | unix://                                        | lxd           | tls       | NO     | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | ubuntu          | https://cloud-images.ubuntu.com/releases       | simplestreams |           | YES    | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | ubuntu-daily    | https://cloud-images.ubuntu.com/daily          | simplestreams |           | YES    | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+

    What’s in the google: container image repository?

    $ lxc image list google:
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+
    | ALIAS                   | FINGERPRINT  | PUBLIC | DESCRIPTION                                           | ARCH   | SIZE     | UPLOAD DATE                  |
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+
    | debian/stretch (3 more) | 706f2390a7f6 | yes    | Debian for Chromium OS stretch amd64 (20180504_22:19) | x86_64 | 194.82MB | May 4, 2018 at 12:00am (UTC) |
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+

    It is a single image for x86_64, based on Debian Stretch (20180504_22:19).

    Now let’s look at those details for the specific container image from Google.

    $ lxc image show google:debian/stretch
    auto_update: false
    properties:
     architecture: amd64
     description: Debian for Chromium OS stretch amd64 (20180504_22:19)
     os: Debian for Chromium OS
     release: stretch
     serial: "20180504_22:19"
    public: true

    Compare those details with the stock debian/stretch container image,

    $ lxc image show images:debian/stretch
    auto_update: false
    properties:
     architecture: amd64
     description: Debian stretch amd64 (20180511_05:25)
     os: Debian
     release: stretch
     serial: "20180511_05:25"
    public: true

    Can we then get detailed info of the Google container image?

    $ lxc image info google:debian/stretch
    Fingerprint: 706f2390a7f67655df8d0d5d46038ed993ad28cb161648781fbd60af4b52dd76
    Size: 194.82MB
    Architecture: x86_64
    Public: yes
    Timestamps:
     Created: 2018/05/04 00:00 UTC
     Uploaded: 2018/05/04 00:00 UTC
     Expires: never
     Last used: never
    Properties:
     serial: 20180504_22:19
     description: Debian for Chromium OS stretch amd64 (20180504_22:19)
     os: Debian for Chromium OS
     release: stretch
     architecture: amd64
    Aliases:
     - debian/stretch/default
     - debian/stretch/default/amd64
     - debian/stretch
     - debian/stretch/amd64
    Cached: no
    Auto update: disabled

    Compare those details with the stock debian/stretch container image.

    $ lxc image info images:debian/stretch
    Fingerprint: 07341ea710a44508c12e5b3b437bd13fa334e56b3c4e2808c32fd7e6b12df8d1
    Size: 110.22MB
    Architecture: x86_64
    Public: yes
    Timestamps:
     Created: 2018/05/11 00:00 UTC
     Uploaded: 2018/05/11 00:00 UTC
     Expires: never
     Last used: never
    Properties:
     os: Debian
     release: stretch
     architecture: amd64
     serial: 20180511_05:25
     description: Debian stretch amd64 (20180511_05:25)
    Aliases:
     - debian/stretch/default
     - debian/stretch/default/amd64
     - debian/9/default
     - debian/9/default/amd64
     - debian/stretch
     - debian/stretch/amd64
     - debian/9
     - debian/9/amd64
    Cached: no
    Auto update: disabled

    Up to now we have learned that the Google debian/stretch container image carries roughly 85MB of extra (compressed) files compared to the stock image (194.82MB versus 110.22MB).

    It’s time to launch a container with google:debian/stretch!

    $ lxc launch google:debian/stretch chrome-os-linux
    Creating chrome-os-linux
    Starting chrome-os-linux 
    $

    Now, get a shell into this container.

    $ lxc exec chrome-os-linux bash
    root@chrome-os-linux:~#

    There is no non-root account,

    root@chrome-os-linux:~# ls /home/
    root@chrome-os-linux:~#
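
    If you want to poke around in this locally launched copy as a regular user, you can create one yourself. A minimal sketch follows; the username is just an example, and this is not how Chrome OS itself provisions the user account.

    root@chrome-os-linux:~# adduser --gecos "" tester
    root@chrome-os-linux:~# usermod -aG sudo tester    # optional: give the account sudo rights
    root@chrome-os-linux:~# su - tester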

    Differences with stock debian/stretch image

    These are the Chrome OS-specific packages that the container image has. They are not architecture-specific (architecture: all).

    ii cros-adapta 0.1 all Chromium OS GTK Theme This package provides symlinks
    ii cros-apt-config 0.12 all APT config for Chromium OS integration. This package
    ii cros-garcon 0.10 all Chromium OS Garcon Bridge. This package provides the
    ii cros-guest-tools 0.12 all Metapackage for Chromium OS integration. This package has
    ii cros-sommelier 0.11 all sommelier base package. This package installs unitfiles
    ii cros-sommelier-config 0.11 all sommelier config for Chromium OS integration. This
    ii cros-sudo-config 0.10 all sudo config for Chromium OS integration. This package
    ii cros-systemd-overrides 0.10 all systemd overrides for running under Chromium OS. This
    ii cros-ui-config 0.11 all UI integration for Chromium OS This package installs
    ii cros-unattended-upgrades 0.10 all Unattended upgrades config. This package installs an
    ii cros-wayland 0.10 all Wayland extras for virtwl in Chromium OS. This package

    There are 305 additional packages in total in the Chrome OS container image of Debian stretch, compared to the stock Debian Stretch image.
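
    One way to reproduce such a comparison (a rough sketch; it assumes you have also launched a stock container, here called stock-stretch) is to dump the package list from each container and diff them. The resulting list of additional packages follows.

    $ lxc launch images:debian/stretch stock-stretch
    $ lxc exec chrome-os-linux -- dpkg-query -W -f='${binary:Package}\n' | sort > google-packages.txt
    $ lxc exec stock-stretch -- dpkg-query -W -f='${binary:Package}\n' | sort > stock-packages.txt
    $ comm -23 google-packages.txt stock-packages.txt    # packages only in the Google image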

    adwaita-icon-theme
    apt-transport-https
    apt-utils
    at-spi2-core
    bash-completion
    ca-certificates
    cpp
    cpp-6
    cros-adapta
    cros-apt-config
    cros-garcon
    cros-guest-tools
    cros-sommelier
    cros-sommelier-config
    cros-sudo-config
    cros-systemd-overrides
    cros-ui-config
    cros-unattended-upgrades
    cros-wayland
    curl
    dbus-x11
    dconf-cli
    dconf-gsettings-backend:amd64
    dconf-service
    desktop-file-utils
    dh-python
    distro-info-data
    fontconfig
    fontconfig-config
    fonts-croscore
    fonts-dejavu-core
    fonts-roboto
    fonts-roboto-hinted
    glib-networking:amd64
    glib-networking-common
    glib-networking-services
    gnome-icon-theme
    gsettings-desktop-schemas
    gtk-update-icon-cache
    hicolor-icon-theme
    i965-va-driver:amd64
    less
    libapt-inst2.0:amd64
    libasound2:amd64
    libasound2-data
    libasound2-plugins:amd64
    libasyncns0:amd64
    libatk-bridge2.0-0:amd64
    libatk1.0-0:amd64
    libatk1.0-data
    libatspi2.0-0:amd64
    libauthen-sasl-perl
    libavahi-client3:amd64
    libavahi-common-data:amd64
    libavahi-common3:amd64
    libavcodec57:amd64
    libavresample3:amd64
    libavutil55:amd64
    libcairo-gobject2:amd64
    libcairo2:amd64
    libcolord2:amd64
    libcroco3:amd64
    libcrystalhd3:amd64
    libcups2:amd64
    libcurl3:amd64
    libcurl3-gnutls:amd64
    libdatrie1:amd64
    libdconf1:amd64
    libdrm-amdgpu1:amd64
    libdrm-intel1:amd64
    libdrm-nouveau2:amd64
    libdrm-radeon1:amd64
    libdrm2:amd64
    libegl1-mesa:amd64
    libencode-locale-perl
    libepoxy0:amd64
    libffi6:amd64
    libfile-basedir-perl
    libfile-desktopentry-perl
    libfile-listing-perl
    libfile-mimeinfo-perl
    libflac8:amd64
    libfont-afm-perl
    libfontconfig1:amd64
    libfontenc1:amd64
    libfreetype6:amd64
    libgail-common:amd64
    libgail18:amd64
    libgbm1:amd64
    libgdbm3:amd64
    libgdk-pixbuf2.0-0:amd64
    libgdk-pixbuf2.0-common
    libgl1-mesa-dri:amd64
    libgl1-mesa-glx:amd64
    libglapi-mesa:amd64
    libglib2.0-0:amd64
    libglib2.0-data
    libgmp10:amd64
    libgnutls30:amd64
    libgomp1:amd64
    libgraphite2-3:amd64
    libgsm1:amd64
    libgtk-3-0:amd64
    libgtk-3-bin
    libgtk-3-common
    libgtk2.0-0:amd64
    libgtk2.0-bin
    libgtk2.0-common
    libharfbuzz0b:amd64
    libhogweed4:amd64
    libhtml-form-perl
    libhtml-format-perl
    libhtml-parser-perl
    libhtml-tagset-perl
    libhtml-tree-perl
    libhttp-cookies-perl
    libhttp-daemon-perl
    libhttp-date-perl
    libhttp-message-perl
    libhttp-negotiate-perl
    libice6:amd64
    libicu57:amd64
    libidn2-0:amd64
    libio-html-perl
    libio-socket-ssl-perl
    libipc-system-simple-perl
    libisl15:amd64
    libjack-jackd2-0:amd64
    libjbig0:amd64
    libjpeg62-turbo:amd64
    libjson-glib-1.0-0:amd64
    libjson-glib-1.0-common
    liblcms2-2:amd64
    libldap-2.4-2:amd64
    libldap-common
    libllvm3.9:amd64
    libltdl7:amd64
    liblwp-mediatypes-perl
    liblwp-protocol-https-perl
    libmailtools-perl
    libmp3lame0:amd64
    libmpc3:amd64
    libmpdec2:amd64
    libmpfr4:amd64
    libnet-dbus-perl
    libnet-http-perl
    libnet-smtp-ssl-perl
    libnet-ssleay-perl
    libnettle6:amd64
    libnghttp2-14:amd64
    libnuma1:amd64
    libogg0:amd64
    libopenjp2-7:amd64
    libopus0:amd64
    liborc-0.4-0:amd64
    libp11-kit0:amd64
    libpango-1.0-0:amd64
    libpangocairo-1.0-0:amd64
    libpangoft2-1.0-0:amd64
    libpciaccess0:amd64
    libperl5.24:amd64
    libpixman-1-0:amd64
    libpng16-16:amd64
    libpolkit-agent-1-0:amd64
    libpolkit-backend-1-0:amd64
    libpolkit-gobject-1-0:amd64
    libproxy1v5:amd64
    libpsl5:amd64
    libpulse0:amd64
    libpulsedsp:amd64
    libpython3-stdlib:amd64
    libpython3.5-minimal:amd64
    libpython3.5-stdlib:amd64
    libreadline7:amd64
    librest-0.7-0:amd64
    librsvg2-2:amd64
    librsvg2-common:amd64
    librtmp1:amd64
    libsamplerate0:amd64
    libsasl2-2:amd64
    libsasl2-modules:amd64
    libsasl2-modules-db:amd64
    libsensors4:amd64
    libshine3:amd64
    libsm6:amd64
    libsnappy1v5:amd64
    libsndfile1:amd64
    libsoup-gnome2.4-1:amd64
    libsoup2.4-1:amd64
    libsoxr0:amd64
    libspeex1:amd64
    libspeexdsp1:amd64
    libsqlite3-0:amd64
    libssh2-1:amd64
    libssl1.1:amd64
    libswresample2:amd64
    libtasn1-6:amd64
    libtdb1:amd64
    libtext-iconv-perl
    libthai-data
    libthai0:amd64
    libtheora0:amd64
    libtie-ixhash-perl
    libtiff5:amd64
    libtimedate-perl
    libtwolame0:amd64
    libtxc-dxtn-s2tc:amd64
    libunistring0:amd64
    liburi-perl
    libva-drm1:amd64
    libva-x11-1:amd64
    libva1:amd64
    libvdpau-va-gl1:amd64
    libvdpau1:amd64
    libvorbis0a:amd64
    libvorbisenc2:amd64
    libvpx4:amd64
    libwavpack1:amd64
    libwayland-client0:amd64
    libwayland-cursor0:amd64
    libwayland-egl1-mesa:amd64
    libwayland-server0:amd64
    libwebp6:amd64
    libwebpmux2:amd64
    libwebrtc-audio-processing1:amd64
    libwrap0:amd64
    libwww-perl
    libwww-robotrules-perl
    libx11-protocol-perl
    libx11-xcb1:amd64
    libx264-148:amd64
    libx265-95:amd64
    libxaw7:amd64
    libxcb-dri2-0:amd64
    libxcb-dri3-0:amd64
    libxcb-glx0:amd64
    libxcb-present0:amd64
    libxcb-render0:amd64
    libxcb-shape0:amd64
    libxcb-shm0:amd64
    libxcb-sync1:amd64
    libxcb-xfixes0:amd64
    libxcomposite1:amd64
    libxcursor1:amd64
    libxdamage1:amd64
    libxfixes3:amd64
    libxft2:amd64
    libxi6:amd64
    libxinerama1:amd64
    libxkbcommon0:amd64
    libxml-parser-perl
    libxml-twig-perl
    libxml-xpathengine-perl
    libxml2:amd64
    libxmu6:amd64
    libxpm4:amd64
    libxrandr2:amd64
    libxrender1:amd64
    libxshmfence1:amd64
    libxt6:amd64
    libxtst6:amd64
    libxv1:amd64
    libxvidcore4:amd64
    libxxf86dga1:amd64
    libxxf86vm1:amd64
    libzvbi-common
    libzvbi0:amd64
    lsb-release
    mesa-va-drivers:amd64
    mesa-vdpau-drivers:amd64
    mime-support
    openssl
    perl
    perl-modules-5.24
    perl-openssl-defaults:amd64
    policykit-1
    publicsuffix
    pulseaudio
    pulseaudio-utils
    python-apt-common
    python3
    python3-apt
    python3-minimal
    python3.5
    python3.5-minimal
    readline-common
    rename
    rtkit
    sgml-base
    shared-mime-info
    sudo
    tcpd
    ucf
    unattended-upgrades
    unzip
    va-driver-all:amd64
    vdpau-driver-all:amd64
    x11-common
    x11-utils
    x11-xserver-utils
    xdg-user-dirs
    xdg-utils
    xkb-data
    xml-core
    xz-utils

    The binary files are meant to be found at /opt/google/cros-containers/. For example,

    root@chrome-os-linux:~# cat /usr/bin/gnome-www-browser 
    #!/bin/bash
    /opt/google/cros-containers/bin/garcon --client --url "$@"
    root@chrome-os-linux:~#

    Obviously, these files are not found in the container that I just launched. These files are provided by Chrome OS on the Chromebook.

    I did not find binaries in the container to launch a Linux terminal application. I assume they would be found at /opt/google/cros-containers/bin/ as well.

    The Chrome OS deb package repository

    Here is the repository,

    root@chrome-os-linux:~# cat /etc/apt/sources.list.d/cros.list 
    deb https://storage.googleapis.com/cros-packages stretch main
    root@chrome-os-linux:~#

    And here are the details of the packages,

    $ curl https://storage.googleapis.com/cros-packages/dists/stretch/main/binary-amd64/Packages 
    Package: cros-adapta
    Version: 0.1
    Architecture: all
    Recommends: libgtk2.0-0, libgtk-3-0
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-adapta/cros-adapta_0.1_all.deb
    Size: 792
    SHA256: 885783a862f75fb95e0d389c400b9463c9580a84e9ec54c1ed2c8dbafa1ccbc5
    SHA1: 23cbf5f11724d971592da9db9a17b2ae1c28dfad
    MD5sum: 27fdba7a27c84caa4014a69546a83a6b
    Description: Chromium OS GTK Theme This package provides symlinks
     which link the bind-mounted theme into the correct location in the
     container.
    Homepage: https://chromium.googlesource.com/chromiumos/third_party/cros-adapta/
    Built-Using: Bazel
    
    Package: cros-apt-config
    Version: 0.12
    Architecture: all
    Depends: apt-transport-https
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-apt-config/cros-apt-config_0.12_all.deb
    Size: 7358
    SHA256: d6d21bdf348e6510a9c933f8aacde7ac4054b6e2f56d5e13e9772800fab13e9e
    SHA1: 51b23541fc8029725966bf45f0a98075cbb01dfa
    MD5sum: b3de74124b2947e0ad819416ce7eed78
    Description: APT config for Chromium OS integration. This package
     installs the keyring for the Chromium OS integration apt repo, the
     source list, and APT preferences.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-garcon
    Version: 0.10
    Architecture: all
    Depends: desktop-file-utils, xdg-utils
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-garcon/cros-garcon_0.10_all.deb
    Size: 1330
    SHA256: 32430b920770a8f6d5e0f271de340e87afb32bd9c2a4ecc4e470318e37033672
    SHA1: 46f24826d9a0eaab8ec1617d173c48f15fedd937
    MD5sum: 4ab2fa3b50ec42bddf6aeeb93c1ef202
    Description: Chromium OS Garcon Bridge. This package provides the
     systemd unit files for Garcon, the bridge to Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-guest-tools
    Version: 0.12
    Architecture: all
    Depends: cros-garcon, cros-sommelier
    Recommends: bash-completion, cros-apt-config, cros-sommelier-config,
     cros-sudo-config, cros-systemd-overrides, cros-ui-config,
     cros-unattended-upgrades, cros-wayland, curl, dbus-x11, pulseaudio,
     unzip, vim
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-guest-tools/cros-guest-tools_0.12_all.deb
    Size: 10882
    SHA256: 5f0a2521351b22fe3b537431dec59740c6cc96771372432fe3c7a88a5939884d
    SHA1: d37aab929c0c7011dd6b730bdc2052d7e232d577
    MD5sum: 5d9fafa14a4f88108f716438c45cf390
    Description: Metapackage for Chromium OS integration. This package has
     dependencies on all other packages necessary for Chromium OS
     integration.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sommelier
    Version: 0.11
    Architecture: all
    Depends: libpam-systemd
    Recommends: x11-utils, x11-xserver-utils, xkb-data
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sommelier/cros-sommelier_0.11_all.deb
    Size: 1552
    SHA256: 522fe94157708d1a62c42a404bcffe537205fd7ea7b0d4a1ed98de562916c146
    SHA1: ec51d2d8641d9234ccffc0d61a03f8f467205c73
    MD5sum: 8ed001a623ae74302d7046e4187a71c7
    Description: sommelier base package. This package installs unitfiles
     and support scripts for sommelier.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sommelier-config
    Version: 0.11
    Architecture: all
    Depends: libpam-systemd, cros-sommelier
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sommelier-config/cros-sommelier-config_0.11_all.deb
    Size: 1246
    SHA256: edbba3817fd3cdb41ea2f008ea4279f2e276580d5b1498c942965c3b00b4bff1
    SHA1: 762ca85f3f9cea87566f42912fd6077c0071e740
    MD5sum: 767a8a8c9b336ed682b95d9dd49fbde5
    Description: sommelier config for Chromium OS integration. This
     package installs default configuration for sommelier. that is ideal
     for integration with Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sudo-config
    Version: 0.10
    Architecture: all
    Depends: sudo
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sudo-config/cros-sudo-config_0.10_all.deb
    Size: 810
    SHA256: d9c1e2b677dadd1dd20da8499538d9ee2e4c2bc44b16de8aaed0f1e747f371a3
    SHA1: 07b961e847112da07c6a24b9f154be6fed13cca1
    MD5sum: 37f54f1e727330ab092532a5fc5300fe
    Description: sudo config for Chromium OS integration. This package
     installs default configuration for sudo to allow passwordless sudo
     access for the sudo group.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-systemd-overrides
    Version: 0.10
    Architecture: all
    Depends: systemd
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-systemd-overrides/cros-systemd-overrides_0.10_all.deb
    Size: 10776
    SHA256: 7b960a84d94be0fbe5b4969c7f8e887ccf3c2adf2b2dc10b5cb4856d30eeaab5
    SHA1: 06dc91e9739fd3d70fa54051a1166c2dfcc591e2
    MD5sum: 16033ff279b2f282c265d5acea3baac6
    Description: systemd overrides for running under Chromium OS. This
     package overrides the default behavior of some core systemd units.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-ui-config
    Version: 0.11
    Architecture: all
    Depends: cros-adapta, dconf-cli, fonts-croscore, fonts-roboto
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-ui-config/cros-ui-config_0.11_all.deb
    Size: 1280
    SHA256: bc1c5513ab67c003a6c069d386a629935cd345b464d13b1dd7847822f98825f3
    SHA1: 4193bd92f9f05085d480de09f2c15fe93542f272
    MD5sum: 7e95b56058030484b6393d05767dea04
    Description: UI integration for Chromium OS This package installs
     default configuration for GTK+ that is ideal for integration with
     Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-unattended-upgrades
    Version: 0.10
    Architecture: all
    Depends: unattended-upgrades
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-unattended-upgrades/cros-unattended-upgrades_0.10_all.deb
    Size: 1008
    SHA256: 33057294098edb169e03099b415726a99fb1ffbdf04915a3acd69f72cf4c84e8
    SHA1: ec575f7222c5008487c76e95d073cc81107cad0b
    MD5sum: ae30c3a11da61346a710e4432383bbe0
    Description: Unattended upgrades config. This package installs an
     unattended upgrades config for Chromium OS guest containers.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-wayland
    Version: 0.10
    Architecture: all
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: x11
    Filename: pool/main/c/cros-wayland/cros-wayland_0.10_all.deb
    Size: 886
    SHA256: 06d26a150e69bda950b0df166328a2dae60ac0a0840f432b26dc127b842dd1ef
    SHA1: 48bd118c497b0a4090b126d7c3d8ec3aacced504
    MD5sum: a495d16e5212535571adfd7820b733c2
    Description: Wayland extras for virtwl in Chromium OS. This package
     provides config files and udev rules to improve the Wayland experience
     under CrOS.
    Homepage: https://chromium.org
    Built-Using: Bazel

    Conclusion

    It is quite neat that Chrome OS uses machine containers with LXD to maintain a Linux installation.

    Apart from the apparent benefits for Chromebook users, it makes sense to have a look at the implementation in order to figure out how to create a sort of lightweight VirtualBox clone (the ability to run a desktop environment of any Linux distribution) that uses containers and LXD.

    11 May, 2018 01:14PM

    Cumulus Linux

    Solving challenges with Linux networking, programmable pipelines and tunnels

    Exciting advances in modern data center networking

    Many moons ago, Cumulus Networks set out to further the cause of open networking. The premise was simple: make networking operate like servers. To do that, we needed to develop an operating system platform, create a vibrant marketplace of compatible and compliant hardware and get a minimum set of features implemented in a robust way.

    Today, these types of problems are largely behind us, and the problem set has moved in the right direction towards innovation and providing elegant solutions to the problems around scale, mobility and agility. Simply put, if “Linux is in the entire rack,” then it follows that the applications and services deployed via these racks should be able to move to any rack and be deployed for maximum overall efficiency.

    The formula for this ephemeral agility then is based on two constructs.

    1. If the application can deploy anywhere, the policies governing the application’s ability to interact with the world need to be enforceable anywhere and on any rack in the entire data center.
    2. It should be possible to place an application on any rack and all the connectivity it needs should be available without needing any physical changes in the data center

    So let’s set the stage for the Linux-fueled networking technologies that address these requirements:

    1. Programmable pipelines to implement policy
      EBPF, P4…
    2. Use of tunnel technology to build horizontal scale and multi tenancy
      VXLAN, VRF, EVPN, MPLS…

    Let’s scratch under the surface a bit and look at a common data center architecture and understand the options, such as programmable pipelines and tunnels, Linux has been unlocking.

    A typical modern data center

    The figure above shows what a typical modern data center looks like; I’ll be using it as a reference for this discussion. The two server clusters shown here are connected through a 2-layer CLOS network. As is typical, the server clusters are running multiple tenancy domains, but have asymmetric policy needs. The red and blue colors indicate the tenancy membership and the colored wires indicate the paths selected for a flow in that tenancy.

    Programmable pipelines to implement policy

    Policy at the edge:

    The trend in modern data center design is to cocoon the application runtime environment with all the components that it needs. This basic principle manifests itself as containers or the more complete virtual machines, where all the components needed are packaged together with the application. This self-contained packaging makes it impervious to the vagaries of the environment it runs in. The networking aspect of that solution is a set of policies that track the application and need to be applied at every node where the application is running.

    Policies can range from ones that block particular flows or IP addresses to preventing certain traffic from going out on particular ports. Load balancing, stateful firewalls and DDOS mitigation are other examples of such policies. Since these are typically closely associated with the application instance, in my "carefully cultivated" opinion Linux networking hooks provide an excellent place to insert and enforce said policies. I assert that this layer is thus imperative to be able to create the complete "application package" that is needed for the mobile, agile data center.

    EBPF:

    EBPF has taken this world by storm in the Linux kernel community. At its base, it is a collection of hook points in the kernel where a C or Python program can be attached. Said program can be inserted by a userspace program (running at the right privilege level of course) and can perform operations that, amongst other things, can modify/inspect a packet and its forwarding behavior. Even more powerful is the ability to have data structures (called maps) that can share data with the inserted program running as part of the kernel’s dataplane pipeline.

    Consider an example where some packets need to be converted from IPv4 to IPv6 before they are sent out. An EBPF program can be written to examine all the outgoing packets, look up candidate subnets from a userspace-supplied map and, if the current packet needs the treatment, NAT it and send it out. Using the EBPF framework, you (a rough compile-and-attach sketch follows the list):

    1. Write this program in C or Python.
    2. Compile it using standard compiler tools.
    3. Load the program dynamically into a running kernel.
    4. Configure and update the NAT rules from a userspace service.
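
    As a sketch of steps 2 and 3, assuming the classifier is written in C in a file called nat64.c (the file name, interface and ELF section name are illustrative), the compile-and-attach workflow with clang and tc could look like this:

    # Compile the C source to BPF bytecode
    clang -O2 -target bpf -c nat64.c -o nat64.o
    # Attach it to the egress path of eth0 via the clsact qdisc
    tc qdisc add dev eth0 clsact
    tc filter add dev eth0 egress bpf direct-action obj nat64.o sec classifier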

    There are several interesting articles that go into detail on this if you’re interested in learning more. The scope of EBPF now includes hooks that let you connect to process information, socket entry points, a whole bunch of kernel operations and TC (the layer in the kernel which implements egress/ingress QoS and filtering), where packet forwarding operations can be imposed. Clearly then, an EBPF program that identifies flows and takes action can be built using these tools, and can be set up such that it follows an application to its host.

    P4:

    P4 is a programming language originated by Barefoot Networks to create a software-defined ASIC for networking. The language allows a "program" written in P4 to specify a forwarding pipeline, the packet types this pipeline will operate on and the logic that makes forwarding decisions as the packet progresses through the pipeline. The utility of P4, as it pertains to this conversation, is that it can form the language used to generate EBPF programs or to push matching functionality into hardware to implement policies. More information on P4 can be found here and the EBPF-specific functions are here.

    Using P4 to generate the EBPF function basically allows an even higher-level perspective: the policy enforcement can be inserted into the hardware or into the kernel via an EBPF program, giving you access to a dial that lets you trade off cost versus performance.

    The ultimate goal is that a set of policies expressed as P4 or EBPF programs is attached to the application’s container or VM and, when injected into the host for an application, provides all the classification and actions needed. To be fair, the programmer experience for both EBPF and P4 is still raw, but work is progressing fervently. This will be the new frontier of networking innovation for years to come.

    Use of tunnel technology to build horizontal scale and multi tenancy

    Multi-tenancy on shared platforms:

    Processor economy curves have made it such that it is most economical to build a single physical platform and then carve it up by running various networks overlaid on top of it. Typically the server architecture uses VMs or containers to maximize utilization and resiliency, and the network has to support those constructs. Furthermore, since some services will scale better if provided on a bare metal server or through a physical appliance, the network needs to be able to handle service insertion into a tenancy domain as well.

    The classical solution to this problem used VLANs and created Layer 2 tenancy boundaries all over the network, or put another way, each tenant was assigned a VLAN. Enlightened networks use VRFs and create Layer 3 tenancy boundaries all the way to the participating hosts and build a more scalable Layer 3 version of the VLAN design, since one can rely on the routing plane to react to topology changes, host presence indication and other signals.

    Both of these solutions have two specific problems.

    1. Since the namespace for VLANs and VRFs spans the whole infrastructure, including remote sites, they need to be created and maintained a priori. This means that either you need all VLANs/VRFs to be present everywhere or you have a very complex provisioning system that decides on a per server and network node basis what tenancy participation will be allowed. In practice, most people tend to do global provisioning at a pod level and selective provisioning between pods. By adding the new bridge driver model (to be able to handle L2 scale) and adding VRFs (in addition to the namespace solution that existed for a while) to Linux, these solutions can be implemented in a way that makes the entire data center look like one homogeneous operating system (see the VRF sketch after this list).
    2. The fabric connectivity has the same complexity: either all servers need to be able to reach all networks, or a very complex and dynamic algorithm (aka a controller) is needed. This controller must know where a tenant is going to show up and plumb the path through the network to ensure that the specific server that is hosting the tenant VM/Container has access to all the services it needs.
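
    To make the Layer 3 tenancy boundary from point 1 concrete, here is a minimal, illustrative iproute2 sketch of a VRF on a Linux host (the device name, table number and interface are hypothetical):

    # Create a VRF device bound to routing table 10 and bring it up
    ip link add blue type vrf table 10
    ip link set blue up
    # Enslave the tenant-facing interface to the VRF
    ip link set eth1 master blue
    # Routes learned on eth1 now live in table 10, isolated from the default table
    ip route show vrf blue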

    The simple solution to the "staticness" problem of the VLAN/VRF strategies is to use tunnelling constructs available in the Linux kernel and bind an application to a tunnel. With this approach, each application can decide which tenancy group it should belong to and can only reach other applications within its own tenancy group. Since tunnel encap/decap happens at the edge of the network, the only thing that needs managing is the allowed membership of a given server in a given tenancy group (aka tunnel). The network provides generic inter-tunnel connectivity.

    The Linux kernel provides a cornucopia of options here.

    • For Layer 2 adjacency : VXLAN saw its first formal implementation in the Linux kernel and has since been made incredibly robust and feature-full. With recent additions and in conjunction with FRR, it can be used to implement simple Layer 2 networks that stretch across Layer 3 fabrics and also sophisticated Distributed Router solutions where the end hosts do routing (and thus are more efficient for Layer 2-Layer 3 translations) using EVPN in the control plane.
    • For Layer 3 adjacency: This is now a complete solution as well, when using the Linux kernel in conjunction with an MPLS control plane using either BGP (Segment Routing) or LDP. When LWT (lightweight tunnels) were added to the Linux kernel, it became possible to create a translation scheme that works at high scale and converts from an IP forwarding target to practically any kind of tunnel encap. This facility can be exploited by a host running the Linux kernel (a significant majority of hosts out there) and appropriate control plane software like FRR.

    The beauty of using the Linux constructs for tunneling from the host/app is that the user gets to choose whether the tunnels originate in the layer called “Virtual” in the picture or in the physical TOR based on the tradeoffs in scale, speed and visibility. Additionally, if a physical appliance needs to be inserted into the network, the physical networking layers provide the exact same workflow and automation interface, thus making it seamless.
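
    To ground the Layer 2 option above, here is a minimal, illustrative iproute2 sketch of a VXLAN tunnel endpoint on a Linux host (the VNI, addresses and interface names are hypothetical; with EVPN in the control plane, remote VTEPs would be learned via BGP rather than configured statically):

    # Create a VXLAN device with VNI 42 on the standard UDP port, sourced from the host's underlay address
    ip link add vxlan42 type vxlan id 42 dstport 4789 local 10.0.0.1 nolearning
    # Put it in a bridge together with the workload-facing interface (here a hypothetical veth-tenant)
    ip link add br-tenant type bridge
    ip link set vxlan42 master br-tenant
    ip link set veth-tenant master br-tenant
    ip link set br-tenant up
    ip link set vxlan42 up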

    In all cases:

    If all aspects of the diagram above are running a version of Linux, you get maximum economy of scale in terms of tools, best practices, automation frameworks and debugging outages. This factor becomes increasingly useful as you deploy larger and larger networks, as your workloads (VMs or containers) keep moving around, and as your needs for load balancing and resiliency evolve.

    If you’d like to take a deeper dive into the capabilities of Linux and see why it’s the language of the data center, head over to our Linux networking resource center. Peruse white papers, videos, blog posts and more — we’ve got just what you need.

    The post Solving challenges with Linux networking, programmable pipelines and tunnels appeared first on Cumulus Networks Blog.

    11 May, 2018 01:00PM by Shrijeet Mukherjee


    Univention Corporate Server

    MediaWiki – a Culture of Sharing Your Knowledge

    Enterprise wikis are about technology and features – but also about the main principle of sharing.

    Wikis were a small cultural revolution. The idea behind them: different people come together online to collaboratively write texts, review them, discuss changes, improve, supplement, link and categorize them. Through this work and the combined knowledge of many people, a powerful central knowledge base is created in which you can almost always find what you need to know.

    Wikipedia thought the model of openly exchanging knowledge through to its conclusion and made it popular. For 17 years, Wikipedia has been the place to go for knowledge on the Web, while the use of wikis has been spreading in companies, too. The open source software TWiki (1998) and the proprietary Confluence (2004) paved the way for the triumphant march of wiki technology in companies. Of course, the Wikipedia software MediaWiki (2002) also plays a part in the story of wikis in companies. But more on that in a moment.

    Wikis as enterprise software

    It became apparent very early on that a wiki for a company had to be built differently from the wiki for an online encyclopedia. It must meet different technical and organizational requirements.

    This starts with the connection to the central authentication system (LDAP, SAML), differentiated rights management and general interoperability with other applications. Above all, however, internal quality assurance and content maintenance must be supported, for example with workflow tools, release mechanisms, read confirmations or resubmissions. Furthermore, it must be possible to hierarchize and bundle content, for example by combining individual articles into “books”. And last but not least, excellent search and very good performance are required. The processing of structured metadata and secure exchange via an API complete the picture.
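
    To illustrate the “exchange via an API” requirement: MediaWiki-based wikis expose a standard action API. The sketch below is only an illustration, with a hypothetical endpoint and page title and assuming anonymous read access (a real enterprise wiki would add a login or OAuth step); it fetches the current wikitext of one article.

    import requests

    API = "https://wiki.example.com/w/api.php"  # placeholder endpoint

    # Ask MediaWiki's action API for the latest revision content of one page.
    resp = requests.get(API, params={
        "action": "query",
        "prop": "revisions",
        "titles": "Quality_Management_Handbook",  # placeholder page title
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
    })
    resp.raise_for_status()

    # The result is keyed by page ID; the wikitext sits in the "main" slot.
    for page in resp.json()["query"]["pages"].values():
        print(page["revisions"][0]["slots"]["main"]["*"][:200])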

    Successful enterprise wikis must be available as in-house or cloud solutions, with Docker or as appliances. Providers that only offer one variant will quickly lose ground. And just like with any other enterprise software, the “product” enterprise wiki also includes, of course, the necessary services and support.

    A very small group of suitable systems has emerged on the market over the last ten years. This includes Confluence as a proprietary system. In the open source segment, XWiki, TWiki and various MediaWiki-based systems are worth mentioning. BlueSpice MediaWiki is the leading MediaWiki for companies.

    Creating defined conditions with enterprise wikis

    Around 2010, many people still saw wikis simply as software for a more or less well-maintained knowledge base. Only gradually did people in management become aware of the many possible applications.

    Today, wikis provide organization manuals, QA systems with process descriptions and responsibilities, protocols, emergency and risk management instructions, product descriptions, service or support manuals, documentation of software, data protection requirements or descriptions of technical systems.

    Today, enterprise wikis document the setup and operation of technical systems.

    Some technical and formal standards have developed over time to deal with all these subjects – but development is far from complete.

    One could also say that wikis are there to create defined states in dynamic companies. They answer the question: ‘What do we do and how?’ Wikis also offer a lot of space to collect experiences and develop the company further in a goal-oriented manner.

    The objective: Away from the knowledge silo

    In this respect, there is no company or organization that can do without a wiki. When you want to select a system, however, you should note that though the various wiki systems have become very similar in their technical design, they still follow different approaches under the hood.

    Solutions such as TWiki or Confluence, which were tailored to companies early in their development, responded to central company requirements from ten years ago. At that time, CEOs wanted knowledge to be protectable, so these tools let them set up “Spaces” and “Webs”: areas to which only authorized employees have access.

    This basic idea runs through the entire architecture of the software, to the point that file attachments are not centrally available. Because of this concept, these ‘rooms’ became new silos of knowledge. The economic and organizational advantages of collecting knowledge centrally and networking the company were thereby thwarted. As a result, employees once again wondered how they could get the information they needed to get their jobs done.

    To solve this problem, the Wikipedia software MediaWiki offered a more interesting approach. Its mission was to bring knowledge together centrally while protecting only the content that actually needs to be protected. Many jumped on this bandwagon right away: MediaWiki has been the most widely installed wiki software in the enterprise from the beginning, and it still is. However, many business-critical enhancements did not follow until a little later.

    But, of course, you can also separate knowledge in MediaWiki according to authorization groups (via namespaces or by distributing the content within a wiki farm). And with the BlueSpice MediaWiki extension stack, all conceivable quality and security requirements can now be covered in an enterprise-compatible and user-friendly way. With extensions from the Semantic MediaWiki world, highly individualized knowledge platforms are also possible.

    Screenshot: BlueSpice MediaWiki

    Wikipedia’s way of sharing knowledge has been codified

    Despite the many further developments and customizations, MediaWiki has retained its basic structure. Free software hacker and former Wikimedia release manager Mark Hershberger once said that Wikipedia’s culture of knowledge sharing is encoded in MediaWiki. The feeling MediaWiki users have of quickly finding the central article on a topic is due, not least, to its basic, one might almost say centralistic, architecture, which gently but firmly pushes users to agree on one article per term and to link to relevant subarticles. It also encourages publishing knowledge rather than locking it away.

    At first glance, this may seem like nothing more than a philosophical stance. But efficient knowledge exchange and standardized processes are increasingly essential for companies to remain competitive. Reducing errors, search times and redundancies is a purely economic question.

    The overall task, therefore, is to bring together the experience of the Wikipedia universe with that of the tightly organized corporate world. This means repeatedly integrating new technologies and continually adapting MediaWiki.

    Test BlueSpice MediaWiki in the App Center

    The post MediaWiki – a Culture of Sharing Your Knowledge appeared first on Univention.

    11 May, 2018 12:26PM by Maren Abatielos


    VyOS

    Setting up GRE/IPsec behind NAT

    In the previous posts of this series we've discussed setting up "plain" IPsec tunnels from behind NAT.

    The transparency of plain IPsec, however, is more often a curse than a blessing. Truly transparent IPsec is only possible between publicly routed networks, and tunnel mode creates a strange mix of the two approaches: you do not get a network interface associated with the tunnel, but the setup is not free of routing issues either, and it's often hard to test from the router itself whether the tunnel actually works.

    GRE/IPsec (or IPIP/IPsec, or anything else) offers a convenient solution: for all intents and purposes it's a normal network interface, and it makes it look like the networks are connected with a wire. You can easily ping the other side, use the interface in firewall and QoS rulesets, and set up dynamic routing protocols in a straightforward way. However, NAT creates a unique challenge for this setup.

    The canonical and the simplest GRE/IPsec setup looks like this:

    interfaces {
      tunnel tun0 {
        address 10.0.0.2/29
        local-ip 192.0.2.10
        remote-ip 203.0.113.20
        encapsulation gre
      }
    }
    vpn {
      ipsec {
        site-to-site {
          peer 203.0.113.20 {
            tunnel 1 {
              protocol gre
            }
            local-address 192.0.2.10
          }
        }
      }
    }

    It creates a policy that encrypts any GRE packets sent to 203.0.113.20. Of course it's not going to work with NAT because the remote side is not directly routable.

    Let's see how we can get around it. Suppose you are setting up a tunnel between routers called East and West. The workaround is pretty simple, even if not exactly intuitive, and boils down to this:

    1. Set up an additional address on a loopback or dummy interface on each router, e.g. 10.10.0.1/32 on East and 10.10.0.2/32 on West.
    2. Set up GRE tunnels that use 10.10.0.1 and .2 as local-ip and remote-ip respectively.
    3. Set up an IPsec tunnel that uses 10.10.0.1 and .2 as local-prefix and remote-prefix respectively.

    This way, when traffic is sent through the GRE tunnel on East, the GRE packets will use 10.10.0.1 as their source address, which will match the IPsec policy. Since 10.10.0.2/32 is specified as the remote-prefix of the tunnel, the IPsec process will set up a kernel route to it, and the GRE packets will reach the other side.

    Let's look at the config:

    interfaces {
      dummy dum0 {
        address 10.10.0.1/32
      }
      tunnel tun0 {
        address 10.0.0.1/29
        local-ip 10.10.0.1
        remote-ip 10.10.0.2
        encapsulation gre
      }
    }
    vpn {
      ipsec {
        site-to-site {
          peer @west {
            connection-type respond
            tunnel 1 {
              local {
                prefix 10.10.0.1/32
              }
              remote {
                prefix 10.10.0.2/32
              }
            }
          }
        }
      }
    }

    This approach also has a property that may make it useful even in publicly routed networks, if you are going to use the GRE tunnel for sensitive but unencrypted traffic (I've seen that in legacy applications): unlike the canonical setup, the GRE tunnel stops working when the IPsec SA goes down, because the remote end becomes unreachable. The canonical setup will continue to work even without IPsec and may expose the GRE traffic to eavesdropping and MitM attacks.

    This concludes the series of posts about IPsec and NAT. Next Friday I'll find something else to write about. ;)

    11 May, 2018 03:42AM by Daniil Baturin

    May 10, 2018


    Purism PureOS

    Librem 5 design report #5

    Hello everyone! A lot has happened behind the scenes since my last design report. Until now, I have been reporting mainly on our design work on the software front, but our effort is obviously not limited to that. The experience that people have with the physical device is also very important. So in this post I will summarize some recent design decisions we have made, both on the software side and on the hardware product “experience” design.

    Thinking about the physical shell

    Our goal with the Librem 5 is to improve the visual identity of the Librem line while staying close to the minimalist and humble look that characterizes the existing Librem products.

    The main challenge of case design is the need to balance aesthetics, ergonomics, convenience, and technical limitations.

    As you know, the Librem 5 is a special phone that will not integrate the same CPU and chipsets as those used in the vast majority of smartphones on the market. Power consumption is a very important factor to take into account, but so are battery capacity and the printed circuit board arrangement, and we don’t want to sacrifice battery life for a few millimeters of thickness. Therefore:

    • We are now aiming for a 5.5″ to 5.7″ screen with an 18:9 ratio, which would let us incorporate a larger battery without affecting the shape of the phone.
    • We are also opting for a shape with chamfered edges (as pictured below) instead of the usual rounded ones. Not only do we think it looks elegant, the general shape also provides a better grip and gives us a bit more room inside for components.

    Simplifying the UI shell

    As the implementation of the Librem 5 goes on, we are quite aware that time is limited given our January 2019 target, so we are focusing on robustness and efficiency for the first version of the mobile UI shell (“phosh”), which we wish to push upstream to become the GNOME mobile shell. As you may recall from our technical report in early March, we discussed this with the GNOME Shell maintainers, who recommended this clean-slate approach.

    We revisited the shell features and decided to split the design and implementation into several phases.

    Phase 1 defines a shell at its simplest, in terms of features and usability. This is the shell that should ship with the Librem 5 in January 2019.

    This shell includes:

    • A lock screen.
    • A PIN-based unlock screen for protecting the session.
    • A home screen that displays a paginated list of installed applications.
    • A top bar that displays useful information such as the time, battery level, audio level, network status…
    • A bottom bar that simulates a home button (only visible when opening an application).
    • A virtual keyboard.
    • Incoming call notifications.

    The “call” app is indeed a special-case application on a phone, and that’s why we’re prioritizing it for the notifications feature: it has to work from day one, and it has specific requirements, such as the ability to interact directly from the lock screen (to answer an incoming call, or to place an emergency services call).

    Multitasking UI workflows, search and more flexible app notification features/APIs should be implemented during phase 2, available a bit later.

    While “phase 1” might not be the all-you-can-eat feature buffet some may be accustomed to, we think this minimalist shell will be extremely simple to learn and use, and will favor quick and painless adoption. And it will be a great starting point.

    Designing the Contacts application

    The Contacts application will be at the center of the communication features. It is the application that will handle the contact management that other applications, such as Calls or Messages, will rely on.

    To that end, we are adapting the existing Contacts application by designing its mobile layout and adding the extra fields that the different communication applications will require.

    Librem 5 & Fractal team hackfest in Strasbourg

    This week, a few members of the Librem 5 team (including myself) are attending the 2018 Fractal design hackfest in Strasbourg, with the goal of helping the Fractal team make a beautiful and secure Matrix-based IM application that can be used on both desktop and mobile platforms. I hope to report on the communication features of the Librem 5 in a future post, where I will also talk about what happened at the Fractal hackfest.

    10 May, 2018 11:27PM by François Téchené