November 20, 2017

hackergotchi for Wazo


Sprint Review 17.16

Hello Wazo community! Here comes the release of Wazo 17.16!

New features in this sprint

REST API: Groups can now ring arbitrary extensions. This was already possible with a specific user, a custom line and a Local channel. This API does the same thing, only a lot simpler. There is no graphical interface to use this API yet, though.

REST API: Until now, schedules in REST API could only be associated to incoming calls. Schedules can now be associated with users, groups and outgoing calls via REST API.

Ongoing features

User and Tenant Management: We are currently reworking the user and entity (a.k.a. tenant) configuration. This should make installations with multiple entities feel more natural in future versions.

REST API: We are working towards replacing the old orange admin web interface with the more modern and easier to maintain blue web interface (wazo-admin-ui on /admin). Since wazo-admin-ui only uses REST API under the hood, we need the REST API to cover all cases of administration of Wazo. Hence we are completing the set of REST APIs offered by Wazo. You can find the complete API documentation on

The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!


20 November, 2017 05:00AM by The Wazo Authors

November 19, 2017

hackergotchi for VyOS


1.1.8 release is available for download

1.1.8, the major minor release, is available for download from (mirrors are syncing up).

It breaks the semantic versioning convention: while the version number implies a bugfix-only release, it actually includes a number of new features. This is because the 1.2.0 number is already assigned to the Jessie-based release that is still in beta, and not including features that have been in the codebase for a while (a few of them already in production for some users) would feel quite wrong, especially considering the long delay between releases. Overall, it's pretty close in scope to the original 1.2.0 release plan from before Debian Squeeze was EOLed and we had to switch our effort to getting rid of the legacy that was keeping us from moving to a newer base distro.

You can find the full changelog here.

The release is available for both 64-bit and 32-bit machines. The i586-virt flavour, however, was discontinued since a) according to web server logs and user comments, there is no demand for it, unlike a release for 32-bit physical machines b) hypervisors capable of running on 32-bit hardware went extinct years ago. The current 32-bit image is built with paravirtual drivers for KVM/Xen, VMware, and Hyper-V, but without PAE, so you shouldn't have any problem running it on small x86 boards and testing it on virtual machines.

We've also made a 64-bit OVA that works with VMware and VirtualBox.


Multiple vulnerabilities in OpenSSL, dnsmasq, and hostapd were patched, including the recently found remote code execution in dnsmasq.


Some notable bugs that were fixed include:

  • Protocol negation in NAT not working correctly (it had exactly opposite effect and made the rule match the negated protocol instead)
  • Inability to reconfigure L2TPv3 interface tunnel and session ID after interface creation
  • GRUB not getting installed on RAID1 members
  • Lack of USB autosuspend causing excessive CPU load in KVM guests
  • VTI interfaces not coming back after tunnel reset
  • Cluster failing to start on boot if network links take too long to get up

New features

User/password authentication for OpenVPN client mode

A number of VPN providers (and some corporate VPNs) require that you use user/password authentication and do not support x.509-only authentication. Now this is supported by VyOS:

set interfaces openvpn vtun0 authentication username jrandomhacker
set interfaces openvpn vtun0 authentication password qwerty
set interfaces openvpn vtun0 tls ca-cert-file /config/auth/ca.crt
set interfaces openvpn vtun0 mode client
set interfaces openvpn vtun0 remote-host

Bridged OpenVPN servers no longer require subnet settings

Before this release, OpenVPN would always require subnet settings, so if one wanted to set up an L2 OpenVPN bridged to another interface, they'd have to specify a mock subnet. Not anymore: now, if the device-type is set to "tap" and bridge-group is configured, subnet settings are not required.

New OpenVPN options exposed in the CLI

A few OpenVPN options that formerly would have to be configured through openvpn-option are now available in the CLI:

set interfaces openvpn vtun0 use-lzo-compression
set interfaces openvpn vtun0 keepalive interval 10
set interfaces openvpn vtun0 keepalive failure-count 5

Point to point VXLAN tunnels are now supported

In earlier releases, it was only possible to create multicast, point to multipoint VXLAN interfaces. Now the option to create point to point interfaces is also available:
set interfaces vxlan vxlan0 address
set interfaces vxlan vxlan0 remote
set interfaces vxlan vxlan0 vni 10

AS-override option for BGP

The as-override option that is often used as an alternative to allow-as-in is now available in the CLI:

set protocols bgp 64512 neighbor as-override

as-path-exclude option for route-maps

The option for removing selected ASNs from AS paths is available now:
set policy route-map Foo rule 10 action permit
set policy route-map Foo rule 10 set as-path-exclude 64600

Buffer size option for NetFlow/sFlow

The default buffer size was often insufficient for high-traffic installations, which caused pmacct to crash. Now it is possible to specify the buffer size option:
set system flow-accounting buffer-size 512 # megabytes
There are a few more options for NetFlow: source address (can be either IPv4 or IPv6) and maximum number of concurrent flows (on high-traffic machines, setting it too low can cause NetFlow data loss):
set system flow-accounting netflow source-ip
set system flow-accounting netflow max-flows 2097152

VLAN QoS mapping options

It is now possible to specify VLAN QoS values:
set interfaces ethernet eth0 vif 42 egress-qos 1:6
set interfaces ethernet eth0 vif 42 ingress-qos 1:6

Ability to set custom sysctl options

There are lots of sysctl options in the Linux kernel and it would be impractical to expose them all in the CLI, since most of them only need to be modified under special circumstances. Now you can set a custom option if you need to:
set system sysctl custom $key value $value

Custom client ID for DHCPv6

It is now possible to specify a custom client ID for the DHCPv6 client:
set interfaces ethernet eth0 dhcpv6-options duid foobar

Ethernet offload options

Under "set interfaces ethernet ethX offload-options" you can find a number of options that control NIC offload.

Syslog level "all"

Now you can specify options for the *.* syslog pattern, for example:

set system syslog global facility all level notice

Unresolved or partially resolved issues

Latest ixgbe driver updates are not included in this release.

The issue with VyOS losing parts of its BGP config, when update-source is set to an address belonging to a dynamic interface (such as VTI) that takes long to come up and acquire its address, was resolved in its literal wording, but it's not guaranteed that the BGP session will come up on its own in this case. It's recommended to set the update-source to an address of an interface that is available right away on boot, for example a loopback or dummy interface.

The issue with changing the routing table number in PBR rules is not yet resolved. The recommended workaround is to delete the rule and re-create it with the new table number, e.g. by copying its commands from 'run show configuration commands | match "policy route "'.


I would like to say thanks to everyone who contributed and made this release possible, namely: Kim Hagen, Alex Harpin, Yuya Kusakabe, Yuriy Andamasov, Ray Soucy, Nikolay Krasnoyarski, Jason Hendry, Kevin Blackham, kouak, upa, Logan Attwood, Panagiotis Moustafellos, Thomas Courbon, and Ildar Ibragimov (hope I didn't forget anyone).

A note on the downloads server

The original server is still having IO performance problems and won't handle the traffic spike associated with a release well. We've set up a server on our new host specifically for release images and will later migrate the rest of the old server, including the package repositories and the rsync setup.

19 November, 2017 10:50PM by Daniil Baturin

hackergotchi for SparkyLinux


Sparky 4.7

An update of SparkyLinux 4.7 “Tyche” is out.
This is the Sparky edition based on the Debian stable line 9, code name “Stretch”.

No big changes, the new iso images provide updates of all installed packages, from Debian “Stretch” and Sparky repositories as of November 17, 2017.

– Linux kernel 4.9.0-4 (4.9.51)
– Xfce 4.12.3
– LXDE 0.99.2
– Openbox 3.6.1
– Firefox ESR 52.5.0
– Thunderbird 52.4.0
– Pidgin 2.12.0
– HexChat 2.12.4
– VLC 2.2.6
– DeaDBeeF 0.7.2
– LibreOffice 5.2.7
– Transmission 2.92
– Calamares 3.1.8

The new iso images can be used to perform a fresh Sparky installation.

If you already have Sparky installed on a hard drive, simply make full system upgrade:
sudo apt update
sudo apt full-upgrade

ISO images of Sparky 4.7 LXDE, Xfce, MinimalGUI (Openbox) & MinimalCLI (text mode) for i686 and x86_64/amd64 machines can be downloaded from the download/stable page.

Consider a small donation which would help to keep Sparky project alive.



19 November, 2017 04:31PM by pavroo

hackergotchi for ARMBIAN


Orange Pi R1

Ubuntu server – legacy kernel
  .torrent (recommended)
Command line interface – server usage scenarios.


Debian server – mainline kernel
Command line interface – server usage scenarios.


other download options and archive

Known issues

Legacy kernel images (all boards) (default branch)

  • HDMI output (if exists) supports only limited number of predefined resolutions
  • TV Out doesn’t work as expected (only PAL/NTSC resolution, overscanning, no h3disp support, notes for OPi Zero)
  • 1-Wire protocol, reading out DHT11/DHT22 sensors or S/PDIF requires setting the minimum CPU frequency to 480MHz or higher
  • Hardware accelerated video decoding supports only limited number of video formats
  • ‘Out of memory’ (OOM) issues are possible due to a kernel bug

Mainline kernel images (all boards) (next/dev branches)

  • No Mali drivers
  • No hardware accelerated video decoding
  • schedutil CPU governor may cause clicks and pops on audio output – change(d) to ondemand to work around this issue

Board: Orange Pi Zero

  • Onboard wireless module (XR819) has poor software support so wireless connection issues are expected
  • Onboard wireless module (XR819) is not supported on mainline releases
  • Board revision 1.4 reports falsely high CPU temperatures

Quick start | Documentation


Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).


Insert SD card into a slot and power the board. (First) boot (with DHCP) takes up to 35 seconds with a class 10 SD Card and cheapest board.


Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

Tested hardware

19 November, 2017 08:28AM by igorpecovnik

November 18, 2017

Cumulus Linux

5 ways to design your container network

There’s been a lot of talk about container networking in the industry lately (heck, we can’t even stop talking about it). And it’s for a good reason. Containers offer a fantastic way to develop and manage microservices and distributed applications easily and efficiently. In fact, that’s one of the reasons we launched Host Pack — to make container networking even simpler. Between Host Pack and NetQ, you can get fabric-wide connectivity and visibility from server to switch.

There are a variety of ways you can deploy a container network using Host Pack and Cumulus Linux, and we have documented some of them in several Validated Design Guides discussed below. Wondering which deployment method is right for your business? This blog post is for you.

Docker Swarm with Host Pack

Overview: The Docker Swarm with Host Pack solution uses the connectivity module within Host Pack: Free Range Routing (FRR) running in a container. The FRR container runs on the servers and uses BGP unnumbered for Layer 3 connectivity, enabling the hosts to participate in the routing fabric. We use Docker Swarm as the container orchestration tool for simplicity.

Choose this deployment if:

  • You’re looking for the easiest and simplest container deployment possible.
  • You don’t mind using overlays and NAT.
  • You are able to install software on the hosts and have at least two leaf (top of rack) switches connected to each host for redundancy.
  • You are very comfortable with Layer 3 and realize the deficiencies of MLAG, STP and large failure domains.

How it works:

When configured, Swarm enables VXLAN tunnels between the hosts for multi-host inter-container communication, as shown by the red dotted line. We set up the host's loopback address as the VTEP, and advertise the VTEP directly into the routing domain from the host via eBGP unnumbered. This enables Layer 3 redundancy between the containers, the network and each other. We access the outside using NAT, as denoted by the yellow dotted line. We access other containers via VXLAN, as denoted by the red dotted line.

More information on this solution can be found in the full validated design guide.

container network

Docker Swarm with MLAG or single attached hosts

Overview: The Docker Swarm with MLAG or single attached hosts solution terminates Layer 3 at the leaf (top of rack) switches and runs Layer 2 to the hosts. MLAG can be deployed from the hosts to the leaf switches, or the hosts can be single attached. We use Docker Swarm as the orchestration tool for simplicity.

Choose this deployment if:

  • You’re looking for the easiest and simplest container deployment possible.
  • You don’t mind using overlays and NAT.
  • You are NOT able to install additional containers (FRR) on the hosts
  • You have 2 Top of Rack (ToR) switches running MLAG for redundancy or have single attached hosts.

How it works:

Docker Swarm enables VXLAN tunnels between the hosts for multi-host inter-container communication. We set up the VXLAN VTEP as either the IP address of the host's single attached ethernet, or the IP address of the host's bond in the case of MLAG. The ToR(s) advertise the IP subnet of the host-facing bond or ethernet (which is also the VTEP) directly into the routing domain via eBGP.

More information on this solution can be found in the full validated design guide.

container network

Host Pack: Advertising the Docker Bridge

Overview: The FRR container, the connectivity module within Host Pack, advertises the Docker Bridge into the routing domain. This deployment option uses an FRR container on the servers with Layer 3 connectivity and eBGP unnumbered – enabling the hosts to participate in the routing fabric. It is best deployed with dual attached hosts to enable Layer 3 redundancy. We directly advertise the Docker Bridge subnet into the routing domain without any NAT or overlays. It does not dictate a specific orchestration tool.

Choose this deployment if:

  • You want a container networking system that operates independent of the orchestration tool.
  • You do not have tight constraints on IP address usage for containers or you are using private addresses.
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity.
  • You are able to install software (FRR routing) on the hosts to avoid difficulties with MLAG and STP and prefer smaller failure domains.
  • You have 2 or more Top of Rack (ToR) switches.

How it works:

In this solution, we use a customer-configured IP subnet (with an appropriate mask size, depending on the planned number of containers per host) for the Docker Bridge. We dictate a private or public IP subnet when setting up the Bridge. Each host's Docker Bridge must be configured with a different subnet.
We then use FRR within Host Pack to advertise that Bridge subnet into the routing domain for connectivity from either the outside or containers on other hosts.
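As a rough illustration of the per-host subnet planning this requires, the Python standard library can carve one bridge subnet per host out of a supernet. The supernet, prefix length and helper name below are illustrative assumptions, not part of Host Pack:

```python
import ipaddress

def bridge_subnets(supernet: str, hosts: int, new_prefix: int):
    """Carve one distinct Docker Bridge subnet per host out of a supernet.

    Each host's bridge must get a different subnet, as described above;
    the supernet and prefix length here are made up for illustration.
    """
    net = ipaddress.ip_network(supernet)
    subnets = list(net.subnets(new_prefix=new_prefix))
    if hosts > len(subnets):
        raise ValueError("supernet too small for the number of hosts")
    return [str(s) for s in subnets[:hosts]]

# e.g. three hosts, one /24 each (room for ~254 containers per host)
print(bridge_subnets("10.200.0.0/16", 3, 24))
```

Running this prints `['10.200.0.0/24', '10.200.1.0/24', '10.200.2.0/24']`; each host's bridge would then be configured with one of these subnets.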

More information on this solution can be found in the full validated design guide.

container network

Advertise Container Addresses into the Routing Domain with Host Pack Features

Overview: The Advertise Container Addresses into the Routing Domain with Host Pack solution deploys Host Pack's FRR container and the Cumulus Container Advertiser on the hosts. It uses eBGP for Layer 3 connectivity to the hosts. It is best deployed with dual attached hosts to enable Layer 3 redundancy. We directly advertise the containers' /32 IP addresses into the routing domain without any NAT or overlays. It does not dictate a specific orchestration tool; however, a centralized IPAM must be used to assign the containers' IP addresses.

Choose this deployment if:

  • You know which orchestration tool you like and you can integrate it with a centralized IPAM.
  • You have limited IP addresses for containers and need to conserve them (Anycast IPs, Public IPs, Constrained internal IP address space for container deployment).
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity between themselves.
  • You are able to install additional containers (FRR and the Cumulus Container Advertiser) on the hosts to avoid difficulties with MLAG and STP and you prefer smaller failure domains.
  • You have 2 or more Top of Rack (ToR) switches.
  • You can summarize IP addresses at the edge.

How it works:

We deploy Host Pack's FRR container with eBGP unnumbered for redundancy. We also deploy Host Pack's container advertiser to advertise the containers' /32 IP addresses into the routing domain. We use the same configured IP subnet on all hosts in the data center and use proxy ARP to allow multi-host container-to-container reachability. This solution conserves IP address space and allows containers to be destroyed and redeployed on a different host with the same IP address, without requiring a different subnet per host.

This solution requires a centralized IPAM that can work with the orchestration tool to ensure container IP addresses are not duplicated in the network.

More information on this solution can be found in the full validated design guide.

container network

Advertise Containers /32 Address with Redistribute Neighbor

Overview: In this solution, redistribute neighbor is used on the leaf switches to directly advertise the containers' /32 IP addresses into the routing domain via eBGP unnumbered. No overlays or NAT are used, and no extra containers are needed on the hosts. It does not dictate a specific orchestration tool; however, a centralized IPAM must be used to assign the containers' IP addresses.

Choose this deployment if:

  • You know which container orchestration tool you like and you can integrate it with a centralized IPAM.
  • You have limited IP addresses for containers and need to conserve them (Anycast IPs, Public IPs, Constrained internal IP address space for container deployment).
  • You prefer to avoid using overlays and NAT for higher performance and ease of troubleshooting.
  • Your containerized applications support Layer 3 connectivity (containers can be on different subnets).
  • You develop your own application containers and/or are able to get the container to GARP or ping upon spin up.
  • You are NOT able to install additional containers on the hosts.
  • You have single attached hosts.
  • You can summarize IP addresses at the edge.

How it works:

In this solution, we deploy the macvlan driver on each host with the same IP subnet on all hosts. On the leaf switch, redistribute neighbor is used to advertise the container /32 addresses into the routing domain. Redistribute neighbor uses the leaf switch's ARP table to learn the container IP addresses and advertises them into the routing domain. This means that containers need to GARP or ping upon bring-up to announce themselves. Proxy ARP is used to communicate between containers on different hosts.

This solution allows containers to be destroyed and redeployed on a different host using the same IP address, and conserves public IP address space. The macvlan driver also has better performance than Docker Bridge, but only supports single attached hosts.

More information on this solution can be found in the full validated design guide.

container network


No matter how you’re deploying your container network, you can get robust connectivity (and visibility for that matter) with one of the above solutions and Host Pack. If you’re still not sure which solution is right for you, you can test out the technology in your own personal, pre-built data center with Cumulus in the Cloud. You can plug and play with common configurations and build out a virtual container network easily and for free. Check it out.

The post 5 ways to design your container network appeared first on Cumulus Networks Blog.

18 November, 2017 08:00PM by Diane Patton

November 17, 2017

Maemo developers

Semrush, MJ12 and DotBot just slow down your server

I recently migrated a server to a new VHost that was supposed to improve the performance; however, after the upgrade the performance was actually worse.

Looking at the system load I discovered that the load average was at about 3.5 – with only 2 cores available this corresponds to server overload by almost 2x.

Looking further at the logs revealed that this unfortunately was not due to users taking an interest in the site, but due to various bots hammering on the server. Actual users would probably be driven away by the awful page load times at this point.

Asking the bots to leave

To improve page loading times, I configured my robots.txt as follows:

User-agent: *
Disallow: /

This effectively tells all bots to skip my site. You should not normally do this, as your site will no longer be discoverable via e.g. Google.

But here I just wanted to allow my existing users to use the site. Unfortunately, the situation only slightly improved; the system load was still over 2.

From the logs I could tell that all bots were actually gone, except for

  • SemrushBot by
  • MJ12Bot by
  • DotBot by

But those were enough to keep the site (PHP+MySQL) overloaded.

The above bots crawl the web for their respective SEO analytics company which sell this information to webmasters. This means that unless you are already a customer of these companies, you do not benefit from having your site crawled.

In fact, if you are interested in SEO analytics for your website, you should probably look elsewhere. In the next paragraph we will block these bots and I am by far not the first one recommending this.

Making the bots leave

As the bots do not respect the robots.txt, you will have to forcefully block them. Instead of the actual webpages, we will serve them a 410/403 response, which prevents them from touching any PHP/MySQL resources.

On nginx, add this to your server section:

if ($http_user_agent ~* (SemrushBot|MJ12Bot|DotBot)) {
     return 410;
}
For Apache2.4+ do:

BrowserMatchNoCase SemrushBot bad_bot
BrowserMatchNoCase MJ12Bot bad_bot
BrowserMatchNoCase DotBot bad_bot
Order Deny,Allow
Deny from env=bad_bot
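If you want to sanity-check the user-agent pattern offline before deploying it, here is a minimal Python sketch of the same case-insensitive match that both rules above perform (the helper name and statuses are mine, not part of either web server):

```python
import re

# Same bot pattern as the nginx/Apache rules above; matching is
# case-insensitive, mirroring nginx "~*" and BrowserMatchNoCase.
BAD_BOTS = re.compile(r"SemrushBot|MJ12Bot|DotBot", re.IGNORECASE)

def status_for(user_agent: str) -> int:
    """Return the status we would serve: 410 for a bad bot, 200 otherwise."""
    return 410 if BAD_BOTS.search(user_agent) else 200

print(status_for("Mozilla/5.0 (compatible; SemrushBot/2~bl)"))   # 410
print(status_for("Mozilla/5.0 (X11; Linux x86_64) Firefox/57.0"))  # 200
```

This makes it easy to confirm the pattern hits the three bots without accidentally matching regular browsers.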

For additional fun, you could also give them a 307 (redirect) to their own websites here.


17 November, 2017 08:02PM by Pavel Rojtberg

Purism PureOS

Reverse engineering the Intel FSP… a primer guide!

Recently, I’ve finished reverse engineering the Intel FSP-S “entry” code, that is from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls. This is only some initial foray into reverse engineering the FSP as a whole, but reverse engineering is something that takes a lot of time and effort. Today’s blog post is here to illustrate that, and to lay the foundations for understanding what I’ve done with the FSP code (in a future blog post).

Over the years, many people asked me to teach them what I do, or to explain to them how to reverse engineer assembly code in general. Sometimes I hear the infamous “How hard can it be?” catchphrase. Last week someone I was discussing with thought that the assembly language is just like a regular programming language, but in binary form—it’s easy to make that mistake if you’ve never seen what assembly is or looks like. Historically, I’ve always said that reverse engineering and ASM is “too complicated to explain” or that “If you need help to get started, then you won’t be able to finish it on your own” and various other vague responses—I often wanted to explain to others why I said things like that but I never found a way to do it. You see, when something is complex, it’s easy to say that it’s complex, but it’s much harder to explain to people why it’s complex.

I was lucky to recently stumble onto a little function while reverse engineering the Intel FSP, a function that was both simple and complex, where figuring out what it does was an interesting challenge that I can easily walk you through. This function wasn’t a difficult thing to understand, and by far, it’s not one of the hard or complex things to reverse engineer, but this one is “small and complex enough” that it’s a perfect example to explain, without writing an entire book or getting into the more complex aspects of reverse engineering. So today’s post serves as a “primer” guide to reverse engineering for all of those interested in the subject. It is a required read in order to understand the next blog posts I would be writing about the Intel FSP. Ready? Strap on your geek helmet and let’s get started!

DISCLAIMER: I might make false statements in the blog post below, some by mistake, some intentionally for the purpose of vulgarizing the explanations. For example, when I say below that there are 9 registers in X86, I know there are more (SSE, FPU, or even just the DS or EFLAGS registers, or purposefully not mentioning EAX instead of RAX, etc.), but I just don’t want to complicate matters by going too wide in my explanations.

A prelude

First things first, you need to understand some basic concepts, such as “what is ASM exactly”. I will explain some basic concepts but not all the basic concepts you might need. I will assume that you know at least what a programming language is and know how to write a simple “hello world” in at least one language, otherwise you’ll be completely lost.

So, ASM is the Assembly language, but it’s not the actual binary code that executes on the machine. It is however, very similar to it. To be more exact, the assembly language is a textual representation of the binary instructions given to the microprocessor. You see, when you compile your regular C program into an executable, the compiler will transform all your code into some very, very, very basic instructions. Those instructions are what the CPU will understand and execute. By combining a lot of small, simple and specific instructions, you can do more complex things. That’s the basis of any programming language, of course, but with assembly, the building blocks that you get are very limited. Before I’ll talk about instructions, I want to explain two concepts first which you’ll need to follow the rest of the story.

The stack

First I’ll explain what “the stack” is.  You may have heard of it before, or maybe you didn’t, but the important thing to know is that when you write code, you have two types of memory:

  • The first one is your “dynamic memory”, that’s when you call ‘malloc’ or ‘new’ to allocate new memory, this goes from your RAM upward (or left-to-right), in the sense that if you allocate 10 bytes, you’ll first get address 0x1000 for example, then when you allocate another 30 bytes, you’ll get address 0x100A, then if you allocate another 16 bytes, you’ll get 0x1028, etc.
  • The second type of memory that you have access to is the stack, which is different: it grows downward (or right-to-left), and it's used to store local variables in a function. So if you start with the stack at address 0x8000, then when you enter a function with 16 bytes worth of local variables, your stack now points to address 0x7FF0; then you enter another function with 64 bytes worth of local variables, and your stack now points to address 0x7FB0, etc. The stack works by “stacking” data into it: you “push” data onto the stack, which puts the variable/data into the stack and moves the stack pointer down. You can't remove an item from anywhere in the stack; you can only ever remove (pop) the last item you added (pushed). A stack is actually an abstract data type, like a list, an array, a dictionary, etc. You can read more about what a stack is on wikipedia and it shows you how you can add and remove items on a stack with this image:

The image shows you what we call a LIFO (Last-In-First-Out) and that’s what a stack is. In the case of the computer’s stack, it grows downward in the RAM (as opposed to upward in the above image) and is used to store local variables as well as the return address for your function (the instruction that comes after the call to your function in the parent function). So when you look at a stack, you will see multiple “frames”, you’ll see your current function’s stack with all its variables, then the return address of the function that called it, and above it, you’ll see the previous function’s frame with its own variables and the address of the function that called it, and above, etc. all the way to the main function which resides at the top of the stack.

Here is another image that exemplifies this:
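If you want to play with this push/pop behaviour yourself, a Python list works as exactly the kind of LIFO described above (the frame names are just labels for illustration):

```python
# A list used as a LIFO stack: push with append(), pop with pop().
# Only the most recently pushed item can be removed.
stack = []
stack.append("main's frame")    # push
stack.append("return address")  # push
stack.append("f's locals")      # push

top = stack.pop()               # pop removes the last item pushed
print(top)                      # f's locals
```

After the pop, the two earlier items are still on the stack, in the order they were pushed, just like the caller's frame sitting above the current one.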

The registers

The second thing I want you to understand is that the processor has multiple “registers”. You can think of a register as a variable, but there are only 9 total registers on x86, with only 7 of them usable. So, on the x86 processor, the various registers are: EAX, EBX, ECX, EDX, EDI, ESI, EBP, ESP, EIP.

There are two registers in there that are special:

  • The EIP (Instruction Pointer) contains the address of the current instruction being executed.
  • The ESP (Stack Pointer) contains the address of the stack.

Access to the registers is extremely fast when compared to accessing the data in the RAM (the stack also resides on the RAM, but towards the end of it) and most operations (instructions) have to happen on registers. You’ll understand more when you read below about instructions, but basically, you can’t use an instruction to say “add value A to value B and store it into address C”, you’d need to say “move value A into register EAX, then move value B into register EBX, then add register EAX to register EBX and store the result in register ECX, then store the value of register ECX into the address C”.
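To make that register dance concrete, here is a tiny Python sketch that models registers and memory as dictionaries and walks through the MOV/ADD sequence just described (the addresses and values are made up for illustration):

```python
# Model the MOV/ADD sequence described above: values are first moved
# from memory into registers, added there, and the result moved back.
regs = {"EAX": 0, "EBX": 0, "ECX": 0}
mem = {0x1000: 5, 0x1004: 7, 0x1008: 0}  # addresses A, B, C (made up)

regs["EAX"] = mem[0x1000]                # MOV: value A into register EAX
regs["EBX"] = mem[0x1004]                # MOV: value B into register EBX
regs["ECX"] = regs["EAX"] + regs["EBX"]  # ADD: EAX + EBX, result in ECX
mem[0x1008] = regs["ECX"]                # MOV: store ECX at address C

print(mem[0x1008])  # 12
```

Four tiny steps for one addition: that is exactly why compiled code looks so verbose compared to the one-line C statement it came from.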

The instructions

Let’s go back to explaining instructions now. As I explained before, the instructions are the basic building blocks of the programs, and they are very simple, they take the form of:


INS OP1, OP2, OP3

Where “INS” is the instruction and OP1, OP2, OP3 are what we call the “operands”; most instructions will only take 2 operands, some will take no operands, some will take one operand and others will take 3 operands. The operands are usually registers. Sometimes, the operand can be an actual value (what we call an “immediate value”) like “1”, “2” or “3”, etc., and sometimes the operand is a relative position from a register, like for example “[%eax + 4]”, meaning the address pointed to by the %eax register + 4 bytes. We'll see more of that shortly. For now, let's give you the list of the most common and used instructions:

  • “MOV”: move data from one operand into another
  • “ADD/SUB/MUL/DIV”: Add, Subtract, Multiply, Divide one operand with another and store the result in a register
  • “AND/OR/XOR/NOT/NEG”: Perform logical and/or/xor/not/negate operations on the operand
  • “SHL/SHR”: Shift Left/Shift Right the bits in the operand
  • “CMP/TEST”: Compare one register with an operand
  • “JMP/JZ/JNZ/JB/JS/etc.”: Jump to another instruction (Jump unconditionally, Jump if Zero, Jump if Not Zero, Jump if Below, Jump if Sign, etc.)
  • “PUSH/POP”: Push an operand onto the stack, or pop a value from the stack into a register
  • “CALL”: Call a function. This is the equivalent of doing a “PUSH %EIP+4” followed by a “JMP”. I’ll get into calling conventions later.
  • “RET”: Return from a function. This is the equivalent of doing a “POP %EIP”

That’s about it, that’s what most programs are doing. Of course, there’s a lot more instructions, you can see a full list here, but you’ll see that most of the other instructions are very obscure or very specific or variations on the above instructions, so really, this represents most of the instructions you’ll ever encounter.

I want to explain one thing before we go further down: there is an additional register I didn’t mention before called the FLAGS register, which is basically just a status register that contains “flags” indicating when some arithmetic condition happened on the last arithmetic operation. For example, if you add 1 to 0xFFFFFFFF, it will give you ‘0’ but the “Carry flag” will be set in the FLAGS register, since the result doesn’t fit in 32 bits. If you subtract 5 from 0, it will give you 0xFFFFFFFB and the “Sign flag” will be set because the result is negative, and if you subtract 3 from 3, the result will be zero and the “Zero flag” will be set.

I’ve shown you the “CMP” instruction which is used to compare a register with an operand, but you might be wondering, “What does it mean exactly to ‘compare’?” Well, it’s simple: the CMP instruction is the same thing as the SUB instruction, in that it subtracts one operand from another, but the difference is that it doesn’t store the result anywhere. However, it does get your flags updated in the FLAGS register. For example, if I wanted to compare the %EAX register with the value ‘2’, and %EAX contains the value 3, this is what’s going to happen: the processor will subtract 2 from the value, the result will be 1, but you don’t care about that; what you care about is that the ZF (Zero flag) is not set and the SF (Sign flag) is not set, which means that %eax and ‘2’ are not equal (otherwise ZF would be set), and that the value in %eax is greater than 2 (because SF is not set), so you know that “%eax > 2” and that’s what the CMP does.

The TEST instruction is very similar but it does a logical AND on the two operands for testing, so it’s used for comparing logical values instead of arithmetic values (“TEST %eax, 1” can be used to check if %eax contains an odd or even number for example).

This is useful because the next bunch of instructions I explained in the list above is conditional Jump instructions, like “JZ” (jump if zero) or “JB” (jump if below), or “JS” (jump if sign), etc. This is what is used to implement “if, for, while, switch/case, etc.” it’s as simple as doing a “CMP” followed by a “JZ” or “JNZ” or “JB”, “JA”, “JS”, etc.

And if you’re wondering what’s the difference between a “Jump if below”, a “Jump if sign” and a “Jump if lower”, since they all seem to mean that the comparison gave a negative result: the “jump if below” is used for unsigned integers, while “jump if lower” is used for signed integers, and “jump if sign” can be misleading, because an unsigned 3 – 4 doesn’t give a negative result, it wraps around to a very high positive one. In practice, JB checks the Carry Flag, JS checks the Sign Flag, and JL checks whether the Sign Flag differs from the Overflow flag. See the Conditional Jump page for more details.

A practical example

Here’s a very small and simple practical example, if you have a simple C program like this:

int main() {
   return add_a_and_b(2, 3);
}

int add_a_and_b(int a, int b) {
   return a + b;
}

It would compile into something like this:

_main:
   push   3                ; Push the second argument '3' into the stack
   push   2                ; Push the first argument '2' into the stack
   call   _add_a_and_b     ; Call the _add_a_and_b function. This will put the address of the next
                           ; instruction (add) into the stack, then it will jump into the _add_a_and_b
                           ; function by putting the address of the first instruction in the _add_a_and_b
                           ; label (push %ebx) into the EIP register
   add    %esp, 8          ; Add 8 to the esp, which effectively pops out the two values we just pushed into it
   ret                     ; Return to the parent function.... 

_add_a_and_b:
   push   %ebx             ; We're going to modify %ebx, so we need to push it to the stack
                           ; so we can restore its value when we're done
   mov    %eax, [%esp+8]   ; Move the first argument (8 bytes above the stack pointer) into EAX
   mov    %ebx, [%esp+12]  ; Move the second argument (12 bytes above the stack pointer) into EBX
   add    %eax, %ebx       ; Add EAX and EBX and store the result into EAX
   pop    %ebx             ; Pop EBX to restore its previous value
   ret                     ; Return back into the main. This will pop the value on the stack (which was
                           ; the address of the next instruction in the main function that was pushed into
                           ; the stack when the 'call' instruction was executed) into the EIP register

Yep, something as simple as that can be quite complicated in assembly. Well, it’s not really that complicated actually, but a couple of things can be confusing.

You have only 7 usable registers, and one stack. Every function gets its arguments passed through the stack, and can return its return value through the %eax register. If every function modified every register, then your code would break, so every function has to ensure that the other registers are unmodified when it returns (other than %eax). You pass the arguments on the stack and your return value through %eax, so what should you do if you need to use a register in your function? Easy: you keep a copy on the stack of any registers you’re going to modify so you can restore them at the end of your function. In the _add_a_and_b function, I did that for the %ebx register as you can see. For more complex functions, it can get a lot more complicated than that, but let’s not get into that for now (for the curious: compilers will create what we call a “prologue” and an “epilogue” in each function. In the prologue, you store the registers you’re going to modify, and set up the %ebp (base pointer) register to point to the base of the stack when your function was entered, which allows you to access things without keeping track of the pushes/pops you do throughout the function; then in the epilogue, you pop the registers back and restore %esp to the value that was saved in %ebp, before you return).

The second thing you might be wondering about is with these lines:

mov %eax, [%esp+8]
mov %ebx, [%esp+12]

And to explain it, I will simply show you this drawing of the stack’s contents when we call those two instructions above:

For the purposes of this exercise, we’re going to assume that the _main function is located in memory at the address 0xFFFF0000, and that each instruction is 4 bytes long (the size of each instruction can vary depending on the instruction and on its operands). So you can see, we first pushed 3 into the stack, %esp was lowered, then we pushed 2 into the stack, %esp was lowered, then we did a ‘call _add_a_and_b’, which stored the address of the next instruction (4 instructions into the main, so ‘_main+16’) into the stack and %esp was lowered, then we pushed %ebx, which I assumed here contained a value of 0, and %esp was lowered again. If we now wanted to access the first argument to the function (2), we need to access %esp+8, which will let us skip the saved %ebx and the ‘Return address’ that are in the stack (since we’re working with 32 bits, each value is 4 bytes). And in order to access the second argument (3), we need to access %esp+12.

Binary or assembly?

One question that may (or may not) be popping into your mind now is “wait, isn’t this supposed to be the ‘computer language’, so why isn’t this binary?” Well, it is… in a way. As I explained earlier, “the assembly language is a textual representation of the binary instructions given to the microprocessor”, what it means is that those instructions are given to the processor as is, there is no transformation of the instructions or operands or anything like that. However, the instructions are given to the microprocessor in binary form, and the text you see above is just the textual representation of it.. kind of like how “68 65 6c 6c 6f” is the hexadecimal representation of the ASCII text “hello”. What this means is that each instruction in assembly language, which we call a ‘mnemonic’ represents a binary instruction, which we call an ‘opcode’, and you can see the opcodes and mnemonics in the list of x86 instructions I gave you above. Let’s take the CALL instruction for example. The opcode/mnemonic list is shown as:

Opcode Mnemonic Description
E8 cw CALL rel16 Call near, relative, displacement relative to next instruction
E8 cd CALL rel32 Call near, relative, displacement relative to next instruction
FF /2 CALL r/m16 Call near, absolute indirect, address given in r/m16
FF /2 CALL r/m32 Call near, absolute indirect, address given in r/m32
9A cd CALL ptr16:16 Call far, absolute, address given in operand
9A cp CALL ptr16:32 Call far, absolute, address given in operand
FF /3 CALL m16:16 Call far, absolute indirect, address given in m16:16
FF /3 CALL m16:32 Call far, absolute indirect, address given in m16:32

This means that this same “CALL” mnemonic can have multiple encodings. Actually, there are four different possibilities, each having a 16 bits and a 32 bits variant. The first possibility is to call a function with a relative displacement (Call the function 100 bytes below this current position), or an absolute address given in a register (Call the function whose address is stored in %eax), or an absolute address given as a pointer (Call the function at address 0xFFFF0100), or an absolute address given as an offset to a segment (I won’t explain segments now). In our example above, the “call _add_a_and_b” was probably stored as a call relative to the current position, with the target 12 bytes below the current instruction (4 bytes per instruction, and we have the CALL, ADD, RET instructions to skip). This means that the instruction in the binary file was encoded as “E8 0C 00 00 00” (the E8 opcode to mean a “CALL near, relative”, and “0C 00 00 00” to mean 12 bytes relative to the current instruction, with the displacement stored in little-endian byte order). Now, the most observant of you have probably noticed that this CALL instruction takes 5 bytes total, not 4, but as I said above, we will assume it’s 4 bytes per instruction just for the sake of keeping things simple, but yes, the CALL (in this case) is 5 bytes, and other instructions will sometimes have more or less bytes as well.

I chose the CALL instruction above as an example because I think it’s the least complicated to explain.. other instructions have even more complicated opcodes and operands (see the ADD and ADC (Add with Carry) instructions for example; you’ll notice some opcodes are even shared between them, so in a sense they are the same instruction, but it’s easier to give them separate mnemonics to differentiate their behaviors).

Here’s a screenshot showing a side by side view of the Assembly of a function with the hexadecimal view of the binary:

As you can see, I have my cursor on address 0xFFF6E1D6 on the assembly view on the left, which is also highlighted on the hex view on the right. That address is a CALL instruction, and you can see the equivalent hex of “E8 B4 00 00 00”, which means it’s a CALL near, relative (E8 being the opcode for it) and the function is 0xB4 (180) bytes below our current position of 0xFFF6E1D6.

If you open the file with a hexadecimal editor, you’ll only see the hex view on the right, but you need to put the file into a Disassembler (such as the IDA disassembler which I’m using here, but there are cheaper alternatives as well, the list can be long), and the disassembler will interpret those binary opcodes to show you the textual assembly representation which is much much easier to read.

Some actual reverse engineering

Now that you have the basics, let’s do a quick reverse engineering exercise… This is a very simple function that I’ve reversed recently; it comes from the SiliconInit part of the FSP, and it’s used to validate the UPD configuration structure (used to tell it what to do).

Here is the Assembly code for that function:

This was disassembled using IDA 7.0 (The Interactive DisAssembler) which is an incredible (but expensive) piece of software. There are other disassemblers which can do similar jobs, but I prefer IDA personally. Let’s first explain what you see on the screen.

On the left side, you see “seg000:FFF40xxx” this means that we are in the segment “seg000” at the address 0xFFF40xxx. I won’t explain what a segment is, because you don’t need to know it. The validate_upd_config function starts at address 0xFFF40311 in the RAM, and there’s not much else to understand. You can see how the address increases from one instruction to the next, it can help you calculate the size in bytes that each instruction takes in RAM for example, if you’re curious of course… (the XOR is 2 bytes, the CMP is 2 bytes, etc.).

As you’ve seen in my previous example, anything after a semicolon (“;”) is considered a comment and can be ignored. The “CODE XREF” comments are added by IDA to tell us that this code has a cross-reference (is being called by some other code). So when you see “CODE XREF: validate_upd_config+9” (at 0xFFF40363, the RETN instruction), it means this instruction is being referenced from the function validate_upd_config, and the “+9” means 9 bytes into the function (so since the function starts at 0xFFF40311, it means it’s being referenced from the instruction at address 0xFFF4031A). The little “up” arrow next to it means that the reference comes from above the current position in the code, and if you follow the grey lines on the left side of the screen, you can follow that reference up to the address 0xFFF4031A, which contains the instruction “jnz short locret_FFF40363”. I assume the “j” letter right after the up arrow is to tell us that the reference comes from a “jump” instruction.

As you can see on the left side of the screen, there are a lot of arrows, which means that there’s a lot of jumping around in the code, even though it’s not immediately obvious. The awesome IDA software has a “layout view” which gives us a much nicer view of the code, and it looks like this:

Now you can see each block of code separately in their own little boxes, with arrows linking all of the boxes together whenever a jump happens. The green arrows mean that it’s a conditional jump when the condition is successful, while the red arrows means the condition was not successful. This means that a “JZ” will show a green arrow towards the code it would jump to if the result is indeed zero, and a red arrow towards the block where the result is not zero. A blue arrow means that it’s an unconditional jump.

I almost always do my reverse engineering using the layout view; I find it much easier to read and follow. But for the purpose of this exercise, I will use the regular linear view instead, as I think it will be easier for you to follow along. The reason is mostly that the layout view doesn’t display the address of each instruction, and it’s easier to have you follow along if I can point out exactly which instruction I’m looking at by mentioning its address.

Now that you know how to read the assembly code and understand the various instructions, I feel you should be ready to reverse engineer this very simple assembly code (even though it might seem complex at first). I just need to give you the following hints first:

  • Because I’ve already reverse engineered it, you get the beautiful name “validate_upd_config” for the function, but technically, it was simply called “sub_FFF40311”
  • I had already reverse engineered the function that called it so I know that this function is receiving its arguments in an unusual way. The arguments aren’t pushed to the stack, instead, the first argument is stored in %ecx, and the second argument is stored in %edx
  • The first argument (%ecx, remember?) is an enum to indicate what type of UPD structure to validate, let me help you out and say that type ‘3’ is the FSPM_UPD (The configuration structure for the FSPM, the MemoryInit function), and that type ‘5’ is the FSPS_UPD (The configuration structure for the FSPS, the SiliconInit function).
  • Reverse engineering is really about reading one line at a time, in a sequential manner, keep track of which blocks you reversed and be patient. You can’t look at it and expect to understand the function by viewing the big picture.
  • It is very very useful in this case to have a dual monitor, so you can have one monitor for the assembly, and the other monitor for your C code editor. In my case, I actually recently bought an ultra-wide monitor and I split screen between my IDA window and my emacs window and it’s great. It’s hard otherwise to keep going back and forth between the assembly and the C code. That being said, I would suggest you do the same thing here and have a window on the side showing you the assembly image above (not the layout view) while you read the explanation on how to reverse engineer it below.

Got it? All done? No? Stop sweating and hyperventilating… I’ll explain exactly how to reverse engineer this function in the next paragraph, and you will see how simple it turns out to be!

Let’s get started!

The first thing I do is write the function in C. Since I know the name and its arguments already, I’ll do that:

void validate_upd_config (uint8_t action, void *config) {
}

Yeah, there’s not much to it yet. I set it to return “void” because I don’t know if it returns anything else, and I gave the first argument “action” as a “uint8_t” because in the parent function it’s used as a single-byte register (I won’t explain for now how to differentiate 1-byte, 2-byte, 4-byte and 8-byte registers). The second argument is a pointer, but I don’t know what kind of structure it points to exactly, so I just set it as a void *.

The first instruction is a “xor eax, eax”. What does this do? It XORs the eax register with the eax register and stores the result in the eax register itself, which is the same thing as “mov eax, 0”, because 1 XOR 1 = 0 and 0 XOR 0 = 0, so if every bit in the eax register is logically XORed with itself, it will give 0 for the result. If you’re asking yourself “Why did the compiler decide to do ‘xor eax, eax’ instead of ‘mov eax, 0’?” then the answer is simple: “Because it takes fewer CPU clock cycles to do a XOR than to do a move”, which means it’s more optimized and will run faster. Besides, the XOR takes 2 bytes as you can see above (the address of the instructions jumped from FFF40311 to FFF40313), while a “mov eax, 0” would have taken 5 bytes. So it also helps keep the code smaller.

Alright, so now we know that eax is equal to 0; let’s keep that in mind and move along (I like to keep track of things like that as comments in my C code). The next instruction does a “cmp ecx, 3”, so it’s comparing ecx, which we already know is our first argument (uint8_t action), with the value 3. OK, it’s a comparison, not much to do here; again, let’s keep that in mind and continue… The next instruction does a “jnz short loc_FFF40344”, which is more interesting: if the previous comparison is NOT ZERO, then jump to the label loc_FFF40344 (for now, ignore the “short”; it just helps us differentiate between the various mnemonics, and it means that the jump is a relative offset that fits in a “short word”, which means 2 bytes, and you can confirm that the jnz instruction does indeed take only 2 bytes of code). Great, so there’s a jump if the result is NOT ZERO, which means that if the result is zero, the code will just continue; in other words, if the ecx register (action variable) is EQUAL (subtraction is zero) to 3, the code will continue down to the next instruction instead of jumping… Let’s do that, and in the meantime we’ll update our C code:

void validate_upd_config (uint8_t action, void *config) {
   // eax = 0
   if (action == 3) {
      // 0xFFF40318
   } else {
      // loc_FFF40344
   }
}

The next instruction is “test edx, edx”. We know that the edx register is our second argument, which is the pointer to the configuration structure. As I explained above, the “test” is just like a comparison, but it does an AND instead of a subtraction, so basically, you AND edx with itself… well, of course, that has no consequence, 1 AND 1 = 1, and 0 AND 0 = 0, so why is it useful to test a register against itself? Simply because the TEST will update our FLAGS register… so when the next instruction is “JZ”, it basically means “Jump if the edx register was zero”… And yes, doing a “TEST edx, edx” is more optimized than doing a “CMP edx, 0”; you’re starting to catch on, yay!

And indeed, the next instruction is “jz locret_FFF40363”, so if the edx register is ZERO, then jump to locret_FFF40363, and if we look at that locret_FFF40363, it’s a very simple “retn” instruction. So our code becomes:

void validate_upd_config (uint8_t action, void *config) {
  // eax = 0
  if (action == 3) {
    if (config == NULL)
      return;
  } else {
    // loc_FFF40344
  }
}

Next! Now it gets slightly more complicated… the instruction is: “cmp dword ptr [edx], 554C424Bh”, which means we do a comparison of a dword (4 bytes), of the data pointed to by the pointer edx, with no offset (“[edx]” is the same as saying “edx[0]” if it was a C array for example), and we compare it to the value 554C424Bh… the “h” at the end means it’s a hexadecimal value, and with experience you can quickly notice that the hexadecimal value is all within the ASCII range, so using a Hex to ASCII converter, we realize that those 4 bytes represent the ASCII letters “KBLU” (which is why I manually added them as a comment to that instruction, so I won’t forget). So basically the instruction compares the first 4 bytes of the structure (the content pointed to by the edx pointer) to the string “KBLU”. The next instruction does a “jnz loc_FFF4035E” which means that if the comparison result is NOT ZERO (so, if they are not equal) we jump to loc_FFF4035E.

Instead of continuing sequentially, I will see what that loc_FFF4035E contains (of course, I did the same thing in all the previous jumps, and had to decide if I wanted to continue reverse engineering the jump or the next instruction; in this case, it seems better for me to follow the jump, you’ll see why soon). The loc_FFF4035E label contains the following instruction: “mov eax, 80000002h”, which means it stores the value 0x80000002 into the eax register, and then it jumps to (not really, it just naturally flows to the next instruction, which happens to be the label) locret_FFF40363, which is just a “retn”. This makes our code into this:

uint32_t validate_upd_config (uint8_t action, void *config) {
  // eax = 0
  if (action == 3) {
    if (config == NULL)
       return 0;
    if (((uint32_t *)config)[0] != 0x554C424B)
       return 0x80000002;
  } else {
    // loc_FFF40344
  }
}

The observant here will notice that I’ve changed the function prototype to return a uint32_t instead of “void” and my previous “return” has become “return 0” and the new code has a “return 0x80000002”. That’s because I realized at this point that the “eax” register is used to return a uint32_t value. And since the first instruction was “xor eax, eax”, and we kept in the back of our mind that “eax is initialized to 0”, it means that the use case with the (config == NULL) will return 0. That’s why I made all these changes…

Very well, let’s go back to where we were; since we’ve exhausted this jump, we’ll jump back in reverse to go back to the address FFF40322 and continue from there to the next instruction. It’s a “cmp dword ptr [edx+4], 4D5F4450h”, which compares the dword at edx+4 to 0x4D5F4450, which I know to be the ASCII for “PD_M”; this means that the last 3 instructions are used to compare the first 8 bytes of our pointer to “KBLUPD_M”… ohhh, light bulb above our heads, it’s comparing the pointer to the Signature of the FSPM_UPD structure (don’t forget, you weren’t supposed to know that the function is called validate_upd_config, or that the argument is a config pointer… just that it’s a pointer)! OK, now it makes sense, and while we’re at it (since we are, of course, reading the FSP integration guide PDF), we also realize what the 0x80000002 actually means. At this point, our code now becomes:

EFI_STATUS validate_upd_config (uint8_t action, void *config) {
  if (action == 3) {
    FSPM_UPD *upd = (FSPM_UPD *) config;
    if (upd == NULL)
       return EFI_SUCCESS;
    if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */)
       return EFI_INVALID_PARAMETER;
  } else {
    // loc_FFF40344
  }
}

Yay, this is starting to look like something… Now you probably got the hang of it, so let’s do things a little faster now.

  • The next line “cmp [edx+28h], eax” compares edx+0x28 to eax. Thankfully, we know now that edx points to the FSPM_UPD structure, and we can calculate that at offset 0x28 inside that structure, it’s the field StackBase within the FspmArchUpd field…
  • and also, we still have in the back of our minds that ‘eax’ is initialized to zero, so, we know that the next 2 instructions are just checking if upd->FspmArchUpd.StackBase is == NULL.
  • Then we compare the StackSize with 0x26000, but the comparison is using “jb” for the jump, which is “jump if below”, so it checks if StackSize < 0x26000,
  • finally it does a “test” with “edx+30h” (which is the BootloaderTolumSize field) and 0xFFF, then it does an unconditional jump to loc_FFF4035C, which itself does a “jz” to the return..
  • which means if (BootloaderTolumSize  & 0xFFF  == 0) it will return whatever EAX contained (which is zero),
  • but if it doesn’t, then it will continue to the next instruction which is the “mov eax, 80000002h”.

So, we end up with this code:

EFI_STATUS validate_upd_config (uint8_t action, void *config) {
  // eax = 0
  if (action == 3) {
    FSPM_UPD *upd = (FSPM_UPD *) config;
    if (upd == NULL)
       return 0;
    if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */)
       return 0x80000002;
    if (upd->FspmArchUpd.StackBase == NULL)
       return 0x80000002;
    if (upd->FspmArchUpd.StackSize < 0x26000)
       return 0x80000002;
    if (upd->FspmArchUpd.BootloaderTolumSize & 0xFFF)
       return 0x80000002;
  } else {
    // loc_FFF40344
  }
  return EFI_SUCCESS;
}

Great, we just solved half of our code! Don’t forget, we jumped one way instead of another at the start of the function; now we need to go back up and explore the second branch of the code (at offset 0xFFF40344). The code is very similar, but it checks for a “KBLUPD_S” Signature, and nothing else. Now we can also remove any comments/notes we have (such as the note that eax is initialized to 0), clean up, and simplify the code if there is a need.

So our function ends up being (this is the final version of the function):

EFI_STATUS validate_upd_config (uint8_t action, void *config) {
  if (action == 3) {
    FSPM_UPD *upd = (FSPM_UPD *) config;
    if (upd == NULL)
       return EFI_SUCCESS;
    if (upd->FspUpdHeader.Signature != 0x4D5F4450554C424B /* 'KBLUPD_M' */)
       return EFI_INVALID_PARAMETER;
    if (upd->FspmArchUpd.StackBase == NULL)
       return EFI_INVALID_PARAMETER;
    if (upd->FspmArchUpd.StackSize < 0x26000)
       return EFI_INVALID_PARAMETER;
    if (upd->FspmArchUpd.BootloaderTolumSize & 0xFFF)
       return EFI_INVALID_PARAMETER;
  } else {
    FSPS_UPD *upd = (FSPS_UPD *) config;
    if (upd == NULL)
        return EFI_SUCCESS;
    if (upd->FspUpdHeader.Signature != 0x535F4450554C424B /* 'KBLUPD_S' */)
        return EFI_INVALID_PARAMETER;
  }
  return EFI_SUCCESS;
}

Now this wasn’t so bad, was it? I mean, it’s time consuming, sure, it can be a little disorienting if you’re not used to it, and you have to keep track of which branches (which blocks in the layout view) you’ve already gone through, etc. but the function turned out to be quite small and simple. After all, it was mostly only doing CMP/TEST and JZ/JNZ.

That’s pretty much all I do when I reverse engineer: I go line by line, I understand what it does, I try to figure out how it fits into the bigger picture, and I write equivalent C code to keep track of what I’m doing and to be able to understand what happens, so that I can later figure out what the function does exactly… Now try to imagine doing that for hundreds of functions, some of which look like this (random function taken from the FSPM module):

You can see on the right, the graph overview which shows the entirety of the function layout diagram. The part on the left (the assembly) is represented by the dotted square on the graph overview (near the middle). You will notice some arrows that are thicker than the others, that’s used in IDA to represent loops. On the left side, you can notice one such thick green line coming from the bottom and the arrow pointing to a block inside our view. This means that there’s a jump condition below that can jump back to a block that is above the current block and this is basically how you do a for/while loop with assembly, it’s just a normal jump that points backwards instead of forwards.

Finally, the challenge!

At the beginning of this post, I mentioned a challenging function to reverse engineer. It’s not extremely challenging: it’s complex enough that you can understand the kind of things I have to deal with sometimes, but simple enough that anyone who was able to follow up until now should be able to understand it (and maybe even reverse engineer it on their own).

So, without further ado, here’s this very simple function:

Since I’m a very nice person, I renamed the function so you won’t know what it does, and I removed my comments so it’s as virgin as it was when I first saw it. Try to reverse engineer it. Take your time, I’ll wait:

Alright, so, the first instruction is a “call $+5”, what does that even mean?

  1. When I looked at the hex dump, the instruction was simply “E8 00 00 00 00” which according to our previous CALL opcode table means “Call near, relative, displacement relative to next instruction”, so it wants to call the instruction 0 bytes from the next instruction. Since the call opcode itself is taking 5 bytes, that means it’s doing a call to its own function but skipping the call itself, so it’s basically jumping to the “pop eax”, right? Yes…  but it’s not actually jumping to it, it’s “calling it”, which means that it just pushed into the stack the return address of the function… which means that our stack contains the address 0xFFF40244 and our next instruction to be executed is the one at the address 0xFFF40244. That’s because, if you remember, when we do a “ret”, it will pop the return address from the stack into the EIP (instruction pointer) register, that’s how it knows where to go back when the function finishes.
  2. So, then the instruction does a “pop eax” which will pop that return address into EAX, thus removing it from the stack and making the call above into a regular jump (since there is no return address in the stack anymore).
  3. Then it does a “sub eax, 0FFF40244h”, which means it’s subtracting 0xFFF40244 from eax (which should contain 0xFFF40244), so eax now contains the value “0”, right? You bet!
  4. Then it adds to eax, the value “0xFFF4023F”, which is the address of our function itself. So, eax now contains the value 0xFFF4023F.
  5. It will then subtract from EAX the value pointed to by [eax-15], which means the dword (4 bytes) value at the offset 0xFFF4023F – 0xF, so the value at 0xFFF40230, right… that value is 0x1AB (yep, I know, you didn’t have this information)… so, 0xFFF4023F – 0x1AB = 0xFFF40094!
  6. And then the function returns.. with the value 0xFFF40094 in EAX, so it returns 0xFFF40094, which happens to be the pointer to the FSP_INFO_HEADER structure in the binary.

So, the function just returns 0xFFF40094, but why did it do it in such a convoluted way? The reason is simple: the FSP-S code is technically meant to be loaded in RAM at the address 0xFFF40000, but it can actually reside anywhere in RAM when it gets executed. Coreboot, for example, doesn’t load it at the right memory address when it executes it. Remember, most of the jumps and calls use relative addresses, so the code works regardless of where you put it in memory, but returning the wrong absolute address for a structure in memory wouldn’t work. So instead of returning the wrong address and crashing, the code dynamically verifies whether it has been relocated; if it has, it calculates how far away it is from where it’s supposed to be, and from that, where in memory the FSP_INFO_HEADER structure ended up being.

Here’s the explanation why:

  • If the FSP was loaded into a different memory address, then the “call $+5” would put the exact memory address of the next instruction into the stack, so when you pop it into eax and subtract the expected address 0xFFF40244 from it, eax will contain the offset from where it was supposed to be.
  • Above, we said eax would be equal to zero. Yes, that’s true, but only in the case where the FSP is at the right memory address, as expected; otherwise, eax would simply contain the offset. Then you add to it 0xFFF4023F, which is the address of our function, and with the offset, that means eax now contains the exact memory address of the current function, wherever it was actually placed in RAM!
  • Then it grabs the value 0x1AB (because that value is stored in RAM 15 bytes before the start of the function, that will work just fine) and subtracts it from our current position, which gives us the address in RAM of the FSP_INFO_HEADER (because the compiler knows that the structure is located exactly 0x1AB bytes before the current function). This just makes everything relative.

Isn’t that great!? 😉 It’s so simple, but it does require some thinking to figure out what it does and some thinking to understand why it does it that way… but then you end up with the problem of “How do I write this in C”? Honestly, I don’t know how, I just wrote this in my C file:

// Use Position-independent code to make this relocatable
void *get_fsp_info_header() {
    return (void *) 0xFFF40094;
}
I think the compiler takes care of doing all that magic on its own when you use the -fPIC compiler option (for gcc), which means “Position-Independent Code”.

What this means for Purism

On my side, I’ve finished reverse engineering the FSP-S entry code—from the entry point (FspSiliconInit) all the way to the end of the function and all the subfunctions that it calls.

This only represents 9 functions, however, and about 115 lines of C code; I haven’t yet fully figured out where exactly it’s going in order to execute the rest of the code. What happens is that the last function it calls (it actually jumps into it) grabs a variable from some area in memory, and from within that variable, it copies a value into ESP, thus replacing our stack pointer, and then it does a “RETN”… which means that it’s not actually returning to the function that called it (coreboot), it’s returning… “somewhere”, depending on what the new stack contains. I don’t know where (or how) this new stack is created, so I need to track it down in order to find the return address and where the “retn” is returning us into, so I can unlock plenty of new functions and continue reverse engineering this.

I’ve already made some progress on that front (I know where the new stack tells us to return into) but you will have to wait until my next blog post before I can explain it all to you. It’s long and complicated enough that it needs its own post, and this one is long enough already.

Other stories from strange lands

You never really know what to expect when you start reverse engineering assembly. Here are some other stories from my past experiences.

  • I once spent a few days reverse engineering a function; I was about 30% through it when I finally realized that the function was… the C++ “+ operator” of the std::string class (which, by the way, with its use of C++ templates, was excruciatingly hard to understand)!
  • I once had to reverse engineer over 5000 lines of assembly code that all resolved into… 7 lines of C code. The code was for creating a hash and it was doing a lot of manipulation on data with different values on every iteration. There was a LOT of xor, or, and, shifting left and right of data, etc., which took maybe a hundred or so lines of assembly, and it was all inside a loop. The compiler had decided that, to optimize it, it would unroll the loop (this means that instead of doing a jmp, it just copy-pastes the same code again), so instead of reverse engineering the code once and then seeing that it’s a loop that runs 64 times, I had to reverse engineer the same code 64 times, because it was basically getting copy-pasted by the compiler into a single block. The compiler was “nice” enough to use completely different registers for every repetition of the loop, and the data was getting shifted in a weird way, using different constants and different variables at every iteration, and, as if that wasn’t enough, every 1/4th of the loop the algorithm changed, making it very difficult to predict the pattern. This forced me to completely reverse engineer the 5000+ assembly lines into C, then slowly refactor and optimize the C code until it became that loop with 7 lines of code inside it… If you’re curious you can see the code here at line 39, where there is some operation common to all iterations, then 4 different operations depending on which iteration we are doing; the variables used for each operation change after each iteration (P, PP, PPP and PPPP get swapped every time), and the constant values and the indices used are different for each iteration as well (see constants.h). It was complicated and took a long while to reverse engineer.
  • Below is the calling graph of the PS3 firmware I worked on some years ago. All of these functions have been entirely reverse engineered (each black rectangle is actually an entire function, and the arrows show which function calls which other function), and the result was the ps3xport tool. As you can see, sometimes a function can be challenging to reverse, and sometimes a single function can call so many nested functions that it can get pretty complicated to keep track of what is doing what and how everything fits together. That function at the top of the graph was probably very simple, but it brought with it so much complexity because of a single “call”:

Perseverance prevails

In conclusion:

  • Reverse engineering isn’t just about learning a new language; it’s a very different experience from “learning Java/Python/Rust after you’ve mastered C” because of the way it works. It can sometimes be very easy and boring, and sometimes it will be very challenging even for a very simple piece of code.
  • It’s all about perseverance, being very careful (it’s easy to get lost or make a mistake, and very hard to track down and fix a mistake/typo if you make one), and being very patient. We’re talking days, weeks, months. That’s why reverse engineering is something that very few people do (compared to the number of people who do general software development). Remember also that our first example was 82 bytes of code, and the second one was only 19 bytes long, and most of the time, when you need to reverse engineer something, it’s many hundreds of KBs of code.

All that being said, the satisfaction you get when you finish reverse engineering some piece of code, when you finally understand how it works and can reproduce its functionality with open source software of your own, cannot be described with words. The feeling of achievement that you get makes all the efforts worth it!

I hope this write-up helps everyone get a fresh perspective on what it means to “reverse engineer the code”, why it takes so long, and why it’s rare to find someone with the skills, experience and patience to do this kind of stuff for months—as it can be frustrating, and we sometimes need to take a break from it and do something else in order to renew our brain cells.

17 November, 2017 08:00PM by Youness Alaoui

hackergotchi for AIMS Desktop developers

AIMS Desktop developers

I am now a Debian Developer

It finally happened

On the 6th of April 2017, I finally took the plunge and applied for Debian Developer status. On 1 August, during DebConf in Montréal, my application was approved. If you’re paying attention to the dates you might notice that that was nearly 4 months ago already. I was trying to write a story about how it came to be, but it ended up long. Really long (current draft is around 20 times longer than this entire post). So I decided I’d rather do a proper bio page one day and just do a super short version for now so that someone might end up actually reading it.

How it started

In 1999… no wait, I can’t start there, as much as I want to, this is a short post, so… In 2003, I started doing some contract work for the Shuttleworth Foundation. I was interested in collaborating with them on tuXlabs, a project to get Linux computers into schools. For the few months before that, I was mostly using SuSE Linux. The open source team at the Shuttleworth Foundation all used Debian though, which seemed like a bizarre choice to me since everything in Debian was really old and its “boot-floppies” installer program kept crashing on my very vanilla computers. 

SLUG (Schools Linux Users Group) group photo. SLUG was founded to support the tuXlab schools that ran Linux.

My contract work then later turned into a full-time job there. This was a big deal for me, because I didn’t want to support Windows ever again, and I didn’t ever think that it would even be possible for me to get a job where I could work on free software full time. Since everyone in my team used Debian, I thought that I should probably give it another try. I did, and I hated it. One morning I went to talk to my manager, Thomas Black, and told him that I just don’t get it and I need some help. Thomas was a big mentor to me during this phase. He told me that I should try upgrading to testing, which I did, and somehow I ended up on unstable, and I loved it. Before that I used to subscribe to a website called “freshmeat” that listed new releases of upstream software, and I would then download and compile it myself so that I always had the newest versions of everything. Debian unstable made that whole process obsolete, and I became a huge fan of it. Early on I also hit a problem where two packages tried to install the same file, and I was delighted to discover how easily I could find package state and maintainer scripts and fix them to get my system going again.

Thomas told me that anyone could become a Debian Developer and maintain packages in Debian and that I should check it out and joked that maybe I could eventually snap up “”. I just laughed because back then you might as well have told me that I could run for president of the United States, it really felt like something rather far-fetched and unobtainable at that point, but the seed was planted :)

Ubuntu and beyond

Ubuntu 4.10 default desktop – Image from distrowatch

One day, Thomas told me that Mark is planning to provide official support for Debian unstable. The details were sparse, but this was still exciting news. A few months later Thomas gave me a CD with just “warty” written on it and said that I should install it on a server so that we can try it out. It was great, it used the new debian-installer and installed fine everywhere I tried it, and the software was nice and fresh. Later Thomas told me that this system is going to be called “Ubuntu” and the desktop edition has naked people on it. I wasn’t sure what he meant and was kind of dumbfounded so I just laughed and said something like “Uh ok”. At least it made a lot more sense when I finally saw the desktop pre-release version and when it got the byline “Linux for Human Beings”. Fun fact, one of my first jobs at the foundation was to register the domain name. Unfortunately I found it was already owned by a domain squatter and it was eventually handled by legal.

Closer to Ubuntu’s first release, Mark brought a whole bunch of Debian developers who were working on Ubuntu over to the foundation, and they were around for a few days getting some sun. Thomas kept saying “Go talk to them! Go talk to them!”, but I felt so intimidated by them that I couldn’t even bring myself to walk up and say hello.

In the interest of keeping this short, I’m leaving out a lot of history but later on, I read through the Debian packaging policy and really started getting into packaging and also discovered Daniel Holbach’s packaging tutorials on YouTube. These helped me tremendously. Some day (hopefully soon), I’d like to do a similar video series that might help a new generation of packagers.

I’ve also been following DebConf online since DebConf 7, which was incredibly educational for me. Little did I know that just 5 years later I would even attend one, and another 5 years after that I’d end up being on the DebConf Committee and have also already been on a local team for one.

DebConf16 Organisers, Photo by Jurie Senekal.

It’s been a long journey for me and I would like to help anyone who is also interested in becoming a Debian maintainer or developer. If you ever need help with your package, upload it to and if I have some spare time I’ll certainly help you out and sponsor an upload. Thanks to everyone who has helped me along the way, I really appreciate it!

17 November, 2017 05:48PM by jonathan

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Edging Closer – ODS Sydney

Despite the fact that OpenStack’s mission statement has not fundamentally changed since the inception of the project in 2010, we have found many different interpretations of the technology through the years. One of them was that OpenStack would be an all-inclusive anything-as-a-service, in a striking parallel to the many different definitions the “cloud” assumed at the time. At the OpenStack Developer Summit in Sydney, we found a project that is returning to its roots: scalable Infrastructure-as-a-Service. It turns out, that resonates well with its user base.

Although application developers have not endorsed the OpenStack API ecosystem as a whole, the foundation still notes significant increases in deployments (a 95% increase compared to 2016), and even public cloud use cases. It may just turn out that containers, which initially brought some consternation and many proclamations of the end of OpenStack, will actually co-exist. This makes sense: virtual machines are useful constructs to work with, and API-controlled management of network and storage primitives in a multi-tenant environment provides a sophistication and control currently not available in the context of application containers.

Stable, reliable and secure IaaS + Kubernetes: we are onto something.

Adding Kubernetes to the mix provides the “no vendor lock-in” so highly sought after by enterprises and telcos, and we at Canonical strive to provide the best experience using Kubernetes anywhere – on premise on OpenStack, on bare metal with MAAS, and in the public clouds at Amazon, Microsoft or Google.

Jointly, both technologies can fulfil the premise for developers and business alike: infrastructure as code, flexible orchestration and scale out models, in a multi-cloud setting.

Effective and efficient bare-metal management is key.

However, this journey towards software controlled infrastructure faces challenges in the form of time and money. If it takes too long or costs too much, your project is in peril.

Hence, our recommendation is to offer IaaS and Kubernetes quickly and see that your cloud is consumable by your developers right away. Avoid the situation where they may have bought into a vendor lock-in scenario on a public cloud due to delays in your IaaS offering.

MAAS is a crucial ingredient for your success as some applications require bare-metal or containers on bare metal; having a scalable provisioning system will enable developers immediately and positions your IaaS as the premier choice for their workloads.

Across three sessions, Canonical Founder and CEO Mark Shuttleworth gave a blueprint for a successful OpenStack implementation, outlining the two most impactful obstacles for the success of your cloud strategy:

  • If you are not providing a more cost attractive solution than the alternatives (VMware or Public Clouds), your OpenStack installation will not find the support of the business you need to sustain the effort.
  • If you are unable to provide your developers with the latest features or fix issues due to operational constraints, you end up in a “stuckstack” situation, and your crucial constituency will look elsewhere for innovation.

It follows that if you are building your cloud, there are only two measures of success:

  1. Can you exercise full control and lifecycle-manage your cloud?
  2. What is your total cost of ownership per virtual machine: how much does it cost to run a VM in your environment?

The most significant factor in high TCO per VM is consulting costs.

As long as your OpenStack is intended to provide stable IaaS services, there is no need to spend hundreds of thousands of dollars on experts who tune your cloud. It is more important to provide a stable IaaS and Kubernetes offering to your developers as quickly as possible.

By using best “bang for the buck” hardware you can (and need to) get started immediately, for example using a managed service offering such as Bootstack. If your intended target size is under 200 nodes, it is very likely that your TCO will be lowest with a continuously managed service on a reference architecture. Even if you plan to scale to thousands of nodes, it will likely take you a minimum of two years to get there. Do not wait at a 25-node scale until you have a cloud designed for thousands of nodes, only to find out you just wasted six months planning something that has become so expensive on the books already that it will be killed long before it can reach its full potential.

If you build a more substantial cloud, say up to roughly 4,000 nodes, you should consider following a reference architecture, but invest in your own team to operate the cloud, as the costs for a managed service may become prohibitive at that scale.

Over 4,000 nodes, we recommend leveraging our Ubuntu OpenStack packages and investing in the capabilities you need.

Dondy Bappedyanto, CEO of BizNet GIO co-presented with Mark and explained the benefits of getting started quickly with a managed service: it allows BizNet GIO, an Indonesian public cloud provider, to focus on selling services on top of OpenStack, instead of OpenStack itself. This lends itself well to the local developer market in Indonesia, which is starving for flexible, reliable, cost-attractive and open alternatives to the existing choices in the market.

New use cases: Financial Sector and Edge Compute

City Network provides public and private cloud services for financial institutions and appreciates the pragmatic and quick onramp to OpenStack and Kubernetes, as Florian Haas, VP Professional Services & Education at City Network, explained. The success of City Network hinges on being able to focus on the regulatory compliance requirements of its customers immediately. Messing around with, say, Neutron settings, is distracting and not conducive to providing this service.

Finally, Kandan Kathirvel, Director of Cloud Strategy & Architecture at AT&T, joined Mark in exploring OpenStack at the Edge, which will require a new reference architecture that is much simplified compared to the existing control plane and setup. OpenStack is needed for the foreseeable future at the Edge because VNF vendors still dictate virtual machines since many of these network functions are not available in a containerised version today. To achieve simplification of the stack at the Edge, the IaaS needs to be very workload specific, OpenStack services need to be containerised, and one toolset needs to be found to manage both the edge and the data centre instantiations of OpenStack. AT&T chose the community project OpenStack Helm to provide this functionality and is actively promoting the project as well as asking for community members to step up and contribute. Several other service providers and telcos have already committed to join AT&T in this effort.

Looking forward

OpenStack has consolidated and matured, both as a project and as a community. The code base of its core is stable, reliable and performant, and tackles production workloads for increasingly demanding use cases every day. New use cases such as edge computing will challenge OpenStack to provide answers for a use case that has, until now, not been included in the reference design of the project. The hype may be over, but that only means we can finally start focussing on what is essential: providing the best possible developer experience at the lowest reasonable cost – with OpenStack, MAAS and Kubernetes.

17 November, 2017 02:36PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, October 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 197 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 21h (out of 16h allocated + 8.75h remaining, thus keeping 3.75h for November).
  • Ben Hutchings did 20 hours (out of 15h allocated + 9 extra hours, thus keeping 4 extra hours for November).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 7 hours (out of 20.75 hours allocated + 1.5 hours remaining, thus keeping 15.25 hours for November).
  • Guido Günther did 6.5 hours (out of 11h allocated + 1 extra hour, thus keeping 5.5h for November).
  • Hugo Lefeuvre did 20h.
  • Lucas Kanashiro did 2 hours (out of 5h allocated, thus keeping 3 hours for November).
  • Markus Koschany did 19 hours (out of 20.75h allocated, thus keeping 1.75 extra hours for November).
  • Ola Lundqvist did 7.5h (out of 7h allocated + 0.5 extra hours).
  • Raphaël Hertzog did 13.5 hours (out of 12h allocated + 1.5 extra hours).
  • Roberto C. Sanchez did 11 hours (out of 20.75 hours allocated + 14.75 hours remaining, thus keeping 24.50 extra hours for November, he will give back remaining hours at the end of the month).
  • Thorsten Alteholz did 20.75 hours.

Evolution of the situation

The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time positions.

The security tracker currently lists 50 packages with a known CVE and the dla-needed.txt file lists 36 (we’re a bit behind in CVE triaging apparently).

Thanks to our sponsors

New sponsors are in bold.


17 November, 2017 02:31PM

November 16, 2017

hackergotchi for Univention Corporate Server

Univention Corporate Server

Best use of LDAP in UCS: Schema Extensions for Adding Attributes & New Object Types

The LDAP server in UCS, like the Active Directory on a Windows server, stores all the information about your domain and all your resources, from hardware to employees, as objects, in a structured and well-defined manner. Every object has some defined attributes of a particular type. Common attributes of a user object are, for example, the user’s surname, password and further valuable information about them. Part of the LDAP is the LDAP schema, which provides the administrator with a clear overview of all objects by describing which object types exist within the LDAP and which attributes they have.

So, if you want to include additional attributes or create entirely new object types, extending the schema might be the way to go.

When a schema extension is needed

Univention Corporate Server contains many attributes in its default schemas. The Univention Directory Manager (UDM) uses many of them and makes them available. However, there are some which are not used by default, because they only apply to a limited list of use cases. Thus, checking the schema directories on the master might reveal the attributes you need.

These directories are:


Furthermore, UCS contains multiple attributes that can be used by the end user. These free attributes, which are strings, are named “univentionFreeAttribute1” to “univentionFreeAttribute100”.

Thus, for small extensions or when the default schema already contains the matching attributes, you can just make use of the ones already present.

Schema extension using the UDM

The Univention Management Console offers the possibility to upload the new schema to all servers and it will execute the needed steps to activate it within the domain. To include a new schema, open the management console, navigate to “Domain” and select “LDAP Navigator”.

Screenshot of the Univention Management Console

Then traverse the LDAP Tree up to the folder “univention” and “ldapschema”.

Screenshot of the LDAP tree in UCS

Click on the folder and then on the add button. Select “Settings: LDAP Schema Extension” as the object to create. Give your schema a name here and enter its filename. In the data field, copy the schema, compressed with bzip2 and encoded in base64. Save your entry and the master will start to process the schema.

Manually including a schema

If you decide that you need a schema extension or have a schema extension from a third party software, you can easily include it in UCS 4.2 or newer.

On the UCS Master, copy your schema file into the directory /var/lib/univention-ldap/local-schema/.
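Such a schema file uses the standard OpenLDAP schema syntax. A minimal sketch might look like the following; note that the OIDs and names below are made up for illustration, and a real extension needs OIDs from an arc assigned to your organization:

```
attributetype ( 1.3.6.1.4.1.99999.1.1
    NAME 'exampleShoeSize'
    DESC 'Example attribute provided by a local schema extension'
    EQUALITY integerMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.27
    SINGLE-VALUE )

objectclass ( 1.3.6.1.4.1.99999.2.1
    NAME 'examplePerson'
    DESC 'Auxiliary class carrying the example attribute'
    SUP top AUXILIARY
    MAY ( exampleShoeSize ) )
```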

Afterwards, recreate the SLAPD configuration using the Univention Configuration Registry and then restart the LDAP Server.

/usr/sbin/univention-config-registry commit /etc/ldap/slapd.conf
/etc/init.d/slapd crestart

It is recommended to put the schema extension into the /var/lib/univention-ldap/local-schema/ directory on any UCS backup. If you are forced to do a backup2master and the schema is not present, the LDAP server will not start.

Schema replication by the slaves

In the previous part, we talked about adding the schema to the master and backup. So you might be wondering about the UCS slaves.

The UCS slaves and backups replicate the currently working schema from the UCS master. Thus, the schema will be active once the LDAP server on the master is restarted.

UCS replicates changes in the order of their occurrence, and the schema needs to be present on the master before you can add an object that uses it. This ensures that all LDAP servers will function properly. This replication also makes it more important to place the schema files on all UCS backups, because if you are forced to promote a backup, no LDAP server will be working, as they will all replicate the incomplete schema from the new master.

However, it makes installing the schema on the UCS backups less time sensitive.

Packaging schema files and the UCS App Center

UCS contains many software products from third party vendors. Many of these add configuration options to the UMC and include a schema extension to model these options in the LDAP.

Therefore, we have well-defined instructions in the Developer References that describe how to package and install schema extensions and they also describe, in case you use the App Center, how to configure your apps so that the schema will be installed correctly on the master and every backup within the domain.

Extended attributes made within the UMC

Extending the LDAP schema is not necessarily useful by itself. Only if the attributes are filled with meaningful content is the schema put to good use. In UCS, the primary management interface is the UMC. Our management console comes with an inbuilt function to extend itself: these are called extended attributes and extended options.

There are multiple combinations possible to extend the UMC, and our documentation provides an overview of the possible combinations.


Schema extensions can customize your LDAP to match your needs. However, each extension enlarges the LDAP, and the content needs to be managed by an administrator. Therefore, the first question should always be: do I need it, or is there an existing attribute that already fulfils my requirements?

But if you do need to extend the schema, UCS’s features and its domain concept make installing it incredibly easy.

We hope you enjoyed this article and find it useful. For further questions please comment below or visit our forum to get help.

Thank you!

The post Best use of LDAP in UCS: Schema Extensions for Adding Attributes & New Object Types first appeared on Univention.

16 November, 2017 03:58PM by Kevin Dominik Korte

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S10E37 – Psychotic Fearless Breakfast - Ubuntu Podcast

This week we’ve been upgrading from OpenWRT to LEDE and getting wiser, or older. GoPro open sources the Cineform codec, Arch Linux drops i686 support, Intel and AMD collaborate on a new Intel product family, 13 AD&D games have been released by GOG, and IBM releases a new typeface called Plex.

It’s Season Ten Episode Thirty-Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

16 November, 2017 03:00PM

Ubuntu Insights: Security Team Weekly Summary: November 16, 2017


The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at:

During the last week, the Ubuntu Security team:

  • Triaged 149 public security vulnerability reports, retaining the 50 that applied to Ubuntu.
  • Published 5 Ubuntu Security Notices which fixed 21 security issues (CVEs) across 5 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests


  • (snapd) submitted fix for /dev/pts slave EPERM fix – PR 4159 and 4160 (2.29)
  • (snapd) submitted fix for modprobe failure causing all security backends to fail – PR 4162
  • (snapd) submitted fix for raw-usb udev_enumerate issue – PR 4164 and 4165 (2.29)
  • (snapd) created policy-updates-xxxii PR for master (PR 4180) and 2.29 (PR 4181), coordinated with the snapd team. Among other things, this has a workaround rule for the above electron denial
  • (snapd) submitted ‘add test-policy-app spread test’ – PR 4157
  • updated eCryptfs -next branch for linux-next testing and got it ready to create a 4.15 pull request
  • snapd reviews
    • ‘fix udev tagging for hooks’ – PR 4144
    • ‘drop group filter from seccomp rules’ PR 4185
    • ‘support bash as base runtime’ PR 4197
  • landed documentation for the new (Linux 4.14) seccomp dynamic logging support in the upstream Linux man-pages project: 1234

What the Security Team is Reading This Week

Weekly Meeting

More Info

16 November, 2017 02:30PM

ARMBIAN


Orange Pi Zero+

Debian server – mainline kernel
Command line interface – server usage scenarios.


other download options and archive

Known issues

All currently available OS images for H5 boards are experimental

  • don’t use them for anything productive; use them just to give constructive feedback to the developers
  • shutdown might result in a reboot instead, or the board doesn’t really power off (cut power physically)

Quick start | Documentation


Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).


Insert the SD card into the slot and power the board. The first boot (with DHCP) takes up to 35 seconds with a class 10 SD card and the cheapest board.


Log in as root on HDMI / serial console or via SSH with the password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal, sudo-enabled user account (beware of default QWERTY keyboard settings at this stage).

Tested hardware

16 November, 2017 11:01AM by igorpecovnik


Ubuntu developers

Salih Emin: ucaresystem core 4.3.0 : Launch it from your applications menu

The 4.3.0 release introduces a menu icon and a launcher for ucaresystem-core. Once installed or updated, you will find a uCareSystem Core entry in your applications menu that you can click to launch ucaresystem-core. Now think for a moment of a friend of yours, or your parents, who are not comfortable with the terminal.

16 November, 2017 10:06AM

Mohamad Faizul Zulkifli: Ubuntu's Guitar Pick

Rare and collectible item. Suitable for guitar hobbyists.
posted from Bloggeroid

16 November, 2017 05:04AM by 9M2PJU

VyOS


1.1.8 followup

LLDP bug

James Brown reported on Phabricator that LLDP is not working in 1.1.8. Quite a mess-up on our side: the reason it's not working is that it was built against the old OpenSSL 0.9.8, which is no longer in VyOS. Since debian/control was missing a dependency on libssl, the package wasn't detected as depending on OpenSSL and thus wasn't rebuilt.
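
For the curious, the root cause lives in the package's debian/control: the rebuild tooling decides what to rebuild from the dependencies declared there. A hedged sketch of the kind of entry that was missing (package and library names are illustrative, not the exact VyOS packaging):

```
# debian/control fragment -- without an explicit libssl entry,
# the package was never flagged as depending on OpenSSL
Package: lldpd
Depends: ${shlibs:Depends}, ${misc:Depends}, libssl1.0.0
```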

If you want LLDP back right now, you can install the helium3 package from our repository.

OSPF route-map

One feature is missing from the changelog because it was committed "stealthily", without a task number, and thus I missed it. It's the command for setting up a route-map for installing routes imported from OSPF.

set protocols ospf route-map MyMap

This route-map can prevent routes from getting installed into the kernel routing table, but make no mistake, it is not incoming route filtering (which would be very bad for OSPF). The routes will stay in OSPFd and in the RIB (they will be displayed as inactive).
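
As an illustration, here is a hedged sketch of such a route-map that keeps one example prefix out of the kernel routing table (the names and prefix below are invented for the example):

```
set policy prefix-list NO-INSTALL rule 10 action permit
set policy prefix-list NO-INSTALL rule 10 prefix 203.0.113.0/24
set policy route-map MyMap rule 10 action deny
set policy route-map MyMap rule 10 match ip address prefix-list NO-INSTALL
set policy route-map MyMap rule 20 action permit
set protocols ospf route-map MyMap
```

Routes matched by the deny rule stay in OSPFd and the RIB, but never reach the kernel table.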

The future of 1.1.x

So far it looks like we are going to make a 1.1.9 release to fix the LLDP bug. Perhaps we should also cherry-pick something safe from 1.2.0 too; if you have any specific bugfix or tiny feature in mind, let us know.

VyOS 1.1.8 on AWS

The official 1.1.8 AMI passed the automated tests and it's on its way to the marketplace; the manual review by the marketplace team will take perhaps a week or so.

If you want to make your own, you can already use the AMI build scripts and point them to the (signed) release image URL.

And while we are at it: IPv6 in VyOS 1.2.0

I've re-enabled the old patch for IPv6 VRRP in the current branch and it will be in today's nightly build. In 1.1.x, we had to disable it because in the older keepalived version IPv4 and IPv6 VRRP were mutually exclusive; in the current branch, it seems to work. If you feel adventurous, please test the nightly build (on lab VMs!) and tell us if it works for you.

Also, on the forum, it was reported that the current branch image doesn't build. I've resolved that problem today so if you want to build an image, it should work.

16 November, 2017 12:55AM by Daniil Baturin


Ubuntu developers

Colin Watson: Kitten Block equivalent for Firefox 57

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add and, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

16 November, 2017 12:00AM

November 15, 2017

Cumulus Linux

How to gauge your network’s openness

So, you’ve done your research, learned about the many benefits of open networking, and decided you’re interested in building an open network. Congratulations, and welcome to the future of networking! You’ve made a great first step, but maybe you’re concerned about where to begin when it comes to vendors. A lot of network providers will claim that they have open solutions…but how can you be sure you’re choosing the best one? Or how can you determine if your vendor is truly an open solution? Fortunately, there are ways to gauge if your solution is as open as you need it to be. If you don’t want to get duped by phony open vendors, make sure to keep these three things in mind:

The definition of “open networking” is not set in stone

While there are common criteria and ideologies that tend to be associated with open networks, the definition of open networking is still very fluid and can mean different things to different vendors. So, when you’re trying to decide which vendor to go with, don’t let them off easy with simple answers. Ask specific questions about what exactly “open” means to them. Simplicity, flexibility, and modularity are all important determining factors in deciding how open a network truly is. Gartner suggests that you “Request answers that relate specifically to the proposed solution, because vendors might generically claim an open-networking strategy, but implement it only in a few hardware and software platforms that are not their mainstream offerings and represent only a small portion of their portfolios.” Openness is a spectrum, rather than a static condition, so make sure you’re familiar with which open principles matter to you and how you want your vendor to match them.

It’s better to adopt modular approaches

The definition of open networking may still be in flux, but it’s been proven that end-to-end architectures don’t win in the long term. Gartner states that “This is because technology improvements reduce the gap between the advanced, end-to-end, proprietary solutions and the rest of the market. Competitors eventually gain ground and force the incumbent vendors to find new ‘can’t live without’ capabilities that justify a new architecture.” If you’re looking for longevity, it’s definitely wiser to opt for designs based on simple building blocks that leverage disaggregation, fit-for-purpose software and open source and avoid end-to-end proprietary architectures.

Just because a vendor claims “openness” doesn’t make it so

Again, the true definition of open networking is still up in the air. Some vendors like to take advantage of this ambiguity and slap the term “open” on solutions that really don’t provide much flexibility at all. According to Gartner, “Vendors can (and do) claim their solutions are open, because they support basic standards (e.g., Ethernet/IP) or have an API.” However, when you take a closer look at these “open” solutions, the cracks begin to show. The reality, as Gartner points out, is that “Nearly all network vendors market ‘fully open,’ solutions; however, in practice, this means dramatically different things, making it difficult to discern which solutions are truly open,” and “several network vendors have broad portfolios, which include both open and proprietary solutions, and even ‘hardened’ versions of open-source solutions. Thus, they can easily market open strategies, while leading with proprietary and closed solutions in most accounts.” With that in mind, it’s important to make sure you ask vendors the right questions about openness so you don’t get tricked into prescriptive, end-to-end blueprints.

Now you’ve got a factual base knowledge about open networking, but how can you ensure that your network is really as open as you need it to be? This report from Gartner outlines the five must-ask questions you should be posing to network vendors to gauge the openness of their solutions. From simplicity to flexibility, these questions cover all of the basic requirements for determining if a vendor’s solution is right for you. Check it out!

The post How to gauge your network’s openness appeared first on Cumulus Networks Blog.

15 November, 2017 10:33PM by Madison Emery


Purism PureOS

Trusted Platform Module now available as an add-on for Librem laptops

Over the past few months, we have been busy with a plethora of great projects being set afoot. We have been incrementally building a laptop inventory to ship from, we have been continuing the coreboot enablement work on our laptops, neutralizing—and then disabling—the Intel Management Engine, and launching our much awaited Librem phone campaign, which ended in a very motivating success—involving many great organizations that are part of the Free Software community, such as Matrix, KDE e.v., the GNOME Foundation, Nextcloud, and Monero.

It really has been a whirlwind of events, and this has been happening in parallel to us continuing our existing R&D and operations work, such as preparing a new batch of laptops—namely the much anticipated Librem 13 with i7 processor.

One particular security R&D project dear to our hearts has been the beginning of our collaboration with “Heads” developer Trammell Hudson, a project that has been quietly going on behind the scenes for the past few months. We are very pleased to announce today that we are making a positive step to make this effort within reach of early adopters, with the availability of a Trusted Platform Module (TPM) as an optional component for currently pending and near-future laptop orders.

What is a TPM? Is it for me?

A Trusted Platform Module is a specialized computer chip dedicated to enabling hardware-based (or, I would say, hardware-augmented) security, allowing you to secure your operating system and boot process at the hardware level, with your own cryptographic keys. It facilitates password protection (by storing keys in the dedicated hardware module and preventing “dictionary” attacks) and provides platform integrity verification (allows you to know whether your computer is behaving as intended or not, from a “deep security” standpoint).

The functionality provided by a TPM is useful if you care deeply about the security of your system, to the point where you want absolute certainty that your boot has not been compromised—by viruses, criminal activity, or some other hostile force trying to take over your system—allowing you to enforce a “trusted boot chain” through our coreboot firmware, signed and verified with your own encryption keys, using a special coreboot payload such as Heads, for example. You can read more about the implications in this article by Tom’s Hardware.

At the moment, we simply provide the hardware. We do not yet provide a turn-key “hardware+software” solution for this, so consider this an add-on for early adopters and security professionals, not a product for Joe Plumber.

This is amazing! How do I get one with my order?

If you already have a pending Librem 13 or Librem 15 order, please email to request this feature to be added to your order, which will ship out in the coming weeks. A $99 fee will apply (to cover parts and labor costs, as we are hand-soldering the TPMs on a case-by-case basis).

If you haven’t made an order yet, you can simply select it in the options available in the Librem 13 shop page and Librem 15 shop page.

What are your future plans?

As you can imagine, we are testing the market first by providing this as an optional component that we solder onto the motherboard on a case-by-case basis during final assembly. If there is enough demand, we plan to incorporate this as a standard feature into all our future motherboard designs.

15 November, 2017 10:21PM by Jeff


Ubuntu developers

Kubuntu General News: Kubuntu Most Wanted

Kubuntu Cafe Live is our new community show, styled using a magazine format. We have created lots of space for community involvement, breaking the show into multiple segments, and we want to get you involved. We are looking for Presenters, Trainers, Writers and Hosts.

  • Are you looking for an opportunity to present your idea or application?
  • Would you like to teach our community about an aspect of Kubuntu, or KDE?
  • Would you like to be a show, article or news writer?
  • Interested in being a host on Kubuntu Cafe Live?

Contact Rick Timmis or Valorie Zimmerman to get started.

The Kubuntu Cafe features a very broad variety of show segments. These include free format unconference segments which can accommodate your ideas. Dojo for teaching and training, Community Feedback, Developers Update, News & Views.

For the upcoming show schedule, please check the Kubuntu calendar.

Check out the show to see the new format.


15 November, 2017 10:12PM

Daniel Pocock: Linking hackerspaces with OpenDHT and Ring

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution is using WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

15 November, 2017 07:57PM


Kali Linux

Configuring and Tuning OpenVAS in Kali Linux

Users often request the addition of vulnerability scanners to Kali, most notably the ones that begin with “N”, but due to licensing constraints, we do not include them in the distribution. Fortunately, Kali includes the very capable OpenVAS, which is free and open source. Although we briefly covered OpenVAS in the past, we decided to devote a more thorough post to its setup and how to use it more effectively.

Vulnerability scanners often have a poor reputation, primarily because their role and purpose is misunderstood. Vulnerability scanners scan for vulnerabilities – they are not magical exploit machines and should be one of many sources of information used in an assessment. Blindly running a vulnerability scanner against a target will almost certainly end in disappointment and woe, with dozens (or even hundreds) of low-level or uninformative results.

System Requirements

The main complaint we receive about OpenVAS (or any other vulnerability scanner) can be summarized as “it’s too slow and crashes and doesn’t work and it’s bad, and you should feel bad”. In nearly every case, slowness and/or crashes are due to insufficient system resources. OpenVAS has tens of thousands of signatures and if you do not give your system enough resources, particularly RAM, you will find yourself in a world of misery. Some commercial vulnerability scanners require a minimum of 8GB of RAM and recommend even more.

OpenVAS does not require anywhere near that amount of memory but the more you can provide it, the smoother your scanning system will run. For this post, our Kali virtual machine has 3 CPUs and 3GB of RAM, which is generally sufficient to scan small numbers of hosts at once.

Initial OpenVAS Setup in Kali

OpenVAS has many moving parts and setting it up manually can sometimes be a challenge. Fortunately, Kali contains an easy-to-use utility called ‘openvas-setup’ that takes care of setting up OpenVAS, downloading the signatures, and creating a password for the admin user.

This initial setup can take quite a long while, even with a fast Internet connection, so just sit back and let it do its thing. At the end of the setup, the automatically-generated password for the admin user will be displayed. Be sure to save this password somewhere safe.

root@kali:~# openvas-setup
ERROR: Directory for keys (/var/lib/openvas/private/CA) not found!
ERROR: Directory for certificates (/var/lib/openvas/CA) not found!
ERROR: CA key not found in /var/lib/openvas/private/CA/cakey.pem
ERROR: CA certificate not found in /var/lib/openvas/CA/cacert.pem
ERROR: CA certificate failed verification, see /tmp/tmp.7G2IQWtqwj/openvas-manage-certs.log for details. Aborting.

ERROR: Your OpenVAS certificate infrastructure did NOT pass validation.
See messages above for details.
Generated private key in /tmp/tmp.PerU5lG2tl/cakey.pem.
Generated self signed certificate in /tmp/tmp.PerU5lG2tl/cacert.pem.

User created with password 'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx'.

Dealing with Setup Errors

Occasionally, the ‘openvas-setup’ script will display errors at the end of the NVT download similar to the following.

(openvassd:2272): lib kb_redis-CRITICAL **: get_redis_ctx: redis connection error: No such file or directory

(openvassd:2272): lib kb_redis-CRITICAL **: redis_new: cannot access redis at '/var/run/redis/redis.sock'

(openvassd:2272): lib kb_redis-CRITICAL **: get_redis_ctx: redis connection error: No such file or directory
openvassd: no process found

If you are unfortunate enough to encounter this issue, you can run ‘openvas-check-setup’ to see what component is causing issues. In this particular instance, we receive the following from the script.

ERROR: The number of NVTs in the OpenVAS Manager database is too low.
FIX: Make sure OpenVAS Scanner is running with an up-to-date NVT collection and run 'openvasmd --rebuild'.

The ‘openvas-check-setup’ script detects the issue and even provides the command to run to (hopefully) resolve it. After rebuilding the NVT collection as recommended, all checks pass.

root@kali:~# openvasmd --rebuild
root@kali:~# openvas-check-setup
openvas-check-setup 2.3.7
Test completeness and readiness of OpenVAS-9
It seems like your OpenVAS-9 installation is OK.

Managing OpenVAS Users

If you need (or want) to create additional OpenVAS users, run ‘openvasmd’ with the --create-user option, which will add a new user and display the randomly-generated password.

root@kali:~# openvasmd --create-user=dookie
User created with password 'yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyy'.
root@kali:~# openvasmd --get-users

If you’re anything like us, you will forget to save the admin password or accidentally delete it. Fortunately, changing OpenVAS user passwords is easily accomplished with ‘openvasmd’ and the --new-password option.

root@kali:~# openvasmd --user=dookie --new-password=s3cr3t
root@kali:~# openvasmd --user=admin --new-password=sup3rs3cr3t

Starting and Stopping OpenVAS

Network services are disabled by default in Kali Linux, so if you haven’t configured OpenVAS to start at boot, you can start the required services by running ‘openvas-start’.

root@kali:~# openvas-start
Starting OpenVas Services

When the services finish initializing, you should find TCP ports 9390 and 9392 listening on your loopback interface.

root@kali:~# ss -ant
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 127.0.0.1:9390 *:*
LISTEN 0 128 127.0.0.1:9392 *:*

Due to the strain on system resources, you will likely want to stop OpenVAS whenever you are done using it, especially if you are not using a dedicated system for vulnerability scanning. OpenVAS can be stopped by running ‘openvas-stop’.

root@kali:~# openvas-stop
Stopping OpenVas Services

Using the Greenbone Security Assistant

The Greenbone Security Assistant is the OpenVAS web interface, available on your local machine (after starting OpenVAS) at https://localhost:9392. After accepting the self-signed certificate, you will be presented with the login page and once authenticated, you will see the main dashboard.

Configuring Credentials

Vulnerability scanners provide the most complete results when you are able to provide the scanning engine with credentials to use on scanned systems. OpenVAS will use these credentials to log in to the scanned system and perform detailed enumeration of installed software, patches, etc. You can add credentials via the “Credentials” entry under the “Configuration” menu.

Target Configuration

OpenVAS, like most vulnerability scanners, can scan remote systems, but it’s a vulnerability scanner, not a port scanner. Rather than relying on a vulnerability scanner to identify hosts, you will make your life much easier by using a dedicated network scanner like Nmap or Masscan and importing the list of targets into OpenVAS.

root@kali:~# nmap -sn -oA nmap-subnet-86
root@kali:~# grep Up nmap-subnet-86.gnmap | cut -d " " -f 2 > live-hosts.txt

Once you have your list of hosts, you can import them under the “Targets” section of the “Configuration” menu.

Scan Configuration

Prior to launching a vulnerability scan, you should fine-tune the Scan Config that will be used, which can be done under the “Scan Configs” section of the “Configuration” menu. You can clone any of the default Scan Configs and edit its options, disabling any services or checks that you don’t require. If you use Nmap to conduct some prior analysis of your target(s), you can save hours of vulnerability scanning time.

Task Configuration

Your credentials, targets, and scan configurations are set up, so now you’re ready to put everything together and run a vulnerability scan. In OpenVAS, vulnerability scans are conducted as “Tasks”. When you set up a new task, you can further optimize the scan by either increasing or decreasing the concurrent activities that take place. With our system with 3GB of RAM, we adjusted our task settings as shown below.

With our more finely-tuned scan settings and target selection, the results of our scan are much more useful.

Automating OpenVAS

One of the lesser-known features of OpenVAS is its command-line interface, which you interact with via the ‘omp’ command. Its usage isn’t entirely intuitive but we aren’t the only fans of OpenVAS and we came across a couple of basic scripts that you can use and extend to automate your OpenVAS scans.

The first is by mgeeky, a semi-interactive Bash script that prompts you for a scan type and takes care of the rest. The scan configs are hard-coded in the script so if you want to use your customized configs, they can be added under the “targets” section.

root@kali:~# apt -y install pcregrep
root@kali:~# ./

:: OpenVAS automation script.
mgeeky, 0.1

[>] Please select scan type:
1. Discovery
2. Full and fast
3. Full and fast ultimate
4. Full and very deep
5. Full and very deep ultimate
6. Host Discovery
7. System Discovery
9. Exit

Please select an option: 5

[+] Tasked: 'Full and very deep ultimate' scan against ''
[>] Reusing target...
[+] Target's id: 6ccbb036-4afa-46d8-b0c0-acbd262532e5
[>] Creating a task...
[+] Task created successfully, id: '
[>] Starting the task...
[+] Task started. Report id: 6bf0ec08-9c60-4eb5-a0ad-33577a646c9b
[.] Awaiting for it to finish. This will take a long while...

8e77181c-07ac-4d2c-ad30-9ae7a281d0f8 Running 1%

We also came across a blog post by code16 that introduces and explains their Python script for interacting with OpenVAS. Like the Bash script above, you will need to make some slight edits to the script if you want to customize the scan type.

root@kali:~# ./
small wrapper for OpenVAS 6

[+] Found target ID: 19f3bf20-441c-49b9-823d-11ef3b3d18c2
[+] Preparing options for the scan...
[+] Task ID = 28c527f8-b01c-4217-b878-0b536c6e6416
[+] Running scan for
[+] Scan started... To get current status, see below:


[+] Scan looks to be done. Good.
[+] Target scanned. Finished taskID : 28c527f8-b01c-4217-b878-0b536c6e6416
[+] Cool! We can generate some reports now ... :)
[+] Looking for report ID...
[+] Found report ID : 5ddcb4ed-4f96-4cee-b7f3-b7dad6e16cc6
[+] For taskID : 28c527f8-b01c-4217-b878-0b536c6e6416

[+] Preparing report in PDF for

[+] Report should be done in : Report_for_192.168.86.27.pdf
[+] Thanks. Cheers!

With the wide range of options available in OpenVAS, we were only able to scratch the surface in this post, but if you take your time and effectively tune your vulnerability scans, you will find that the bad reputation of OpenVAS and other vulnerability scanners is undeserved. The number of connected devices in our homes and workplaces is increasing all the time, and managing them becomes more of a challenge. Making effective use of a vulnerability scanner can make that management at least a little bit easier.

15 November, 2017 05:49PM by dookie


Ubuntu developers

Ubuntu Insights: Orchestrating architectural installations and live shows with snaps

Dutch manufacturer Visual Productions BV provides multi-platform software and solid-state hardware lighting control technology, mainly for the architectural, retail, venue and entertainment lighting industries. Originating from an engineering background, Visual Productions combines creative thinking with the talent of listening to market demands in order to develop innovative products.

Visual Productions mainly work with commercial businesses supplying lighting control to a variety of industries including architectural installations, retail, themed venues, cafes, restaurants, right through to live concerts and DJ/LJ events. The current portfolio consists of various high-tech, in-house developed, control solutions for intelligent, LED and conventional lighting equipment. The software applications and hardware devices are designed with a strong emphasis on usability; resulting in feature-rich and user-friendly lighting control products that are amongst the leading choices for lighting control in these sectors around the world.

The development of the software apps shows the innovative approach of Visual Productions, keeping up to date and often ahead of the market in providing technical solutions. The users of the apps range from technicians to artists and to non-technical or public users. The applications must, therefore, be designed to be 100% user-friendly to any of these users. This design process begins with facilitating multi-platform apps with the choice of which OS they may wish to use including Ubuntu.

We spoke to Michael Chiou, a software engineer, at Visual Productions to discover how and why they have used snaps.

Install from Ubuntu Store

How did you find out about snaps?

Over the last year, we have shifted our focus on distributing our software through stores such as Google Play rather than expecting users to come to our website to download the latest version of our software and navigate complicated install wizards. Generally, if an update can be delivered to a store, it is easier than asking people to go to our website. We researched the best way to distribute on Ubuntu and across multiple Linux distros which is where we discovered snaps.

What was the appeal of snaps that made you decide to invest in them?

We like the confinement channels that snaps offer. For example, we can use development mode to release a beta to specific customers but that isn’t visible elsewhere. Security is a big plus, particularly code signing – it is more secure distributing through the store as people can’t maliciously change anything. We would like to see more security in the uApp Explorer store over who can publish and what is published though. Other advantages are the integration with CMake and the fact it seems to offer a good, future-proofed solution.

How easy was it to integrate with your existing infrastructure and process?

Officially we have 4 snaps released currently. We used the Snapcraft tool, which was convenient: we could just add the information in and the rest was done for us, saving us the need to maintain packaging ourselves. When you adopt a new format, there is always a bit of a learning curve, but this helped make the process easier.
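
For readers who haven’t used Snapcraft: the build is described declaratively in a snapcraft.yaml. Here is a hedged sketch of what a CMake-based app like theirs might declare (names, interfaces and confinement are illustrative, not Visual Productions’ actual packaging):

```yaml
name: my-lighting-app
version: '1.0'
summary: Lighting control application
description: Desktop app for controlling lighting hardware over the network.
confinement: strict

apps:
  my-lighting-app:
    command: my-lighting-app
    plugs: [network, x11]

parts:
  my-lighting-app:
    plugin: cmake
    source: .
```

Running `snapcraft` in the project directory then builds the .snap, ready for upload to the store.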

Do you currently use the snap store as a way of distributing your software?

We really like the store and use it to distribute our snaps. One feature we particularly like is that when you publish a new update, you come to the top of all new releases.

What release channels (edge/beta/candidate/stable) in the store are you using or plan to use, if any?

We mostly use the development channel – only to those who we want to check the snap status. We have released stable versions for most of our apps. We did try candidate but found it wasn’t too different to development so we mostly use that and then release.

How would you improve the snap system?

It’s really easy to use the terminal, but for a more casual user, the store inside the OS would be an easier option. We would also like to see a rise in the standards of what you need to get a snap published so as to increase the quality rather than quantity. For first time users, the Snapcraft forum is an inspiration and is definitely useful when getting started. Overall, we believe stores are the future so this is a good solution for us to work with in the long term.

15 November, 2017 04:13PM

Ubuntu Insights: How to turn your website into a desktop app

Turning your website into a desktop-integrated app is a relatively simple thing to do, but distributing it as such and making it noticeable in app stores is another story.

This tutorial will show you how to leverage Electron and snaps to create a desktop web app from scratch and release it on a multi-million user store shared between many Linux distributions.

In this tutorial, you’ll learn:

  • How to create a desktop web app using Electron
  • How to create a cross-distro Linux package
  • How to test and share it with the world
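
To give a flavour of the first step, here is a hedged sketch that scaffolds a minimal Electron wrapper from the shell (the app name and URL are placeholders, and this is not the tutorial’s exact code):

```shell
# Scaffold a minimal Electron app that wraps a website (placeholder names).
mkdir -p my-webapp && cd my-webapp

cat > package.json <<'EOF'
{
  "name": "my-webapp",
  "version": "0.1.0",
  "main": "main.js",
  "scripts": { "start": "electron ." }
}
EOF

cat > main.js <<'EOF'
// Electron main process: open the site in its own desktop window.
const { app, BrowserWindow } = require('electron');
app.on('ready', () => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadURL('https://example.com');
});
EOF

echo "scaffold ready"
```

From there, `npm install --save-dev electron && npm start` runs the app locally, and snapcraft turns the directory into a cross-distro package.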

Read the tutorial

15 November, 2017 02:44PM

Ubuntu Insights: Codetree Collect Info

We recently landed a feature in Codetree that I’m pretty excited about. Codetree is a tool for collecting code from various locations and assembling it in a specific directory structure. It can be used in a standalone fashion, but is also tightly integrated with Mojo, which we use to deploy and manage Juju models.

To install Codetree, just run “sudo snap install codetree --classic”.

As an example, let’s say you want to get a subset of the latest OpenStack charms. You could do so by creating a file called openstack-charms with the following contents:

charms                   @
charms/cinder            git+;revno=stable/17.08
charms/glance            git+;revno=stable/17.08
charms/hacluster         git+;revno=stable/17.08
charms/heat              git+;revno=stable/17.08
charms/keystone          git+;revno=stable/17.08

You’d then run “codetree openstack-charms” and you’d have the charms assembled in a “charms” directory. So far so good.

But what happens three months from now when you want to upgrade the charms you’ve deployed to a newer version? Or two months from now if you come across a bug in the charms and are not sure exactly what version you deployed and when? Juju strips out any dotfiles and dot directories from charms it deploys, so you won’t be able to use git commands to query where the charms on disk in your deployed OpenStack came from.

This is where the new feature we’ve just added to Codetree comes in. Codetree will now inject a file called “codetree-collect-info.yaml” into any directory it downloads, and this file will then be queryable later to confirm what version you deployed. This can be done in situ on your deployed OpenStack instance. For example:

juju ssh keystone/0 "head -4 /var/lib/juju/agents/unit*/charm/codetree-collect-info.yaml"
collect_date: '2017-11-01 14:32:55.815503'
collect_url: git+;revno=91490b7daf7511a717f75f62b57fc3f97cc6d740
  LICENSE: cfc7749b96f63bd31c3c42b5c471bf756814053e847c10f3eb003417bc523d30

Now you can easily see the specific revision the charm was collected from, when it was collected, and the hashes allow you to query if any of the files on disk have been changed.
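As a hypothetical illustration (the file names and helpers below are stand-ins, not part of Codetree itself), one could check files on disk against recorded digests like the LICENSE hash shown above:

```python
# Hypothetical sketch: check files on disk against SHA-256 digests like the
# LICENSE hash recorded in codetree-collect-info.yaml. File names here are
# stand-ins; this is not part of Codetree itself.
import hashlib
import os

def sha256_of(path):
    """Hex SHA-256 digest of a file, matching the format in the collect file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(recorded, root="."):
    """recorded maps relative path -> expected hex digest; returns drifted paths."""
    return [p for p, digest in recorded.items()
            if sha256_of(os.path.join(root, p)) != digest]

# Record a file's digest, then confirm nothing has drifted on disk.
with open("LICENSE.demo", "w") as f:
    f.write("demo license text\n")
recorded = {"LICENSE.demo": sha256_of("LICENSE.demo")}
print(changed_files(recorded))  # → [] while the file is untouched
```

Running something like this in situ would flag any charm file that has been modified since collection.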

Our next planned step from here is to add a “charm-report” phase to Mojo to allow us to query this information with one simple command.

15 November, 2017 02:25PM

hackergotchi for Tails


Have your cake and eat it, too!

Reproducible Tails builds

We have received the Mozilla Open Source Support award in order to make Tails ISO images build reproducibly. This project was on our roadmap for 2017 and with the release of Tails 3.3 we are proud to present one of the world's first reproducible ISO images of a Linux operating system.

From source code to binary code

When we write software, we do it using programming languages which a human can read and understand. This is called the source code. One can imagine source code much like a very precise recipe. Such a recipe describes an exact procedure: which ingredients, and what amount of each, do you need? How should they be mixed together? At which temperature should they be cooked or baked? The recipe will even describe the expected outcome: how the meal should look and taste.

When we generate a Tails ISO image, our source code and the Debian packages we include are assembled into a binary ISO image, much like when the ingredients of the recipe are mixed together, one obtains the meal. The amounts and ingredients of this meal cannot be easily reverse engineered. The result of our cooking process is a Tails ISO image which users download and install onto a USB stick.

We, chefs and aides in the kitchen (Tails developers and contributors), provide you, our users, with several means to verify that this ISO image is indeed the one we want you to download, either using our Firefox add-on which does this verification automatically for you or by using our OpenPGP signature. Both of these verification methods simply tell you that the ISO image is the image which we want you to download: That the meal you get is indeed the meal that you've ordered, and not a meal which has been poisoned or exchanged by an evil waiter (such as a download mirror).

However, even with such sophisticated verification methods, it is still impossible to trace back the meal to the recipe: Does the meal contain only the ingredients it is supposed to contain? Or could unauthorized personnel have broken into the kitchen at night, and then poisoned the ingredients and made the oven cook at 50 degrees higher than displayed? In other words, could a malicious entity have compromised our build machines? That's what reproducible builds help verify and protect against.

What's a reproducible build?

Reproducible builds are a set of software development practices that create a verifiable path from human readable source code to the binary code used by computers. (quoted from

In other words, with reproducible builds, each cooking process of the same recipe is exactly repeatable.

At Tails, we have worked for a year to implement such a set of practices. This makes it now possible to compare ISO images built by multiple parties from the same source code and Debian packages, and to ensure that they all result in exactly the same ISO image.

Or again, using our cooking metaphor: Several of us will cook the meal, compare that we all cooked the same meal and only once we're sure about that, we will deliver it to you.

We can all thus gain confidence that no broken oven has introduced malicious code or failures; otherwise, we would notice it before delivering the meal.

What does this mean for you as a user?

This does not change anything in the way you download and install Tails, and you don't have to make additional verifications. It simply helps establish trust that the Tails ISO image we distribute is indeed built from the source code and Debian packages it is meant to be made of. With reproducible Tails, it only takes one knowledgeable person to build Tails and compare it with the ISO image the Tails project distributes to uncover some kinds of backdoors.

And by the way, not only are our ISO images now reproducible, but so are our incremental upgrades. And you are benefiting from this improvement without even noticing :)

Thank you

Besides Mozilla's Open Source Support and the Reproducible Builds community, which provided critical help where we most needed it, we'd also like to thank all members of our community who helped us test this process. Your help is much appreciated!

Technical implementation

If you are interested in the technical details of our implementation, we invite you to read our report to the Reproducible Builds community about how we did it.

We've also published technical instructions to verify one's own build.
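The comparison at the heart of this process is simple to picture. Here is a hypothetical sketch, with stand-in file names and contents rather than real Tails build artifacts:

```python
# Hypothetical illustration of the comparison step: two builds from the same
# source should be byte-identical, hence have identical checksums. File names
# and contents are stand-ins, not real Tails build artifacts.
import hashlib

def sha256_hex(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Simulate two independent parties producing the same bytes.
for name in ("build-a.iso", "build-b.iso"):
    with open(name, "wb") as f:
        f.write(b"iso-contents")

a, b = sha256_hex("build-a.iso"), sha256_hex("build-b.iso")
print("reproducible" if a == b else "MISMATCH")  # → reproducible
```

Any single flipped byte in either image would change its checksum and expose the mismatch.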

Help us make Tails even better

Tails is a self organized free software project. We depend on partnerships, grants and most importantly on donations by individuals like you.

Care to give us a hand to make Tails bake even better cakes in the future?

Known issues

Any reproducible build process is reproducible… until proven otherwise. In our case, last-minute issues were discovered and should be fixed in the next Tails release:

15 November, 2017 10:00AM

hackergotchi for Deepin


Deepin 15.5 Beta——Small and Beautiful Features

deepin is a Linux distribution devoted to providing a beautiful, easy-to-use, safe and reliable system for global users. Compared to previous editions, deepin 15.5 Beta applies a new web application framework and adds Wi-Fi hotspot sharing and color temperature adjustment, in addition to full WUXGA screen compatibility and support for the Flatpak application format. More importantly, the network module and desktop environment have been fully optimized. Let’s have a look at some of these small new features. VPN Export and Import: the proxy function has been optimized, so you can quickly export a configured VPN or import an existing VPN file. Application Proxy Function: When set ...Read more

15 November, 2017 09:59AM by jingle

hackergotchi for Ubuntu developers

Ubuntu developers

Kees Cook: security things in Linux v4.14

Previously: v4.13.

Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:

vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field that is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)

One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.

set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).

Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.

SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.

Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)

Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn’t store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).
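The XOR defense itself is simple to picture. Here is a toy model (Python rather than kernel C; the constants are made up, and the real kernel derives its random value at boot):

```python
# Toy model of SLUB freelist pointer hardening (CONFIG_SLAB_FREELIST_HARDENED):
# the stored pointer is XORed with a per-cache random value and its storage
# address. All constants below are made up for illustration.
CACHE_RANDOM = 0xA5A5_5A5A_3C3C_C3C3   # stand-in for a boot-time random value

def freelist_encode(ptr, location):
    return ptr ^ CACHE_RANDOM ^ location

# XOR is its own inverse, so decoding is the same operation.
freelist_decode = freelist_encode

next_free = 0x0000_DEAD_0000           # next freelist entry
slot = 0x0000_BEEF_1000                # address where the pointer is stored

stored = freelist_encode(next_free, slot)
assert stored != next_free             # the raw pointer never reaches memory
assert freelist_decode(stored, slot) == next_free

# An attacker overwriting 'stored' without knowing CACHE_RANDOM lands on a
# garbage value rather than a chosen target.
overwrite = 0x4141_4141_4141
assert freelist_decode(overwrite, slot) != overwrite
print("round-trips only with the secret")
```

Without both the random value and the storage address, a blind overwrite decodes to garbage instead of an attacker-chosen pointer.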

setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.

randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.

structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)

improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).

eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.

seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.

Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.

That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

15 November, 2017 05:23AM

November 14, 2017

hackergotchi for SparkyLinux


Sparky 5 Desktop Editions screenshots






First Run



Back to -> Screenshots main page

14 November, 2017 09:02PM by pavroo

Linux Mint


Linux Mint 18.3 “Sylvia” MATE – BETA Release

This is the BETA release for Linux Mint 18.3 “Sylvia” MATE Edition.

Linux Mint 18.3 Sylvia MATE Edition

Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.3 MATE”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.3 MATE

System requirements:

  • 1GB RAM (2GB recommended for comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs; please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.
  • It will also be possible to upgrade from Linux Mint 18.2. Upgrade instructions will be published after the stable release of Linux Mint 18.3.

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or which cause we understand are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.


We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

14 November, 2017 06:03PM by Linux Mint

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 14 Nov 2017

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Ubuntu Server Office Hours

As proposed, the Ubuntu Server team will now host weekly office hours in the #ubuntu-server IRC channel. The Canonical Server team will be present during these office hours to discuss and work through any questions or bugs about Ubuntu Server. The office hours will replace the previous structured IRC meeting, but continue to occur at the same time: Tuesdays at 1600 UTC.


  • All integration tests now function with the nocloud-kvm backend
  • Fix apport for cloud-name options (LP: #1722564)
  • Improve warning message when templates aren’t found (Robert Schweikert) (LP: #1730135)
  • Perform null checks for enabled/disabled Red Hat repos (Dave Mulford)
  • Fix openSUSE and SLES setup of /etc/hosts (Robert Schweikert) (LP: #1731022)
  • Catch UrlError when #include’ing URLs (Andrew Jorgensen)


  • Completed SRU of revno 532 (LP: #1721808)
  • Fixed common test infrastructure issues causing frequent CI failures (LP: #1655842)

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Bionic)

apache2, 2.4.29-1ubuntu1, mdeslaur
asterisk, 1:13.18.1~dfsg-1ubuntu1, costamagnagianfranco
curtin, 0.1.0~bzr541-0ubuntu1, smoser
golang-github-gorilla-mux, 1.1-4, None
golang-github-mattn-go-sqlite3, 1.2.0+git20170928.5160b48~ds1-1, None
golang-go.crypto, 1:0.0~git20170629.0.5ef0053-1ubuntu2, mwhudson
golang-golang-x-sync, 0.0~git20170317.0.5a06fca-1ubuntu2, mwhudson
golang-gopkg-inconshreveable-log15.v2, 2.11+git20150921.0.b105bd3-0ubuntu13, mwhudson
golang-yaml.v2, 0.0+git20170407.0.cd8b52f-1ubuntu2, mwhudson
lxcfs, 2.0.8-1ubuntu2, stgraber
parallax, 1.0.2-1, None
php7.1, 7.1.11-0ubuntu2, doko
qemu, 1:2.10+dfsg-0ubuntu5, paelzer
samba, 2:4.7.1+dfsg-1ubuntu1, doko
Total: 14

Uploads to Supported Releases (Trusty, Xenial, Zesty, Artful)

bind9, zesty, 1:9.10.3.dfsg.P4-10.1ubuntu5.3, paelzer
bind9, xenial, 1:9.10.3.dfsg.P4-8ubuntu1.9, paelzer
cloud-init, xenial, 17.1-27-geb292c18-0ubuntu1~16.04.1, smoser
cloud-init, zesty, 17.1-27-geb292c18-0ubuntu1~17.04.1, smoser
cloud-init, artful, 17.1-27-geb292c18-0ubuntu1~17.10.1, smoser
dnsmasq, xenial, 2.75-1ubuntu0.16.04.4, paelzer
golang-1.6, xenial, 1.6.2-0ubuntu5~16.04.4, mwhudson
juju-core, xenial, 2.2.6-0ubuntu0.16.04.2, mwhudson
juju-core, zesty, 2.2.6-0ubuntu0.17.04.2, mwhudson
libseccomp, xenial, 2.3.1-2.1ubuntu2~16.04.1, adconrad
libvirt, zesty, 2.5.0-3ubuntu5.6, paelzer
libvirt, xenial, 1.3.1-1ubuntu10.15, paelzer
lxcfs, xenial, 2.0.8-0ubuntu1~16.04.2, stgraber
lxcfs, zesty, 2.0.8-0ubuntu1~17.04.2, stgraber
lxcfs, artful, 2.0.8-0ubuntu1~17.10.2, stgraber
lxcfs, xenial, 2.0.8-0ubuntu1~16.04.1, stgraber
lxcfs, zesty, 2.0.8-0ubuntu1~17.04.1, stgraber
lxcfs, artful, 2.0.8-0ubuntu1~17.10.1, stgraber
php7.0, zesty, 7.0.25-0ubuntu0.17.04.1, nacc
php7.1, artful, 7.1.11-0ubuntu0.17.10.1, nacc
sssd, xenial, 1.13.4-1ubuntu1.9, paelzer
ubuntu-advantage-tools, trusty, 10ubuntu0.14.04.2, slashd
ubuntu-advantage-tools, xenial, 10ubuntu0.16.04.1, sil2100
ubuntu-advantage-tools, zesty, 10ubuntu0.17.04.1, sil2100
ubuntu-advantage-tools, artful, 10ubuntu0.17.10.1, paelzer
Total: 25

Contact the Ubuntu Server team

14 November, 2017 06:03PM


Linux Mint 18.3 “Sylvia” Cinnamon – BETA Release

This is the BETA release for Linux Mint 18.3 “Sylvia” Cinnamon Edition.

Linux Mint 18.3 Sylvia Cinnamon Edition

Linux Mint 18.3 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.3 Cinnamon”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.3 Cinnamon

System requirements:

  • 1GB RAM (2GB recommended for comfortable usage).
  • 15GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs; please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.
  • It will also be possible to upgrade from Linux Mint 18.2. Upgrade instructions will be published after the stable release of Linux Mint 18.3.

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or which cause we understand are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.


We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

14 November, 2017 06:02PM by Linux Mint

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Dell Precision Machines Available With Ubuntu Pre-Installed

Looking for a new PC with Ubuntu pre-installed? We thought we’d run through the latest Dell Precision computers that come pre-installed with Ubuntu. These are systems developed by and for developers, available in form factors ranging from sleek ultrabooks to powerful workstations. Here’s a quick run-through of the latest offerings!

Dell Precision 5720

Certification Details
More Info

Dell Precision 5520

Certification Details
More Info

Dell Precision 3520

Certification Details
More Info

Dell Precision 7520

Certification Details
More Info

Dell Precision 7720

Certification Details
More Info

14 November, 2017 04:03PM

hackergotchi for Xanadu developers

Xanadu developers

The perceptron and the multilayer perceptron: what are they and what are they for?

Within the field of neural networks, the term perceptron has two meanings. It can refer to a type of artificial neural network developed by Frank Rosenblatt and, within the theory put forward by Rosenblatt, it can also be understood as an artificial neuron … Read more
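For readers new to the topic, here is a minimal sketch of the classic Rosenblatt learning rule (illustrative only, not taken from the linked article):

```python
# Minimal Rosenblatt perceptron: a single artificial neuron trained with the
# classic error-correction rule. Illustrative only; not from the linked post.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0 or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so a single perceptron can learn it
# (XOR, famously, needs the multilayer variant).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

The inability of this single neuron to learn XOR is precisely what motivates the multilayer perceptron the title mentions.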

14 November, 2017 01:13PM by sinfallas

hackergotchi for Tails


Tails 3.3 is out

This release fixes many security issues, and users should upgrade as soon as possible.


Upgrades and changes

  • Update Tor to which saves bandwidth when starting.

  • Update Tor Browser to 7.0.10.

  • Update Thunderbird to 52.4.0.

  • Update Linux to 4.13.0.

Fixed problems

  • Fix UEFI support for USB sticks installed using Universal USB Installer. (#8992)

  • Fix errors on file system creation in Tails Installer when the target USB stick is plugged in before starting Tails Installer. (#14755)

  • Fix Tails Installer on Debian sid and recent versions of udisks2. (#14809)

  • Fix the screen reader and screen keyboard in Tor Browser and Thunderbird. (#14752, #9260)

  • Make the configuration of the keyboard layout more robust when starting a session. (#12543)

For more details, read our changelog.

Known issues

  • Due to an issue in Tor Browser, the documentation shipped in Tails doesn't open in Tor Browser anymore and lacks our sidebar. The warning page of the Unsafe Browser also lacks graphical design. (#14962)
  • Starting Tails 3.3 from DVD takes more than twice as long as earlier releases. (#14964)

See the list of long-standing issues.

Get Tails 3.3

What's coming up?

Tails 3.5 is scheduled for January 16.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

14 November, 2017 12:34PM

hackergotchi for Xanadu developers

Xanadu developers

Quantum up close: what is a web browser engine?

This is a translation of the original article published on the Mozilla Hacks blog. Translation by Sergio Carlavilla Delgado. In October of last year, Mozilla announced Project Quantum – our initiative to create a web browser engine of a new … Read more

14 November, 2017 11:49AM by sinfallas

hackergotchi for OSMC


Win a Vero 4K!

In February, we announced Vero 4K. The latest update to our flagship device and the best way to experience OSMC.

We're pleased to announce that we're giving away one Vero 4K and a runner up prize of £50 store credit. For your chance to win, enter here.

The competition will close on 24th November 2017.

Don't miss your chance to get your hands on the best way to experience OSMC for free -- good luck!

14 November, 2017 01:24AM by Sam Nazarko

November 13, 2017

hackergotchi for Purism PureOS

Purism PureOS

Announcing the Librem Phone Ringtone Contest winners

As part of our Librem 5 phone campaign page, we included a public ringtone contest. The response was overwhelming, and picking the winners was no easy task for our team: we had to listen to and rank over 150 sounds sorted into 5 categories! The most intense battle took place in the ringtone category, where the winner won by a mere 3% of our votes. Now that the list of winners and runners-up is final, we will contact the winners to inform them that they have won a Librem 5 phone! Here are the top-ranked entries we received.


1. by Feandesign — 14.8%
2. by Imre Gombos — 11.1%
3. by Nohumanconcept — 11.1%
4. Others — 63.0%


1. by Feandesign — 25.0%
2. by Yuri Witte — 17.3%
3. by Úlfur (“Quantum Ringtone”) — 11.5%
4. Others — 46.2%


1. by Antonio Paternina Alvarez — 21.8%
2. by “Brad in NZ” — 14.5%
3. by Dinesh Manajipet — 7.3%
4. Others — 56.4%

Text message

1. by Baptiste Gelez — 22.9%
2. by Oliver Owen (“Upward Bell”) — 14.6%
3. by Oliver Owen (“Pop”) — 12.5%
4. Others — 50.0%

Email notification

1. by Baptiste Gelez — 36.8%
2. by Pablo Somonte — 16.3%
3. by Feandesign — 10.2%
4. Others — 36.7%

Congratulations to all the winners, and many thanks to all who participated! The #1-ranked sounds above will be featured as the default sounds used by the Librem 5 phone. You will of course be able to choose to use your own sounds if you prefer—it is, after all, your phone.

13 November, 2017 10:15PM by Mladen Pejaković

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: LXD Weekly Status #23


The main focus this past week has been on merging a pretty large refactoring branch on top of LXD. This moves a lot of code around to make it more testable and easier to plug in a new database implementation in preparation for some clustering features.

We’ve made a few minor improvements, like adding a new “lxc operation” command, letting users peek into what LXD is currently doing in the background. We’ve also been expanding our static analysis tests to catch typos and a number of potential issues (unchecked variables).

We’re also excited to see a number of students from the University of Texas get involved in LXC and LXD. We’ve already merged a small change to LXC from them and expect to see more contributions very soon!

On the LXC front, a bunch of work has been done to improve console handling: supporting an in-memory ring buffer to hold the console backlog, new API functions to query and reset that backlog, and a number of cleanups around the detach key binding and associated messages.

And that’s before the usual set of bugfixes and stable release work for all projects!

This week, we’re going to be releasing LXD 2.20 and do quite a bit of work on the LXD stable branch following that big refactoring we did this past week.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.




Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.


  • LXD 2.19-0ubuntu1 was finally released to 18.04 users
  • LXCFS 2.0.8-1ubuntu2 was uploaded to 18.04, fixing an upgrade issue
  • The pending SRUs for LXCFS have all been updated to fix an upgrade issue


  • The LXD snap is now built using Go 1.9.2

13 November, 2017 09:12PM

Cumulus Linux

NetDevOpEd: The power of network verification

Microsoft just published information on their internal tool called “CrystalNet”, which Microsoft defines as “a high-fidelity, cloud-scale network emulator in daily use at Microsoft. We built CrystalNet to help our engineers in their quest to improve the overall reliability of our networking infrastructure.” You can read more about their tool in this detailed ACM paper. But what I want to talk about is how this amazing technology is accessible to you, at any organization, right now, with network verification using Cumulus VX.

What Microsoft has accomplished is truly amazing. They can simulate their network environment and prevent nearly 70% of the network issues they experienced in a two-year period. They have the ability to spin up hundreds of nodes with the exact same configurations and protocols they run in production. By applying network tests against this environment, they verify whether proposed changes will have a negative impact on applications and services. This work took the team of Microsoft researchers over two years to develop. It’s really quite the feat!

What I find exciting about this is it validates exactly what we at Cumulus have been preaching for the last two years as well. The ability to make a 1:1 mirror of your network, with matching ports, protocols, software and features, and the ability to run automated tests against this environment is the next frontier in network management.

When we released Cumulus VX, our virtual Cumulus Linux platform, in 2015, we knew immediately how powerful a tool it would be. Our field teams are able to do more than 90% of all their testing and training on Cumulus VX. Our QA teams use Cumulus VX to test any software-based features like routing or TACACS. Even our consulting team has moved to 100% Cumulus VX based training for our instructor-led bootcamp training courses.

A common question I see from customers is “what’s different about Cumulus VX, compared to other vendor VM platforms?” The difference is subtle, but incredibly important. First, a quick recap of the Cumulus Linux architecture. Cumulus Linux is a complete Linux distribution, based on Debian Jessie. Cumulus relies on the Linux kernel as the source of truth for all things on the system. This means every application that runs on a Cumulus Linux based switch is just an unmodified Linux application. In fact, we have customers using our routing suite, FRR, directly on their Linux based servers.

network verification
What’s highlighted in the image is “switchd”, our switch driver, which takes information from the Linux kernel, like VxLAN tunnels, routes or MAC addresses, and programs it into the switch hardware to give line-rate performance. What is important here is that switchd relies on the Linux kernel for this information: it only programs the hardware based on what is in the Linux kernel software. If it’s not in the software, it’s not in the hardware.

But again, how is this different from the VMs provided by other network vendors? The difference is that for everything we do, we rely on the software to be the source of truth. Our competitors frequently write “platform dependent” features with no software layer at all: the CLI commands directly program an ASIC or line card, with nothing in between. This means that without the hardware (like in a VM) the feature doesn’t work at all. Have you ever used GNS3 and found out you couldn’t enable a VLAN, an ACL or a VxLAN tunnel? This is exactly why. A VM without the features you are running in production isn’t very useful, now is it?

By relying on the software as the source of truth, any feature that works on a switch will work exactly the same in Cumulus VX. Furthermore, we can use standard Linux techniques to map virtual interface names to exactly what is cabled in production. Even if you skip ports, utilizing our open source topology converter tool, no matter what ports are in use, we can produce a virtual environment that is an exact replica of your physical switch.
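One such standard technique (an illustrative sketch only; the open source topology converter automates this for you, and the MAC addresses below are made up) is a udev rule that pins interface names to NIC MAC addresses:

```
# /etc/udev/rules.d/70-persistent-net.rules  (illustrative values)
# Rename the VM NICs so the virtual ports match production cabling,
# even when ports are skipped (e.g. swp1 and swp49, nothing in between).
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:01", NAME="swp1"
ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="44:38:39:00:00:31", NAME="swp49"
```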

With the ability to build a system of 100s of virtual switches together, cabled exactly as you would cable them in production, with the exact same software features you have in production, the possibilities are endless. We’ve shown how customers can build automated continuous integration/continuous delivery (CI/CD) pipelines so that any proposed network change can automatically be validated against a set of user defined tests. Some of our customers have even shown off what they are doing. We’ve made this even easier for them with Cumulus NetQ; by leveraging the “netq check” commands, customers don’t even need to manually write test suites as part of their testing pipeline. Imagine replacing 100s of lines of python code with a simple “netq check bgp” to see if BGP is running correctly on every device, no matter if there are 4 switches or 400.
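Such a pipeline stage could be sketched as follows. This is only an illustration: the list of checks beyond “netq check bgp” is assumed, and the netq invocation is stubbed with a plain echo so the gating logic stands on its own.

```shell
#!/bin/sh
# CI gate sketch: run a series of "netq check" validations against the
# Cumulus VX replica and abort the deployment on the first failure.

run_check() {
    # A real pipeline would invoke the CLI directly:  netq check "$1"
    echo "netq check $1: passed"    # stand-in so this sketch is runnable
}

for check in bgp mtu vxlan; do
    if ! run_check "$check"; then
        echo "netq check $check failed, aborting deployment" >&2
        exit 1
    fi
done
echo "all checks passed, deployment approved"
```

If “netq check” follows the usual convention of returning a non-zero exit code on failure, a loop like this drops into any CI system with no extra glue.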

If you want to know how you can do in weeks what took a Microsoft research team two years, reach out to your friendly neighborhood sales team to learn more about Cumulus VX and NetQ.

This blog post is part of a series called “NetDevOpEd” where various Cumulus employees and partners write an “op-ed” style piece on an industry topic.

The post NetDevOpEd: The power of network verification appeared first on Cumulus Networks Blog.

13 November, 2017 06:36PM by Pete Lumbis

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Latte Dock v0.7.2 arrives in KDE and Kubuntu backports PPA

Latte Dock, the very popular dock/panel app for Plasma Desktop, has released its new bugfix version 0.7.2. This is also the first stable release since Latte Dock became an official KDE project at the end of August.



Version 0.7.1 was added to our backports PPA in a previous round of backports for Kubuntu 17.10 Artful Aardvark.

Today that has been updated to 0.7.2, and a build added for Kubuntu 17.04 Zesty Zapus users.

The PPA can be enabled by adding the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications and Plasma releases, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial number of upgraded packages in addition to Latte Dock.

~ The PPA will also continue to receive further bugfix updates when they become available, and further updated releases of Plasma and applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list:
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
3. Kubuntu PPA bugs:

13 November, 2017 06:10PM

hackergotchi for VyOS


Permission denied issues with AWS instances

Quick facts: the issue is caused by an unexpected change in the EC2 system, there is no solution or workaround yet but we are working on it.

In the last week a number of people reported an issue with newly created EC2 instances of VyOS: they could not log in to their newly created instance. At first we thought it might be an intermittent fault in AWS, since the AMI had not changed and we could not reproduce the problem ourselves, but the number of reports grew quickly, and our own test instances started showing the problem as well.

Since EC2 instances don't provide any console access, it took us a bit of time to debug. By juggling EBS volumes we finally managed to boot an affected instance with a disk image modified to include our own SSH keys.

The root cause is in our script that checks whether the machine is running in EC2. We wanted to produce the AMI from an unmodified image, which required including a script that detects the EC2 environment: executing a script that obtains an SSH key from a remote (even if link-local) address is a security risk, since in a less controlled environment an attacker could set up a server to inject their keys into all VyOS systems.

The key observation was that in EC2, both the system-uuid and system-serial-number fields in the DMI data always start with "EC2". We thought this was a good enough condition, and for the few years we've been providing AMIs, it indeed was.

However, Amazon changed this without warning: the system-uuid may no longer start with "EC2" (serial numbers still do), and VyOS instances stopped executing their key fetching script.
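A fixed check would have to trust the serial number alone. The following is only a sketch of that idea (the function name and sample values are hypothetical, not the actual VyOS script):

```shell
#!/bin/sh
# Sketch: decide whether we are running in EC2 from DMI data.
# In the real environment the value would come from:
#   dmidecode -s system-serial-number
# The system-uuid is no longer trusted, since Amazon changed its format.
is_ec2() {
    case "$1" in
        [Ee][Cc]2*) return 0 ;;   # EC2 serial numbers still start with "EC2"
        *)          return 1 ;;
    esac
}

if is_ec2 "ec2abcd-1234-5678-90ab-cdef01234567"; then
    echo "running in EC2, fetching SSH public key"
fi
```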

We are working on the 1.1.8 release now, but it will go through an RC phase, while the solution to the AWS issue is needed right now. We'll contact Amazon support to see what the options are; stay tuned.

13 November, 2017 05:33PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: The $150 Personal Development Kit

I am a pretty firm advocate of personal development. I don’t mean those cheesy self-help books that make you walk on coals, promise you a “secret formula” for wealth, and merely bang on about motivation and inspiration. That stuff is largely snake oil.

No, I mean genuine personal development: building discipline and new skills with practice, focus, and patience.

This kind of work teaches you to look at the world in a different way, to sniff out opportunity more efficiently, to treat challenges and (manageable) adversity as an opportunity to grow, to treat failure as a valuable tool for improvement, and to get a better work/life balance.

There is no quick pill or shortcut with this stuff: it takes work, time, patience, and practice, but it is a wonderful investment in yourself. It can reap great rewards in happiness, relationships, productivity, and more.

Sometimes I recommend some personal development resources (that I have found invaluable) when I speak at conferences, and it struck me that it might be helpful to package this up into a $150 Personal Development Kit: a recommended collection of items you can buy to get you a good start. It is a worthwhile investment.

IMPORTANT NOTE: these are merely my own recommendations. I am not making money from any of this, there are no referral links here, and I am not being asked to promote them. These are products I have personally got a lot of value out of, but of course, your mileage may vary.

Overall Approach

The items I am recommending in the kit are based upon what I consider to be the five key goals we should focus on in ourselves:

  1. Structured – with so much detail in the world, we often focus on only the urgent things, but not the important things. As such, we get stuck in a rat race. We should aim to look ahead, plan, and use our time and energy wisely so we can balance it on the things we need to do and the things we love to do.
  2. Reflective – we should always evaluate our experiences (both good and bad) to see how we can learn and improve. We want to develop a curiosity that manifests in positive adjustments to how we do things.
  3. Stoic – life will throw curveballs, and we need to train ourselves to manage adversity with logic, not emotion, and to find opportunity even in challenging times. This will strengthen us.
  4. Mindful – we need to train ourselves to manage our minds so they are less busy and have a little more space. This will help with focus and managing stress.
  5. Habitual – the only way in which we grow and improve is to build good habits that implement these changes. As such, we should be explicit in how we design these habits and stick to them.

Let’s now run through these recommendations and I will provide some guidance on how to use them near the end of this post.


Books

Reading is a critical component in how we grow. Much of humanity’s broader wisdom has been documented, so why not learn from it?

One of the most valuable devices I have ever bought is an Amazon Kindle because it makes reading so convenient. If you are strapped for cash though, go and join your local library. Either way, make a few moments for reading each day (for me it is before bed), it is worth it.

Seven Habits Of Highly Effective People

While the title may sound like a tacky self-help effort, this book is fantastic, and a good starting point in this kit. It is, for me, the perfect starting point for personal development.

Essentially it teaches seven key principles for focusing on the right opportunities/problems, being proactive, getting the most value out of your work, building your skills, and more.

These are not trendy quick fixes: they are consistent principles that have stood the test of time. They are presented in simple and practical ways and easily applicable. This provides a great framework in which to base the rest of the kit.

The Obstacle Is The Way

I have become quite the fan of stoicism, an ancient philosophy that teaches resilience and growth in the most testing of times. Stoicism is a key pillar in effective personal development: it builds resilience and strength.

While the seven habits touches on some stoic principles, this book delves into further depth. It teaches us that in every challenge there is an opportunity for learning and growth. It helps us to train ourselves to manage challenging situations with logic and calmness as opposed to emotion and freaking out.

This book is one that I always recommend to people going through a tough time: it is wonderful at resetting our perspectives and showing that all scenarios can be managed more effectively if we approach them with the right mental perspective. This gives us confidence, resilience, and structure.

The Daily Stoic

When you have read The Obstacle Is The Way, this book is wonderful at keeping these stoic principles front and center. It provides a daily “meditation”, a key stoic principle to read, consider, and think about throughout the day.

I have found this really helpful. Part of personal development is building new ideas and mental frameworks in your head in which to apply to your life. This book is handy for applying the stoic piece so it doesn’t just remain an abstract concept, but something you can directly muse on and apply.

As with all of these methods and principles, they only stick if you practice. This book is a great way to build this discipline.


Nudge

The previous books are designed to build your psychological and organizational armor. While not strictly a personal development book, Nudge is more focused on our approach to problems.

In a nutshell, Nudge demonstrates that we make effective changes to problems with lots of small “nudges”. That is, instead of running in there with a big new solution, apply a collection of mini-solutions that move the needle and you will make more progress. This is huge for solving organizational issues, dealing with complicated people, taking on large projects, and more.

Services and Apps

In addition to the above books, there are also some key services and apps that I want to include in this kit.

Headspace 1 Year Subscription

Our lives are riddled with complexity, and as we get increasingly connected with social media, cell phones, and more, our minds are busier than ever before.

As such, meditation is a key personal development tool in managing our minds. In much the same way the previous books help shape a healthier and more pragmatic perspective, meditation is a key companion for this. There are numerous scientific benefits to meditation, but I have found it to be an invaluable tool in maintaining a calm, logical, and pragmatic perspective.

While there are various meditation services, I love Headspace. It is a little more expensive, but it is worth it. All you need is a pair of headphones and a computer/phone/tablet to get started.

You can join a plan on a month to month basis, but I included the 1 year plan in the kit because this should not be a temporary fad…it is a critical component throughout the year.


HabitBull

  • Free

The key to making all of the above stick is to practice every day until it becomes a habit. The general wisdom is that it takes 66 days to build a habit, so simply try to practice all of these principles once a day for 66 days straight. After this long you generally won’t have to think about doing something, it will just be part of your routine.

HabitBull (and many similar apps) simply provide a way to track these habits and when you stick to them. This is helpful in seeing your progress, just make sure you use it!

How To Use These

Now, before you get started, it is important to know that benefitting from these different elements of the kit is going to take some discipline.

There is no magic pill here: it will take practice and you will have some good days and bad days. Remember though, even doing a little each day has you lapping those doing nothing.

So, this is how I recommend you use these resources:

  • In HabitBull add some habits to track. Our goal is to stick to these every day for 66 days. Add items such as:
    • Reading (10mins a day)
    • Meditation (10mins a day)
    • Exercise (10mins a day)
  • Start by reading The Seven Habits of Highly Effective People.
  • At the same time start using Headspace and run through the three Basic packs which will take 30 days (10mins a day).
  • The next book to read is The Obstacle Is The Way. Again, while reading books, continue using Headspace and move onto the themed Headspace packs. Focus on the Prioritization pack next and then the Stress pack. Also listen to the Daily Headspace session, which is only 3 mins long each day.
  • When you have completed The Obstacle Is The Way, start reading an entry every day from The Daily Stoic (add a habit to HabitBull to track this) and also begin reading Nudge. Again continue using Headspace throughout this.

The most important thing here is building the habit. Do something every day. Even if it means putting it in your calendar, make sure you apply yourself to the above every day.

Further Recommendations?

These are my recommendations for the kit. What else do you think should be included?

What other approaches and methods have you also found to be helpful?

Share your thoughts in the comments!

The post The $150 Personal Development Kit appeared first on Jono Bacon.

13 November, 2017 04:00PM

Ubuntu Insights: How to deploy one or more Kubernetes clusters to a single box

This article originally appeared at Rye Terrell’s blog

In a recent collaboration between the Linux Foundation and Canonical, we designed an architecture for the CKA exam. In order to keep the exam as affordable as possible, we needed to optimize our resource utilization — ideally, by running multiple Kubernetes clusters on a single VM. This is how we did it.

Kubernetes has developed a dedicated following because it allows us to build efficient, robust architectures in a declarative way. Efficient because they are built on top of container technology, and robust because they are distributed.

While the distributed nature of the applications we build on top of Kubernetes allows them to be more robust in terms of availability, that same architecture has the potential to, ironically, reduce application robustness in terms of ease of testing. Consider the following generic architecture:

Generic multi-cluster deployment of Kubernetes on top of VMs

Testing an application built on top of such a deployment has at least two major challenges. One is simply the cost of the resources needed for a test deployment. You may need many VMs to model your application sufficiently. If you want to parallelize your testing, you’re going to be paying that cost for each deployment.

Another challenge is the time required to bring up the deployment — we need to wait for VMs to be instanced, available, provisioned, and networked. Any time that can be saved there can be devoted to more testing.

We need to go deeper.

We made our applications more efficient by leveraging container technology against our processes. What if we could do the same thing, but leverage it instead against our machines? What if we could make our deployment look like this:

Generic multi-cluster deployment of Kubernetes on top of Linux containers

…a single box with multiple containers acting as the nodes of our clusters. With this architecture, we save on resources because, while we may require a larger machine to serve as the host, we’re not wasting resources on many more underutilized smaller boxes. Additionally, we save significant time not waiting for the containers to be instanced and become available: they’re nearly as instant as Docker containers. And perhaps most exciting, we can save this deployment as a virtual machine image to both reduce the provisioning time costs tremendously and make our deployment reproducible.

We can rebuild him. We have the technology.

While the container technologies utilized by Kubernetes wrap processes, we need something that will containerize an entire system. Linux containers are well suited to this — from their documentation:

The goal of LXC is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel.

Perfect. Let’s try it out.

$ lxc launch ubuntu:16.04 hello-kubernetes

Creating hello-kubernetes 
Starting hello-kubernetes 

$ lxc exec hello-kubernetes /bin/bash 

root@hello-kubernetes:~# lsb_release -a 

No LSB modules are available. 
Distributor ID: Ubuntu 
Description: Ubuntu 16.04.3 LTS 
Release: 16.04 
Codename: xenial 

root@hello-kubernetes:~# systemctl list-units 

UNIT                      LOAD   ACTIVE     SUB        DESCRIPTION
dev-sda1.device           loaded activating tentative  dev-sda1.device
-.mount                   loaded active     mounted    /
dev-.lxd\x2dmounts.mount  loaded active     mounted    /dev/.lxd-mounts
dev-full.mount            loaded active     mounted    /dev/full
dev-fuse.mount            loaded active     mounted    /dev/fuse

Boom. Complete containerized system.

It’s alive!

Now that we know how to create a containerized system on our host, building out a Kubernetes cluster works as usual. You can use whatever tool you like for this, but I’ll be using conjure-up with CDK here because it already knows how to create Linux containers and deploy Kubernetes to them.

Let’s get started. First, we’ll tell conjure-up we want to deploy Kubernetes:

$ conjure-up canonical-kubernetes

This will bring up a wizard in the terminal, which will first ask where we want to deploy to. We’ll select localhost:

Next click on “Deploy all 6 Remaining Applications”:

Then wait a bit as the cluster is brought up:

Conjure-up will grab kubefed and kubectl for you. Click Run:

That’s it, there’s now a Kubernetes cluster running on top of Linux containers on your VM:

$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

Let’s make another one.

There’s no reason we can’t deploy multiple Kubernetes clusters, either. I’ll continue to use conjure-up here because it makes it easy, but feel free to use the tool of your choice. Let’s run conjure-up in headless mode this time:

$ conjure-up canonical-kubernetes localhost conjure-up-localhost-642 cluster2
[info] Summoning canonical-kubernetes to localhost
[info] Creating Juju model.
[info] Juju model created.
[info] Running step: pre-deploy.
[info] Deploying kubernetes-master...
[info] Deploying flannel...
[info] Deploying kubernetes-worker...
[info] Deploying easyrsa...
[info] Deploying kubeapi-load-balancer...
[info] Deploying etcd...
[info] Exposing kubeapi-load-balancer.
[info] etcd: deployed, installing.
[info] kubeapi-load-balancer: deployed, installing.
[info] easyrsa: deployed, installing.
[info] flannel: deployed, installing.
[info] kubernetes-master: deployed, installing.
[info] Setting relation easyrsa:client <-> etcd:certificates
[info] Exposing kubernetes-worker.
[info] kubernetes-worker: deployed, installing.
[info] Setting relation flannel:cni <-> kubernetes-worker:cni
[info] Setting relation easyrsa:client <-> kubeapi-load-balancer:certificates
[info] Setting relation kubeapi-load-balancer:apiserver <-> kubernetes-master:kube-api-endpoint
[info] Setting relation kubeapi-load-balancer:website <-> kubernetes-worker:kube-api-endpoint
[info] Setting relation flannel:cni <-> kubernetes-master:cni
[info] Setting relation etcd:db <-> flannel:etcd
[info] Setting relation easyrsa:client <-> kubernetes-master:certificates
[info] Setting relation kubernetes-master:kube-control <-> kubernetes-worker:kube-control
[info] Setting relation kubeapi-load-balancer:loadbalancer <-> kubernetes-master:loadbalancer
[info] Setting relation easyrsa:client <-> kubernetes-worker:certificates
[info] Setting relation etcd:db <-> kubernetes-master:etcd
[info] Waiting for deployment to settle.
[info] Running step: 00_deploy-done.
[info] Model settled.
[info] Running post-deployment steps
[info] Running step: step-01_get-kubectl.
[info] Running step: step-02_cluster-info.
[info] Running step: step-03_enable-cni.
[info] Installation of your big software is now complete.
[warning] Shutting down

And here’s our second cluster:

$ kubectl cluster-info

Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy


Let’s make it reproducible.


Now that we have a reasonably sophisticated deployment, let’s see about reproducing it quickly. First we’ll create an AMI of our instance:

$ aws ec2 create-image --instance-id i-02c547825d34d345e --name cluster-in-a-box

And then kick off an instance of it and ssh in:

$ aws ec2 run-instances --count 1 --image-id ami-5f9eb33a --instance-type t2.xlarge --key-name mykey

Let’s check on our clusters. First the original cluster:

$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

And our second cluster:

$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

Looks good!


Linux containers allow us to containerize full Linux systems. We can utilize this to deploy Kubernetes clusters quickly & cheaply — ideal for testing deployments without unnecessary resource & time overhead.

Can you think of other uses for a Kubernetes cluster-in-a-box? I’d love to hear about it — leave a comment below!

If you found this interesting, Marco Ceppi and I will be giving a talk about it at KubeCon 2017. Hope to see you there!

13 November, 2017 03:12PM

Matthew Helmke: A Practical Guide to Linux Commands, Editors, and Shell Programming, Fourth Edition

I was the sole editor and contributor of new content for A Practical Guide to Linux Commands, Editors, and Shell Programming, Fourth Edition.

I want to note that I feel I am standing on the shoulders of a giant, as the previous author, Mark Sobell, has been incredibly helpful in the handoff of the book. Mark is retiring and leaving behind a great foundation for me.

13 November, 2017 01:51PM


Monthly News – November 2017

Many thanks for your donations. Your help and support are greatly appreciated: they empower us, of course, but they’re also a huge boost in confidence and motivation. Many thanks to all of you who help our project.

Linux Mint 18.3 BETA

The BETA for the Cinnamon and the MATE editions will be released this week.

We hope you’ll enjoy them and we look forward to receiving your feedback. We’ll announce their official release in a couple of days.

Last minute changes

Some improvements got into Linux Mint 18.3 at the last minute. These include:

  • Better out of the box support for spell-checking and synonyms in English, German, Spanish, French, Italian, Portuguese and Russian.
  • Easy installation of Skype, Google Earth and WhatsApp in the Software Manager
  • In MATE’s mintmenu: Recently used apps

There are also a couple of changes in Xfce and KDE, but it’s a bit too early to report on them (we’ll release these editions after Cinnamon and MATE).

Read the Docs

We’re happy to announce we’ll be porting our documentation to a promising service called “Read the Docs”.

Our Linux Mint User Guide helped us for years. It was updated continuously and translated into more than 20 languages. It’s been a great resource for new users. Although the LibreOffice format in which it was written made it easy for anyone to review or translate it, it also made maintenance and development harder than they should have been. Changes are hard to keep track of, and the documents in the various languages are drifting further and further apart.

If we were to rewrite a paragraph, that effort would only benefit one language in the short term, and likely only a select few long term.

The structure of the user guide is also problematic. It’s a big document which covers many different aspects. It tries to do too much, and doesn’t necessarily do it very well. It goes into too much detail on some topics but lacks important information on others.

During the Linux Mint 19 development cycle we’ll write small guides, each dedicated to their own topic. In particular, we’ll write:

  • An installation guide
  • A developer guide
  • A troubleshooting/bug-reporting guide
  • A getting-started guide with an overview of the Mint project and its community

Read the Docs was chosen primarily because it separates content and layout, and because it allows us to simply write content and have it automatically translated, built and hosted as documentation you can read online (in HTML) or offline (in PDF/ePUB).

The documentation will be written in reStructuredText (RST) and version-controlled on Github. We’ll be using Gettext to generate translation templates which will be imported into Launchpad. In other words, the documentation will be written and translated the same way we already write and translate our software projects. Read the Docs will then continuously update the hosted documentation: whenever it changes, it will automatically be rebuilt.

Note: Flatpak is already using Read the Docs. Don’t hesitate to visit to see how it looks. Their documentation is available in English, French and Spanish. We’ll need some time to write our guides, but from experience and knowing how amazing the translation teams are on Launchpad (the pace at which they translate our code is quite humbling… in a matter of days we’re usually ready to ship any new features, translated in all major languages), we’re quite confident we’ll have documentation in more than 20 languages pretty fast.



Linux Mint is proudly sponsored by:

Donations in October:

A total of $8,998 was raised thanks to the generous contributions of 483 donors:

$200, Luc S.
$131, Mark W.
$109 (5th donation), Sten L.
$109 (3rd donation), Frank B.
$109 (2nd donation), Jean-yves B.
$109, Detlef S.
$109, Tom S.
$100 (4th donation), Markus S.
$100 (3rd donation), Steve D. aka “taosld”
$100 (3rd donation), Gary M.
$100, Flemming M.
$100, Steven J.
$100, Per H. L.
$100, Andrew S.
$100, Dindar N.
$65 (3rd donation), Ion L. I.
$60 (3rd donation), Martin R.
$60, Wayne R.
$54 (8th donation), Claude M.
$54 (7th donation), Volker P.
$54 (2nd donation), Christian T.
$54, Barretteau M.
$54, Kevin D.
$54, Hans J. L.
$54, Rafael S. A.
$54, Krol S.
$54, Andreas A.
$54, Philipp R.
$54, Jacek S.
$54, Frederick K.
$50 (21st donation), Anthony C. aka “ciak”
$50 (5th donation), Thomas T. aka “FullTimer1489”
$50 (4th donation), Kenneth P.
$50 (4th donation), Douglas J.
$50 (4th donation), Cody W. H.
$50 (4th donation), JimM
$50 (3rd donation), Michael S.
$50 (2nd donation), Zhatkin A.
$50 (2nd donation), Fred W.
$50, Mark T.
$50, John C.
$50, Michael P.
$50, Odd I.
$50, K-Fi D Marketing Communications
$50, Mark H.
$50, Brian R.
$50, Graeme L.
$50, Henry S.
$50, Richard O.
$50, Michael C.
$49 (34th donation), Mark W.
$44, Jakob V.
$44, Olivier H. W. aka “Kuripot”
$40, John B.
$40, John W.
$40, Lance W. G.
$40, Josip B.
$40, Erik C.
$38 (2nd donation), Wolfgang CP Gensch
$36, JBHoren
$35 (5th donation), Jeff S.
$35, Charles A.
$35, Jeffrey K.
$33 (92nd donation), Olli K.
$33 (8th donation), Julian M.
$33 (4th donation), John H.
$33, Jean-pierre J.
$33, Carl A.
$30, Rex H.
$30, Mickael D.
$27 (13th donation), Ky LMDE
$27 (6th donation), Jon Marks aka “ESL Materials Writer
$27 (4th donation), Jan B.
$27 (3rd donation), Martin W.
$27, Antonio aka “pengu73”
$27, Bernd J.
$27, David W.
$27, Brian L.
$27, Günter S.
$25 (75th donation), Ronald W.
$25 (31st donation), Curt Vaughan aka “curtvaughan ”
$25 (6th donation), Eric W. aka “powerwagon75”
$25 (6th donation), Michael Welch aka “Dr. Mike
$25 (5th donation), Bill R.
$25 (3rd donation), James M.
$25 (3rd donation), Dennis B.
$25 (3rd donation), J. C. .
$25 (3rd donation), Roberto O. L.
$25 (2nd donation), Gary B.
$25, Joe K.
$25, Earl P.
$25, Ted S.
$25, Robert F.
$25, Tatesawa O.
$25, Bruce R.
$25, Jim H.
$25, Marco C.
$25, Rogulin P.
$25, Andrew S.
$25, Nathaniel V.
$25, Stephen P.
$25, PCSW Inc
$25, Morten K.
$22 (10th donation), Ross M aka “ro55mo”
$22 (9th donation), Johann J.
$22 (4th donation), Malte J.
$22 (3rd donation), nobody
$22 (3rd donation), Andreas R.
$22 (3rd donation), CySoTec
$22 (3rd donation), nobody
$22 (3rd donation), Tom B.
$22 (2nd donation), Thomas L.
$22 (2nd donation), Florent G.
$22 (2nd donation), Michael T.
$22 (2nd donation), Wolfgang B. aka “Bösi”
$22 (2nd donation), Jean-philippe P.
$22 (2nd donation),
$22 (2nd donation), Eric V. C.
$22, Der S.
$22, Jean R.
$22, Baptiste Z.
$22, Peter P.
$22, Wim W.
$22, Hannu K.
$22, Jean-pierre G.
$22, Ciprian T.
$22, Pavel K.
$22, nordtapete
$22, Olivier R.
$22, Gabriel G.
$22, Francis N.
$22, Ronald M.
$22, Lars H.
$22, Christophe B.
$22, Siegmar H.
$22, Linda R.
$22, Derek R.
$21 (5th donation), T. P. .
$20 (31st donation), Curt Vaughan aka “curtvaughan ”
$20 (13th donation), Jeffery J.
$20 (6th donation), Hubert Banas
$20 (5th donation), Samarth M.
$20 (5th donation), Arrowhead Computer Consulting, LLC aka “Jim (JR)
$20 (5th donation), Lars Händler
$20 (2nd donation), Duncan M.
$20 (2nd donation), Peter G.
$20 (2nd donation), Vitali V.
$20 (2nd donation), Miguel G.
$20 (2nd donation), Kleiner Funk-Electronic
$20 (2nd donation), Bezantnet, L.
$20 (2nd donation), Daniel H.
$20, Eleanor L.
$20, Renan D.
$20, David S.
$20, Dominic P.
$20, David P.
$20, Yamamoto T.
$20, Vincenzo S.
$20, Edward H.
$20, Thomas J. M.
$20, Adam C.
$20, Stephen S.
$20, Ireneusz D.
$20, Alain S.
$20, John S.
$20, John M.
$20, Leopold D.
$20, Luiz N.
$20, Hallvard P.
$20, Dani Metzger
$20, Maurice S.
$20, Griffin C.
$20, Patrick K.
$18 (13th donation), Dominik K. aka “doko
$16 (11th donation), Ib O. J.
$16 (7th donation), Peter B.
$16 (3rd donation), Marc S.
$16 (2nd donation), Stefan P.
$16 (2nd donation), Sebastian B.
$16 (2nd donation), Francesca S. S.
$16 (2nd donation), Vadim G.
$16, Juan P. M. C.
$16, Laurent D.
$16, Gal J.
$16, Montefusco E.
$16, Bruno S.
$16, Marcin S. aka “senkal”
$16, Oliver S.
$15 (8th donation), Lance M.
$15 (4th donation), Robert D. aka “Wilbobob”
$15 (2nd donation), Benjamin P.
$15 (2nd donation), Susanne E. W.
$15, Kenneth W.
$15, Keven W.
$15, Mark W.
$15, PagesAtHome
$15, Constantin M.
$15, Greg S.
$15, Philip K.
$15, David K.
$13 (18th donation), Anonymous
$13, John B.
$12 (79th donation), Tony C. aka “S. LaRocca”
$12 (25th donation), JobsHiringnearMe
$12, Liu Y.
$12, Stonecot
$12, Тягливый А.
$11 (6th donation), Tom M.
$11 (5th donation), Michal W.
$11 (5th donation), Francois B. aka “Makoto
$11 (4th donation), Florian U.
$11 (4th donation), Andrew V.
$11 (4th donation), François L.
$11 (4th donation), Jacques S.
$11 (4th donation), Sachindra Prosad Saha aka “Love you grand dad”
$11 (4th donation), Joss S.
$11 (3rd donation), Rick H aka “tinyworlds
$11 (3rd donation), Emanuele Proietti aka “Manuermejo”
$11 (3rd donation), Birger T.
$11 (2nd donation), Finn H.
$11 (2nd donation), Giorgio O.
$11 (2nd donation), Martin H.
$11 (2nd donation), Łukasz S.
$11 (2nd donation), Eskild T.
$11 (2nd donation), Hans-detlef D.
$11 (2nd donation), Philipp H.
$11 (2nd donation), Christian F.
$11 (2nd donation), C. F. .
$11, Grzegorz K.
$11, Jochen S.
$11, Hans-werner H.
$11, Arturo S.
$11, Theodoros K.
$11, Richard S.
$11, Matthias O.
$11, Gunars A.
$11, Hubert R.
$11, Ingo B.
$11, Gerhard H.
$11, Martin W.
$11, Irk D. F.
$11, Matthias G.
$11, Thomas P.
$11, Alfred S. aka “Fredo”
$11, Enrico B.
$11, Roland H.
$11, Miroslav Š.
$11, Massimo T.
$11, Thibaut M.
$11, Pavel Hrnčíř aka “Milhouse”
$11, Mark Stuart aka “codeasone”
$11, Clive D.
$11, Gabor P.
$11, Spiridon T.
$11, viktor
$11, Gary W.
$11, Alexis M.
$11, Kevin C.
$11, Maciej T.
$11, Lorenz S.
$11, Zoran G.
$11, Herberth M.
$11, Frank G.
$11, Gilles D.
$11, Michael L.
$11, Dias J.
$10 (23rd donation), Thomas C.
$10 (15th donation), Julie H. aka “Kjokkenutstyr
$10 (14th donation), Frank K.
$10 (14th donation), Paul O.
$10 (10th donation), Dinu P.
$10 (9th donation), 杉林晃治
$10 (9th donation), Dinu P.
$10 (8th donation), Masaomi Yoshida
$10 (6th donation), Agenor Marrero
$10 (6th donation), Kristian O.
$10 (6th donation), Andreas S.
$10 (4th donation), Ray M.
$10 (3rd donation), David W.
$10 (3rd donation), Michael P. K.
$10 (3rd donation), Andy McBride
$10 (2nd donation), Sourav B. aka “rmad17
$10 (2nd donation), Daniel G. Marconi aka “DarkSatan
$10 (2nd donation), Sebastian D. L. aka “Sebadamus”
$10 (2nd donation), Waybackdownloader
$10 (2nd donation), Jordon B.
$10 (2nd donation), CW P.
$10 (2nd donation), Ian C.
$10 (2nd donation), Clint M.
$10 (2nd donation), Jayadevan C. R.
$10 (2nd donation), George C.
$10, Demetrios C. A.
$10, shuhari aka “shuhari”
$10, Mark H.
$10, Jose Luis Gonzalez Becerril aka “Mint Friend Forever”
$10, Justin M.
$10, Eugene C.
$10, Nikolas K.
$10, Jack H.
$10, Sergey Z.
$10, P R.
$10, Sahil Ahuja aka “GMETRI
$10, Øyvind K.
$10, Robert H.
$10, merimaat
$10, Ruston R.
$10, George C.
$10, Aaron S.
$10, Arthur L.
$10, Gordon E.
$10, Daniel-Teodor S.
$10, Abdulaziz A.
$10, Derek H.
$10, Hamilton Southern Holdings LLC
$10, Dario C.
$10, Andrew J.
$10, Anand S.
$10, Ricky G.
$10, Jairo C.
$10, Greg R.
$10, Divya S.
$10, Jan G.
$10, Abdul K.
$10, Jerry P.
$10, How W.
$10, Humberto A. S.
$10, Stephen W.
$10, Jonathan M.
$10, Kit S. H.
$10, Andrew M.
$10, Dusan T.
$10, Colin H.
$10, Alexander H.
$10, Chris K.
$10, William B. aka “TheMesquito
$8.44, Rare Earth Computing
$8 (2nd donation), Josef H. R. H.
$8, Wilfred F.
$8, Yannick S.
$7 (15th donation), CV Smith
$7 (2nd donation), Erwin B. S. M. aka “The Teacher”
$6 (4th donation), aka “AsciiWolf”
$6 (2nd donation), Krzysztof D.
$5 (17th donation), Eugene T.
$5 (14th donation), Todd A aka “thobin”
$5 (13th donation), Kjell O. B. aka “kob”
$5 (11th donation), Jim A.
$5 (9th donation), Bhavinder Jassar
$5 (6th donation), Blazej P. aka “bleyzer”
$5 (6th donation), J. S. .
$5 (6th donation), Vyacheslav K. aka “veZuk”
$5 (5th donation), NAGY Attila aka “GuBo”
$5 (4th donation), GaryD
$5 (4th donation), Jonathan Gaddi Giomini aka “JonnyBarbun87”
$5 (4th donation), Clinton Aarts
$5 (4th donation), John M.
$5 (3rd donation), Arkadiusz T.
$5 (3rd donation), Jeroen V. D. B.
$5 (3rd donation), John M.
$5 (3rd donation), Ellert H.
$5 (2nd donation), Piotr S.
$5 (2nd donation), Ovidiu I. D.
$5 (2nd donation), Patrick M.
$5 (2nd donation), Adrian N.
$5 (2nd donation), Laura NL aka “lauranl
$5 (2nd donation), Andre B.
$5 (2nd donation), Athol P.
$5 (2nd donation), Ondrej D. B.
$5 (2nd donation), Alexey K.
$5 (2nd donation), rptev
$5, Doug K.
$5, Dino Brugnolaro aka “brgdni
$5, Paweł B.
$5, Robert G.
$5, Simon S.
$5, Liu J.
$5, Arkādijs S.
$5, James B.
$5, Murray Y.
$5, Mark H.
$5, Samuel G.
$5, Ivan H.
$5, Conrad T.
$5, Udo W.
$5, Mario Filho
$5, David B.
$5, Nicholas J.
$5, David M.
$5, Susanne K.
$5, Robert W.
$5, Nikolay aka “niketechno”
$5, Alessandro M.
$5, Simo P.
$5, Jochen W.
$5, Коробейников А.
$5, Binyamin B. E.
$5, Christian U. aka “Jowe1999
$5, Tree Service Kansas City
$5, Giorgi M.
$5, Benjamin S.
$5, Robert T.
$5, PDFsam
$5, Marco S.
$5, Union Hills Software
$5, Pan W. aka “Clay”
$5, Peter R.
$5, Adam B.
$5, Agri Shop 2000
$5, Pierre E.
$5, Žarko J.
$5, Daniel K.
$5, Matthias K.
$5, Brent F.
$5, Иван Ложкин aka “Berry”
$5, Michael D.
$5, Rodney F.
$5, Iliyan P.
$5, Pierre B.
$5, Iacopo B.
$4 (9th donation), Alessandro S.
$4, Harm R.
$4, Marcelo T.
$4, Ljubiša P.
$3.43, Ethan L.
$3 (4th donation), Micoworker – Diseño de Páginas Web
$3 (3rd donation), Karl H.
$3 (3rd donation), Nuno F.
$3 (3rd donation), cheval a vendre
$3 (3rd donation), Cassio Raposa aka “Zadig
$3 (2nd donation), 정우 이
$3 (2nd donation), Octavain B.
$3 (2nd donation), Erich G.
$3 (2nd donation), Matteo P.
$3 (2nd donation), Adrien L.
$3, Flavio L. Cavalcante
$3, Yassin A.
$3, Ariel R.
$3, Wellness Geeky
$3, Timo P.
$3, Mateus M.
$2 (13th donation), Terry Poe aka “Exclusive
$2 (6th donation), Илья Кругликов aka “Ilis”
$2 (4th donation), Maxime H.
$2 (4th donation), Alfred Tan aka “pinuno”
$2 (2nd donation), Marco B.
$2 (2nd donation), Denis C.
$2 (2nd donation), Star Tipster
$2 (2nd donation), 정우 이
$2, Ernesto N. A. G.
$2, Stefano B. aka “StefanoB26”
$2, Ryszard J.
$2, Cuauhtemoc U. aka “regicide
$2, 合同会社hartanah
$26.75 from 26 smaller donations

If you want to help Linux Mint with a donation, please visit


  • Distrowatch (popularity ranking): 2578 (1st)
  • Alexa (website ranking): 4275

13 November, 2017 10:21AM by Linux Mint


Ubuntu developers

Serge Hallyn: Genoci and Lpack


I’ve been working on a pair of tools for manipulating OCI images:

  • genoci, for GENerating OCI images, builds images according to a recipe in yaml format.
  • lpack, the layer unpacker, unpacks an OCI image’s layers onto either btrfs subvolumes or thinpool LVs.

See the READMEs of both for more detailed usage.

The two can be used together to speed up genoci’s builds by reducing the number of root filesystem unpacks and repacks. (See genoci’s README for details.)


While the projects’ READMEs give examples, here is a somewhat silly one just to give an idea. Copy the following into recipe.yaml:

  cirros:
    base: empty
  weird:
    base: cirros
    pre: mount -t proc proc %ROOT%/proc
    post: umount %ROOT%/proc
    run: |
      ps -ef > /processlist
      cat > /usr/bin/startup << EOF
      echo "Starting up"
      nc -l -4 9999
      EOF
      chmod 755 /usr/bin/startup
    entrypoint: /usr/bin/startup

Then run “./genoci recipe.yaml”. You should end up with a directory “oci”, which you can interrogate with

$ umoci ls --layout oci

You can unpack one of the containers with:

$ umoci unpack --image oci:weird
$ ls -l weird/rootfs/usr/bin/startup
-rwxr-xr-x 1 root root 43 Nov 13 04:27 weird/rootfs/usr/bin/startup


I’m about to begin the work to replace both with a single tool, written in golang, and based on an API exported by umoci.


The opinions expressed in this blog are my own views and not those of Cisco.

13 November, 2017 04:37AM

hackergotchi for SparkyLinux


Linux kernel 4.14.0


The first version of the Linux kernel 4.14 line, 4.14.0, has just landed in the Sparky “unstable” repository.

Sparky’s Linux kernel is available in the Sparky “unstable” repository, so enable that repository to upgrade (if you already have an older version installed) or to make a fresh installation:

Follow the Wiki page to install the latest Sparky Linux kernel.

Then reboot your machine for the changes to take effect.

To quickly remove an older version of the Linux kernel, simply run the APTus -> Remove -> Uninstall Old Kernel tool.


13 November, 2017 12:13AM by pavroo

November 12, 2017


Ubuntu developers

Stuart Langridge: I wrote a Web Component

I’ve been meaning to play with Web Components for a little while now. After I saw Ben Nadel create a Twitter tweet progress indicator with Angular and Lucas Leandro do the same with Vue.js, I thought: here’s a chance to experiment.

Web Components involve a whole bunch of different dovetailing specs; HTML imports, custom elements, shadow DOM, HTML templates. I didn’t want to have to use the HTML template and import stuff if I could avoid it, and pleasantly you actually don’t need it. Essentially, you can create a custom element named whatever-you-want and then just add <whatever-you-want someattr="somevalue">content here</whatever-you-want> elements to your page, and it all works. This is good.
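One detail worth knowing: custom element names must contain a hyphen, which is how the parser tells them apart from built-in elements. A simplified (deliberately not spec-complete) sketch of that rule, with a helper name of my own invention:

```javascript
// Rough check of the custom-element naming rule: lowercase start,
// at least one hyphen somewhere in the name. The real spec also
// allows a range of Unicode characters; this sketch ignores those.
const isValidCustomElementName = name =>
  /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);

console.log(isValidCustomElementName("twitter-circle-count")); // true
console.log(isValidCustomElementName("whateveryouwant"));      // false: no hyphen
```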

To define a new type of element, you use window.customElements.define('your-element-name', YourClass).1 YourClass is an ES2016 JavaScript class. 2 So, we start like this:

window.customElements.define('twitter-circle-count', class extends HTMLElement {

The class has a constructor method which sets everything up. In our case, we’re going to create an SVG with two circles: the “indicator” (which is the one that changes colour and fills in as you add characters), and the “track” (which is the one that’s always present and shows where the line of the circle goes). Then we shrink and grow the “indicator” circle by using Jake Archibald’s dash-offset technique. This is all perfectly expressed by Ben Nadel’s diagram, which I hope he doesn’t mind me borrowing because it’s great.
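Stripped of all the DOM machinery, the dash-offset arithmetic is just this (a standalone sketch, runnable outside the browser; the function name is mine, not part of the component):

```javascript
// The circle has radius 45 in the SVG's internal coordinates, so its
// circumference is 2πr. Offsetting the dash pattern by the "unfilled"
// fraction of that circumference hides that part of the stroke,
// leaving value/max of the circle drawn.
function dashOffset(value, max, radius = 45) {
  const circumference = 2 * Math.PI * radius;
  return circumference - (value * circumference / max);
}

console.log(dashOffset(0, 280));    // whole circumference offset: circle empty
console.log(dashOffset(280, 280)); // offset 0: circle fully drawn
```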

So, we need to dynamically create an SVG. The SVG we want will look basically like this:

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="45"
  style="stroke: #9E9E9E"></circle>
  <circle cx="50" cy="50" r="45"
  style="stroke: #333333"></circle>
</svg>

Let’s set that SVG up in our element’s constructor:

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    /* You must call super() first in the constructor. */
    super();

    /* Create the SVG. Note that we need createElementNS, not createElement */
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");

    /* Create the track. Note createElementNS. Note also that "this" refers to
       this element, so we've got a reference to it for later. */
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    /* And create the indicator, by duplicating the track */
    this.indicator = this.track.cloneNode(true);


Now we need to actually add that created SVG to the document. For that, we create a shadow root. This is basically a little separate HTML document, inside your element, which is isolated from the rest of the page. Styles set in the main page won’t apply to stuff in your component; styles set in your component won’t leak out to the rest of the page.3 This is easy with attachShadow, which returns you this shadow root, which you can then treat like a normal node:

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);

    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);

Now, we want to allow people to set the colours of our circles. The way to do this is with CSS custom properties. Basically, you can invent any new property name you like, as long as it’s prefixed with --. So we invent two: --track-color and --circle-color. We then set the two circles to be those colours by using CSS’s var() syntax; this lets us say “use this variable if it’s set, or use this default value if it isn’t”. So our user can style our element with twitter-circle-count { --track-color: #eee; } and it’ll work.

Annoyingly, it doesn’t seem to be easily possible to use existing CSS properties for this; there doesn’t seem to be a good way to have the standard property color set the circle colour.4 One has to use a custom variable even if there’s a “real” CSS property that would be appropriate. I’m hoping I’m wrong about this and there is a sensible way to do it that I just haven’t discovered.5 (Update: Matt Machell mentions currentColor which would work perfectly for this example, but it only works for color; there’s no way of setting other properties like, say, font-size on the component and having that explicitly propagate down to a particular element in the component; there’s no currentFontSize. I don’t know why color gets special treatment, even though the special treatment would solve my particular problem.)

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    let shadowRoot = this.attachShadow({mode: 'open'});

We want our little element to be inline-block. To set properties on the element itself, from inside the element, there is a special CSS selector, :host.6 Add a <style> element inside the component and it only applies to the component (this is special “scoped style” magic), and setting :host styles the root of your element:

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    let shadowRoot = this.attachShadow({mode: 'open'});
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);

Next, we need to be able to set the properties which define the value of the counter — how much progress it should show. Having value and max properties similar to an <input type="range"> seems logical here. For this, we define a little function setDashOffset which sets the stroke-dashoffset style on our indicator. We then call that function in two places. One is in connectedCallback, a method which is called when our custom element is first inserted into the document. The second is whenever our value or max attributes change. That gets set up by defining observedAttributes, which returns a list of attributes that we want to watch; whenever one of those attributes changes, attributeChangedCallback is called.

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  static get observedAttributes() {
    return ['value', 'max'];
  }
  attributeChangedCallback(name, oldValue, newValue) {
    this.setDashOffset();
  }
  setDashOffset() {
    var mx = parseInt(this.getAttribute("max"), 10);
    if (isNaN(mx)) mx = 100;
    var value = parseInt(this.getAttribute("value"), 10);
    if (isNaN(value)) value = 0;
    this.indicator.style.strokeDashoffset = this.circumference -
        (value * this.circumference / mx);
  }
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    /* We know what the circumference of our circle is. It doesn't matter
       how big the element is, because the SVG is always 100x100 in its own
       "internal coordinates": that's what the viewBox means. So the circle
       always has a 45px radius, and so its circumference is always the same,
       2πr. Store this for later. */
    this.circumference = 3.14 * (45 * 2);
    svg.appendChild(this.track);
    svg.appendChild(this.indicator);

    let shadowRoot = this.attachShadow({mode: 'open'});
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);
    shadowRoot.appendChild(svg);
  }
  connectedCallback() {
    this.setDashOffset();
  }
});

This works if the user of the component does counter.setAttribute("value", "50"), but it doesn’t make counter.value = 50 work, and it’s nice to provide these direct JavaScript APIs as well. For that we need to define a getter and a setter for each.

window.customElements.define('twitter-circle-count', class extends HTMLElement {
  static get observedAttributes() {
    return ['value', 'max'];
  }
  attributeChangedCallback(name, oldValue, newValue) {
    this.setDashOffset();
  }
  setDashOffset() {
    var mx = parseInt(this.getAttribute("max"), 10);
    if (isNaN(mx)) mx = this.defaultMax;
    var value = parseInt(this.getAttribute("value"), 10);
    if (isNaN(value)) value = this.defaultValue;
    this.indicator.style.strokeDashoffset = this.circumference - (
        value * this.circumference / mx);
  }
  get value() {
    var value = this.getAttribute('value');
    if (isNaN(value)) return this.defaultValue;
    return value;
  }
  set value(value) { this.setAttribute("value", value); }
  get max() {
    var mx = this.getAttribute('max');
    if (isNaN(mx)) return this.defaultMax;
    return mx;
  }
  set max(value) { this.setAttribute("max", value); }
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    this.circumference = 3.14 * (45 * 2);
    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);
    shadowRoot.appendChild(svg);
    this.defaultValue = 50;
    this.defaultMax = 100;
  }
  connectedCallback() {
    this.setDashOffset();
  }
});
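The attribute-reflection pattern those getters and setters implement can be sketched without a browser at all. Here FakeElement is a stand-in of my own for HTMLElement's attribute store, purely so the idea can run outside the DOM; it is not part of the real component:

```javascript
// Property accessors that delegate to get/setAttribute keep the
// JavaScript API (counter.value = 50) and the HTML attribute API
// (setAttribute("value", "50")) in sync automatically.
class FakeElement {
  constructor() { this._attrs = {}; }
  getAttribute(name) { return name in this._attrs ? this._attrs[name] : null; }
  setAttribute(name, value) { this._attrs[name] = String(value); }
}

class Counter extends FakeElement {
  get value() {
    var value = parseInt(this.getAttribute("value"), 10);
    return isNaN(value) ? 0 : value;
  }
  set value(v) { this.setAttribute("value", v); }
}

const counter = new Counter();
counter.value = 50;                         // goes through setAttribute
console.log(counter.getAttribute("value")); // "50"
console.log(counter.value);                 // 50
```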

And that’s all we need. We can now create our twitter-circle-count element and hook it up to a textarea like this:

<twitter-circle-count value="0" max="280"></twitter-circle-count>
<p>Type in here</p>
<textarea rows=3 cols=40></textarea>
twitter-circle-count {
  width: 30px;
  height: 30px;
  --track-color: #ddd;
  --circle-color: #333;
  --text-color: #888;
}
// we use input, not keyup, because that fires when text is cut or pasted
// thank you Dave MN for that insight
document.querySelector("textarea").addEventListener("input", function() {
  document.querySelector("twitter-circle-count").setAttribute("value", this.value.length);
}, false);

and it works! I also added a text counter and a couple of other nicenesses, such as making the indicator animate to its position, and included a polyfill to add support in browsers that don’t have it.7

Here’s the counter:

Type some text in here:

  1. I relied for a lot of this understanding on Google’s web components documentation by Eric Bidelman.
  2. All this stuff is present already in Chrome; for other browsers you may need polyfills, and I’ll get to that later.
  3. Pedant posse: yes, it’s a bit more complicated than this. One step at a time.
  4. It would be possible to have color apply to our circle colour by monitoring changes to the element’s style, but that’s a nightmare.
  5. QML does this by setting “aliases”; in a component, you can say property alias foo: and setting foo on an instance of my component propagates through and sets bar on the subelement. This is a really good idea, and I wish Web Components did it somehow.
  6. Firefox doesn’t seem to support this yet, either :host or scoping styles so they don’t leak out of the component, so I’ve also set display:inline-block and position:relative on the twitter-circle-count selector in my normal CSS. This should be fixed soon.
  7. Mikeal Rogers has a really nice technique here for bundling your web component with a polyfill which is also worth considering.

12 November, 2017 11:31AM

November 11, 2017

hackergotchi for Tails


Tails report for October, 2017


Documentation and website

  • We improved our donation page in preparation of the donation campaign to mention CCT instead of Zwiebelfreunde and be better structured.

User experience

  • We installed LimeSurvey on our infrastructure and advertised a first survey on file storage encryption from the homepage of Tor Browser in Tails. Our users have been very responsive to our call and since then we have gathered 30 complete answers to the survey each day on average, reaching 375 in total on October 30.

  • We extensively tested older and newer versions of UUI to understand why cloning from a USB stick installed using UUI sometimes fails. Everything is fine as long as "Format in FAT32" is checked, which is documented but not always applied by users, so we should improve UUI, our documentation, or Tails Greeter to prevent that.

  • We continued to port our verification extension for Firefox to Web Extensions and we now have a working prototype that computes the checksum in a reasonable time (45 seconds)!

Hot topics on our help desk

  1. Tails Installer treats drives differently depending on when they are plugged

  2. Install by cloning sometimes silently fails from a stick installed with UUI

  3. A few users with NVidia graphics reported some issues.


  • We consolidated our system administration team by hiring a new member. Welcome, groente!

  • For our installation of LimeSurvey, we implemented an automatic monitoring of new upstream versions to notify system administrators when a security update is available (Git repository).


  • The proposal that we submitted to the Lush Digital Fund in May was accepted. It will fund the integration of a web-based translation platform in the build of our website and documentation.

  • We published our finances for 2016.

  • We launched our 2017 donation campaign and blogged about Many hands make Tails.


Past events

  • The Hackmitin 2017 at Rancho Electronico in Ciudad Monstruo, Mexico featured two Tails workshops.

  • Some of us attended the Reproducible Builds World summit in Berlin, Germany.

Upcoming events

Press and testimonials


All the website

  • de: 54% (2825) strings translated, 7% strings fuzzy, 48% words translated
  • fa: 40% (2107) strings translated, 9% strings fuzzy, 43% words translated
  • fr: 89% (4631) strings translated, 1% strings fuzzy, 87% words translated
  • it: 39% (2025) strings translated, 5% strings fuzzy, 34% words translated
  • pt: 24% (1290) strings translated, 9% strings fuzzy, 21% words translated

Total original words: 54735

Core pages of the website

  • de: 76% (1454) strings translated, 13% strings fuzzy, 77% words translated
  • fa: 34% (651) strings translated, 11% strings fuzzy, 35% words translated
  • fr: 98% (1870) strings translated, 0% strings fuzzy, 99% words translated
  • it: 76% (1447) strings translated, 13% strings fuzzy, 77% words translated
  • pt: 44% (850) strings translated, 15% strings fuzzy, 45% words translated

Total original words: 17292


  • Tails has been started more than 683188 times this month. This makes 22036 boots a day on average.
  • 11166 downloads of the OpenPGP signature of Tails ISO from our website.
  • 137 bug reports were received through WhisperBack.

11 November, 2017 12:34PM

hackergotchi for VyOS


1.1.8-rc1 release is available for testing

The long overdue 1.1.8 release candidate is available for download from

While a number of people have already been running 1.2.0 nightly builds in production, we do acknowledge there are people who are not in a position to install updates that are not completely stable, and the recently discovered vulnerabilities in dnsmasq that potentially allow remote code execution are impossible to ignore (unlike many older vulnerabilities that are only locally exploitable or aren't practical to exploit).

It's stable for all practical purposes, but since it includes pretty big updates and a few new features, I suppose it's better to go through the release candidate phase. If, in say a week, no one finds any serious issues, we will release it as 1.1.8.

The release is only available for 64-bit machines at the moment. We can provide a 32-bit build, but we are wondering if anyone still wants it now that even small boards have 64-bit CPUs.

You can read the full changelog here: 

Among package updates, there are openssl 1.0.2l and dnsmasq 2.72. Since squeeze is long EOL, the OpenSSL update required re-compiling everything that depends on OpenSSL ourselves, which took longer than we hoped.

Among VyOS fixes and features, there are user/password authentication for OpenVPN, as-override option for BGP neighbors, as-path-exclude option for route-map rules, tweakable pipe (buffer) size for netflow/sflow (too small hardcoded value could cause pmacct crash on high traffic routers), peer-to-peer VXLAN interfaces, and multiple fixes for bugs of varying severity, such as overly high CPU load on KVM guests or protocol negation in NAT rules not working.

A lot of features from 1.2.0 are not backportable due to big code changes and dependencies on much newer software versions than 1.1.x can provide, so the features to cherry-pick had to be chosen carefully, and even that required quite a bit of merge conflict resolution. Quite a few of them were meant for the ill-fated "lithium" release that was supposed to be named 1.2.0 and be the last squeeze-based release. Then squeeze was EOL'd; then serious life circumstances forced Alex Harpin to put all his VyOS work on hold, leaving the maintainers team even more understaffed; and then the company we started to fund VyOS development through commercial support and services went through hard times, almost reaching the point of bankruptcy and dissolution (and, since it's self-funded, its founders almost reached the point of personal bankruptcy along with it). By the time we could get things back on track, a feature release based on squeeze was no longer feasible, especially considering how much we had to change to make the old codebase run on jessie. In a sense, this is the lithium release that could have been, at least partially, rather than a straight maintenance release with nothing but bugfixes.
But, many of those features spent so much time in the limbo without making it into a release called stable that we felt compelled to include at least some of them.

I would like to say thanks to everyone who contributed and made this release possible, namely: Kim Hagen, Alex Harpin, Yuya Kusakabe, Yuriy Andamasov, Ray Soucy, Nikolay Krasnoyarski, Jason Hendry, Kevin Blackham, kouak, upa, Logan Attwood, Panagiotis Moustafellos, Thomas Courbon, and Ildar Ibragimov (hope I didn't forget anyone).

11 November, 2017 11:02AM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S10E36 – Therapeutic Devilish Birthday - Ubuntu Podcast

This week we perfect the roast potato, discuss Google Code In, bring you some GUI love and go over your feedback.

It’s Season Ten Episode Thirty-Six of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

11 November, 2017 09:05AM

David Tomaschik: Hardware Hacking, Reversing and Instrumentation: A Review

I recently attended Dr. Dmitry Nedospasov’s 4-day “Hardware Hacking, Reversing and Instrumentation” training class as part of the event in San Francisco. I learned a lot, and it was an incredibly fun class. If you understand the basics of hardware security and want to take it to the next level, this is the course for you.

The class predominantly focuses on the use of FPGAs for breaking security in hardware devices (embedded devices, microcontrollers, etc.). The advantage of FPGAs is that they can be used to implement arbitrary protocols and can operate with very high timing resolution (e.g., a single clock cycle, since it's essentially synthesized hardware).

The particular FPGA board used in this class is the Digilent Arty, based on the Xilinx Artix 7 FPGA. This board is clocked at 100 MHz, allowing 10ns resolution for high-speed protocols, timing attacks, etc. The development board contains over 33,000 logic cells with more than 20,000 LUTs and 40,000 flip-flops. (And if you don’t know what those things are, don’t worry, it’s explained in the class!) The largest project in the class only uses about 1% of the resources of this FPGA, so there’s plenty for more complex operations after the class.

Dmitry is obviously very knowledgeable as an instructor and has a very direct and hands-on style. If you’re looking for someone to spoon feed you the course material, this won’t be the course you’re looking for. If, on the other hand, you prefer to learn by doing and just need an instructor to get you started and help you if you have issues, Dmitry has the perfect teaching style for you.

You should have some knowledge of basic hardware topics before starting the class. Knowing basic logic gates (AND, OR, NAND, XOR, etc.), basic electronics (i.e., how to supply power and avoid short circuits), and being familiar with concepts like JTAG and UARTs will help. I’ve taken several other hardware security classes before (including with Joe Fitzpatrick, another of the instructors and organizers) and I found that background knowledge quite useful. If you don’t know the basics, I highly recommend taking a course like Joe’s “Applied Physical Attacks on Embedded Systems and IoT” first.

The first day of the class is mostly lecture about the architecture of FPGAs and basic Verilog. Some Verilog is written and results simulated in the Xilinx Vivado tool. Beginning with the second day, work moves to the actual FPGA, beginning with a task as “simple” as implementing a UART in hardware, then moving to using the FPGA to brute force a PIN on a microcontroller, and finally moving on to a timing attack against the microcontroller. Many of the projects are implemented with the performance-critical parts done in Verilog on the FPGA and then communicating with a Python script for logic & calculation.
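The split described above, with the performance-critical measurement done in Verilog on the FPGA and the decision logic in Python, can be sketched on the host side. The sketch below is not course material: `measure_response_ns()` is a made-up stand-in for the real FPGA measurement channel, and the early-exit PIN comparison it simulates is the classic leak such timing attacks exploit.

```python
# Hypothetical host-side half of an FPGA-assisted timing attack.
# In the real setup the FPGA reports cycle-accurate response times;
# here measure_response_ns() simulates that channel against a toy target.

SECRET = "4271"  # unknown to the attacker; used only to simulate the target

def measure_response_ns(guess: str) -> int:
    """Simulate a target that checks the PIN digit by digit and bails early.

    Each matching leading digit costs one extra 10 ns clock cycle, which is
    exactly the resolution a 100 MHz FPGA capture can observe.
    """
    t = 50  # fixed handling overhead
    for g, s in zip(guess, SECRET):
        if g != s:
            break
        t += 10  # one more cycle spent before the comparison gave up
    return t

def recover_pin(length: int = 4) -> str:
    """Recover the PIN one position at a time from timing alone."""
    pin = ""
    for pos in range(length):
        # The guess that takes longest matched one digit further than the rest.
        pin += max("0123456789",
                   key=lambda d: measure_response_ns(pin + d + "0" * (length - pos - 1)))
    return pin

print(recover_pin())  # prints 4271, recovered without ever reading SECRET
```

The point of doing the measurement on the FPGA rather than on a PC is exactly that 10 ns granularity: a single extra clock cycle in the target is invisible to an OS-scheduled host but trivial for synthesized hardware to count.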

I really enjoyed the course – it was challenging, but not defeatingly so, and I learned quite a few new things from it. This was my first exposure to FPGAs and Verilog, but I now feel I could successfully use an FPGA for a variety of projects, and look forward to finding something interesting to try with it.

11 November, 2017 08:00AM

November 10, 2017

Salih Emin: ucaresystem core 4.2.3 : One installer for Ubuntu and Debian based distributions

I am pleased to announce that ucaresystem core version 4.2.3 has been released with some cool features. Now, whether you have an Ubuntu or a Debian based distribution, you need only one deb package installer. Why bother? Here is the thing… I love creating automations (bash scripts). So until now there were 3 ways… Continue reading "ucaresystem core 4.2.3 : One installer for Ubuntu and Debian based distributions"

10 November, 2017 11:13PM

Ubuntu Insights: Ubuntu Desktop Weekly, Nov 3rd & 10th: Bionic daily images available


The big news this week is the availability of Bionic daily ISOs.

As always, we’re beavering away on the desktop:


We’ve started a conversation on the GNOME mailing list to talk about locking down extensions which are part of a “mode” (a set of compulsory extensions and other settings). Ubuntu ships a mode to implement the default session – with some extensions including the Ubuntu Dock. Should an update to one of these extensions be published elsewhere, for example on, the session will use the updated extension. It’s intended that these come from the system and are subject to the distribution’s QA processes – there is obvious potential for breaking a lot of systems if this isn’t followed. Our proposal is to always load the system installed version of a mode’s extensions. We have a proof of concept patch to enable this which is being tested.

We’ve rebased the Ubuntu dock on Bionic and uploaded it.

Fixes for blurry fonts, Xwayland crash dumps and system monitor have landed upstream:

A security fix for zesty for a lockscreen bypass issue when auto-login is enabled was released.


  • PulseAudio 11.1 has been merged from Debian for Bionic.
  • Chromium 62.0.3202.75 is published to all supported series. 62.0.3202.89 is currently building in the stage PPA. We also updated Chromium beta to 63.0.3239.30 in PPA and snap beta channel.
  • The LibreOffice stable channel snap is updated to 5.4.2.


We’ve created a snapd interface to allow access to GNOME Online Accounts, and this will allow us to package confined apps which need to access G-O-A. This is still in testing, but we will make an announcement when it’s ready for user testing.

In The News and On The Hub

  • There is a lively discussion about Tracker on the hub
  • The Free Software Showcase (aka The Wallpaper Competition) is discussed.
  • A community led theme for 18.04 is kicked off.
  • OMG covers colour emoji support.

Some news from last week as well:

Friday 3rd November 2017

A fairly short update this week, as we’ve been doing more tidying up in preparation for starting the 18.04 cycle. We had a mini-sprint in London to make sure our Trello board for the 18.04 cycle included everything we needed it to. You can view the board here.  And you can see there are already a lot of tasks on there!

Some brief desktop updates from the last week:

  • We contributed to resolving a touchpad responsiveness issue:
  • Fixed a bug in system-monitor to improve the accuracy:
  • Jeremy has synced Cairo and merged fontconfig to start bringing colour emoji support to Bionic!

Snap Updates

We found and fixed a bug with desktop theme support in snaps on 17.10 and made some other improvements to the desktop helpers so that XDG directories appear correctly in the file picker, and we backported some Artful fixes to the GNOME 3.26 platform snap.


  • Chromium 62.0.3202.62 is published for all supported releases, and 62.0.3202.75 is ready for release very shortly. The Chromium snap stable channel is now also at 62.0.3202.75 and 63.0.3239.18 is available in the beta channel for testing.
  • LibreOffice 5.4.2 will be promoted to the stable channel soon.

In The News


10 November, 2017 03:54PM

hackergotchi for Deepin


Deepin Picker V1.0 is Released —— So Easy To Pick Color from Screen

Deepin Picker is a fast screen color picking tool developed by Deepin Technology. The RGB, RGBA, HEX, CMYK and HSV codes are obtained from the picked color and automatically saved to the clipboard. The color picking area works like a magnifier: zoom in, move it to capture the screen in real time, and right-click to view and switch the color code format. After a color code format is selected, the value of the current area is obtained with one click and automatically copied to the clipboard. Welcome to use Deepin Picker V1.0 by upgrading the system or ...Read more
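All of the codes the picker reports are derived from the same sampled RGB value. As a rough illustration (this is not Deepin's actual code, just the standard conversion formulas), turning one picked RGB triple into HEX and CMYK looks like:

```python
def rgb_to_hex(r, g, b):
    """Format an 8-bit RGB triple as the familiar #RRGGBB code."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to CMYK fractions, deriving the key (black) first."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)
    rf, gf, bf = r / 255, g / 255, b / 255
    k = 1 - max(rf, gf, bf)
    c = (1 - rf - k) / (1 - k)
    m = (1 - gf - k) / (1 - k)
    y = (1 - bf - k) / (1 - k)
    return tuple(round(v, 2) for v in (c, m, y, k))

print(rgb_to_hex(46, 179, 152))   # prints #2EB398
print(rgb_to_cmyk(46, 179, 152))
```

The HSV code could be produced the same way with Python's standard colorsys module (colorsys.rgb_to_hsv).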

10 November, 2017 09:01AM by admin

Deepin Clone V1.0 is Released——Backup and Restore, Secure and Reliable

A new member joins the Deepin family! Deepin Clone is a backup and restore tool developed by Deepin Technology. It supports cloning, backing up and restoring disks or partitions, and works with Deepin Recovery to fix the boot loader, partitions and so on.   Disk and Partition Mutually Independent, No Operation Interference: start Deepin Clone and you can freely select whether to operate on a whole disk or on a single partition; it will remember your selection for next time.   Backup and Restore Partitions, Operate Quickly as Needed: important files or data in the system are generally stored in a specific partition, then ...Read more

10 November, 2017 08:43AM by melodyzou

hackergotchi for Univention Corporate Server

Univention Corporate Server

How to Integrate SAML Single Sign-On in ownCloud App

If you need to use various online services, which is by the way the norm, there’s nothing more convenient than using single sign-on (SSO). SSO allows you to log in to all available services in a domain with one password only. UCS has provided this feature via the SAML Identity Provider since UCS 4.1.

We chose to implement SAML as the first single sign-on technology in UCS because of its popularity in the enterprise sector, its high degree of security, and the positive experiences we ourselves had had with SAML in the years before. Since then, a lot of services and Univention Apps already provide a SAML service provider. Now, we are working on integrating these into the UCS Identity Provider.

Today, we would like to describe the configuration of the SAML integration that is required for the ownCloud Univention App. If you are absolutely new to SAML single sign-on, we suggest reading our article Brief Introduction to Single Sign-On first. It will give you a general understanding of the SSO concept.

This SAML integration for ownCloud was realized during one of our internal Univention hackathons, where some of us meet regularly to give exciting ideas and projects around UCS and UCS@school a go. By the way, many valuable apps, concepts and product features have already emerged from these hackathons.

So, how does the SAML integration for ownCloud work and what do I have to do?

Configuration of the SAML integration for ownCloud

For the integration we prepared a Debian package, which does all the required configuration steps when it gets installed. Basically, you only need a UCS server, which has the ownCloud app installed from the Univention App Center.

The configuration of the ownCloud SAML service provider we provide is based on the official ownCloud instructions which are using the Mod Shibboleth (mod_shib) module of the Apache HTTP server.

After the package is installed, another link is added to the portal which provides the login via SAML. Note that the regular login, which uses LDAP authentication, is still usable as a fallback and alternative.

Preconditions to observe

Please observe what is needed before the package can be installed:

  • The ownCloud-App is installed on the UCS system.
  • Either ownCloud Enterprise or a 30-day evaluation copy of ownCloud is activated. The activation happens in two steps:
    • Enter your key: Login → Start menu → Market (direct link: /owncloud/index.php/apps/market/) → Add API Key → Save → Close

What happens during installation?

On installation of the Debian package, the following steps are executed:

  • Installation of the ownCloud SAML-App.
  • Activation and configuration of the ownCloud SAML-App.
  • Set up the Apache configuration for mod_shib in the Docker container of ownCloud.
  • Set up of an Apache reverse proxy rule for single sign-on on the host system(s).
  • Set up of a portal entry for the single sign-on to ownCloud.

Needed steps for operation

To put the whole setup into operation, the following steps are necessary:

  • If applicable, set the UCR variable owncloud/saml/path (default: /oc-shib) which defines where ownCloud is available via SAML.
  • For the installation of the Debian package there are two possibilities:
    1. Either download and install the package
      • Download the package from github

        root@ucs# wget

      • Install the package via dpkg

        root@ucs# dpkg -i univention-owncloud-saml_1.0-0.deb

    2. Or clone the git repository, build and install the package
      • Clone the git from github:

        root@ucs# univention-install git dpkg-dev debhelper univention-config-dev ucslint-univention
        root@ucs# git clone

      • Build the package:

        root@ucs# cd univention-owncloud-saml/; dpkg-buildpackage

      • Install the package via dpkg

        root@ucs# cd ..; dpkg -i univention-owncloud-saml_1.0-0.deb

  • Ensure that the joinscript was successfully executed via univention-check-join-status
  • Create an ownCloud user via UMC
  • Activate the ownCloud user for the SAML service provider via [Account] → [SAML settings]
  • Navigate to the portal site and log in using the new user
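For reference, changing where ownCloud is reachable for SAML (the optional first step above) is a single UCR call. The path value below is only an example; the default stays /oc-shib:

        root@ucs# ucr set owncloud/saml/path=/cloud-sso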


  • The changes to the file /root/owncloud/subpath.conf in the Docker container of the ownCloud app aren’t yet preserved across updates of the app. Therefore the join script (40univention-owncloud-saml.inst) must be executed again after each update of the ownCloud App.
  • The SAML service provider metadata is available via https://$fqdn//Shibboleth.sso/Metadata. For debugging purposes there is also https://$fqdn//Shibboleth.sso/Session, which shows information about the currently logged-in user.

If you have further questions, please let us know. Either comment below or ask us via the Univention forum.

We are looking forward to your feedback!


Der Beitrag How to Integrate SAML Single Sign-On in ownCloud App erschien zuerst auf Univention.

10 November, 2017 08:41AM by Florian Best